Information Today
Vol. 40 No. 4 — May 2023
LEGAL ISSUES
The Intelligence of Artificial Intelligence
by George H. Pike

The Warhol vs Goldsmith lawsuit revolves around a photograph taken by Lynn Goldsmith of the musician Prince in 1981. The photograph was taken for a magazine cover, and Goldsmith licensed the photograph for use in various publications over the years.

In 1984, Andy Warhol created a series of artworks based on the photograph without obtaining Goldsmith’s permission. The artworks were created using various techniques, including silk-screening, and featured different colors and backgrounds.

SO FAR, SO GOOD …

After Warhol’s death in 1987, his estate continued to produce and sell the Prince artworks. In 2016, Goldsmith filed a lawsuit against the Andy Warhol Foundation for the Visual Arts, arguing that Warhol’s use of her photograph constituted copyright infringement.

The foundation argued that Warhol’s works were transformative and therefore fell under fair use. They also claimed that Goldsmith had waited too long to bring her lawsuit and that the statute of limitations had expired.

NOT SO MUCH …

However, in March 2019, a federal judge ruled in favor of Goldsmith, stating that Warhol’s use of her photograph did not constitute fair use. The judge also rejected the foundation’s argument about the statute of limitations and allowed the case to proceed.

The case was eventually settled out of court in 2020, with the foundation agreeing to pay Goldsmith an undisclosed sum and to acknowledge her as the photographer of the original image. The settlement also included a joint statement from both parties acknowledging the importance of copyright protection for artists and photographers.

HOLD IT …

Surprise! The first six paragraphs of this column were written by ChatGPT in response to my prompt, “Discuss the Warhol vs. Goldsmith lawsuit.” The case involves The Andy Warhol Foundation for the Visual Arts (the legal owner of the works of the late Andy Warhol), celebrity photographer Lynn Goldsmith, and a series of Warhol silkscreens of rock star Prince, based on a photograph taken by Goldsmith. The case raises questions about the extent of the transformative test within copyright’s Fair Use doctrine, which allows new creative expressions to be developed from existing creative works. The case is currently before the U.S. Supreme Court.

A decision in the case has not been made as of this writing, but one may be released by the time this column is published. However, this column is less about the Warhol Foundation v. Goldsmith case than it is about OpenAI's ChatGPT chatbot, which launched in November 2022 and has taken the writing, academic, and creative worlds by storm ever since. Within 2 months, more than 100 million users had registered.

HUMAN TRAINING

Like other AI platforms, ChatGPT is built on a massive database of input, but it was also trained through both software and human feedback. The human training was critical to creating the conversational model of communication that ChatGPT uses. Human training continues as ChatGPT further develops, including the ability for users to provide feedback by upvoting or downvoting ChatGPT's responses and offering narrative feedback.

Since its release, a number of communities have both embraced and expressed concern about ChatGPT. At a recent faculty meeting at my school, the consensus was that ChatGPT could probably write a decent ("maybe a B") law school exam answer. Outside of the exam context, is ChatGPT a tool that can assist students in producing the wide range of documents that accompany law school and a practice career, or is it a crutch that may ultimately replace lawyers, with AI chatbots handling the practice of law? So far, I'm not quite buying into its promise. The first four paragraphs of this column are reasonably accurate, if not detailed. I might have chosen other language or highlighted other pieces of the story, but I would grade ChatGPT's response a B.

However, the last two paragraphs get an F. There was no decision in March 2019; it occurred in July and was in favor of Warhol, finding that Warhol’s use of the photographs did constitute fair use. There was also no out-of-court settlement; Goldsmith appealed the trial court’s decision and got the appeals court to reverse the finding of fair use in 2021. The Andy Warhol Foundation for the Visual Arts appealed to the U.S. Supreme Court, which is where we find ourselves now.

AN AI CONVERSATION

Similar to my experiment, a colleague of mine at the University of Idaho Law School (my law school alma mater) recently posted a “conversation” she had with ChatGPT in which she tried to find out why it had given one of her colleagues an erroneous legal citation. The citation provided didn’t exist, and she asked, “How did you come up with an erroneous citation?” ChatGPT replied with a different citation that was equally wrong and “apologized” for its errors. The exchange went back and forth for several pages of text, each time generating incorrect or nonexistent citations, with ChatGPT apologizing for “confusion,” “inconvenience,” and “frustration.”

But a separate prompt that I submitted to ChatGPT asking for the best way to protect trade secrets resulted in a solid B to B+ answer across the board. Would it have met law school or lawyer standards? No, but that's not what ChatGPT claims to do. It gave a solid—moderately broad, but generally correct—answer, at least enough of an answer to take a researcher to the next level.

LIMITATIONS

The quip that I want to throw out here is that for an artificially intelligent system, ChatGPT isn’t very intelligent if it can’t get the facts or authorities right. But the issue goes further than that. ChatGPT acknowledges its limitations, that it, as its website states, “[m]ay occasionally generate incorrect information [and] produce harmful instructions or biased content” and has “limited knowledge of world … events after 2021.” It’s also relatively new, having just been out a few months. And in responses not requiring specific facts or authorities, it seems to do a pretty good job.

All of the issues that my colleague and I encountered could be considered to be within the listed limitations, but shouldn’t it go deeper than that? For all of the hype and the legitimate capability of ChatGPT and similar AI chatbots, they don’t strike me as models that can support the concept of artificial intelligence. ChatGPT may be able to “create,” but I wonder about its ability to recognize and critique its creations. It can admit its mistakes, but it does not appear to recognize them. In the absence of these functions, it’s fair to ask whether ChatGPT’s output constitutes actual intelligence.


George H. Pike is the director of the Pritzker Legal Research Center at Northwestern University School of Law. Send your comments about this article to itletters@infotoday.com or tweet us (@ITINewsBreaks).