
Information Today
Vol. 40 No. 3 — April 2023
AI ETHICIST
Insights on Content

Artificial Intelligence Versus Artists?
by Kashyap Kompella

Generative AI is seeing a burst of innovation, interest, and investment. This type of AI takes simple text in everyday language as input and produces output in different forms such as text, images, video, audio, and 3D models. ChatGPT—currently creating a lot of buzz (and controversy)—is an example of a text-to-text generative AI tool. GitHub Copilot is a tool that can produce software code based on natural language inputs. DALL•E 2, Midjourney, and Stable Diffusion are examples of text-to-image AI. With the right text inputs, or prompts, these image generators can produce impressive images. In fact, one such image even won first prize at the Colorado State Fair’s art competition last year. Artists using software tools as aids in their creative pursuits is nothing new, but generative AI tools are challenging some of our notions about art—starting with whether such computer-generated images can really be called art.

ART AND ARTIST

For many, the art and the artist are inseparable; the art piece is a unique expression of the artist’s worldview, based on their lived experiences. Generative AI is challenging such views. It generates images based on the text labels and other metadata associated with the millions of images in the training corpus. After all, a lot of what the AI tools are doing is figuring out which text labels correspond to which images and styles of painting and spewing out a remixed version.

Now, even those without any training in art can create (passable) images. In this context, generative AI raises a swirl of practical and philosophical questions. What does this mean for the future of artists and even the creative professions, given that text-to-music and text-to-video synthesizers are not far behind? Who is responsible for the problems caused when it becomes easy to generate harmful content? What, if any, guardrails are needed for the responsible use and sale of generative AI? Is it even possible to build guardrails that can’t be easily broken?

LEGALITY OF USAGE

There are several complex ethical and legal issues surrounding generative AI, and, not surprisingly, there are debates about the legality of the input data usage in training the AI models. Popular image generators (such as Stable Diffusion and Midjourney) have been trained on datasets with billions of images scraped from the web. Can the AI vendors scrape publicly available images to train their models? Is the consent of the artists required? Do the content owners and artists have to be compensated when their works are used for training? Is it practically feasible to seek consent from and compensate millions of creators?

In the U.S., the courts have generally considered scraping of web content to be OK (note: This is not legal advice!) and have held that it falls within the purview of the fair use doctrine of copyright law. So, can it be argued that training generative AI models on publicly available images constitutes fair use?

Another criterion that helps decide whether there is copyright infringement is whether the output work is transformative. This leads to a conversation about what, exactly, is happening under the hood of the AI models. Are they to be considered just image collage tools, storing the images in their database, albeit in a different representation than the original? Does this constitute significant transformation? What about when exact replicas of existing images are generated? Do style transfer applications (i.e., generating images in the particular style of an artist without involving the artist at all in the process) dilute the artist’s brand and reputation and rob them of their right to earn a livelihood? On the flip side, don’t so many artists imitate the great masters?

IN THE COURTS

These are not academic or hypothetical questions; they are actually being litigated in the courts. Stock photo vendor Getty Images is suing Stability AI for copyright infringement, claiming that its copyrighted images were used without permission or compensation. Although it does not involve an image generator, there is also a suit against GitHub Copilot for copyright violations. Similarly, there is another suit against image-generator app vendors Stability AI, Midjourney, and DeviantArt.

Laws and regulations—and even our cultural norms and expectations—can lag behind technology. As AI proliferates, we must codify the new rules of engagement. Billions of dollars are being invested into generative AI, and the outcomes of these litigations will have a significant bearing on the future trajectory of AI.

KASHYAP KOMPELLA is an award-winning industry analyst, a bestselling author, an educator, and an AI advisor to leading companies and startups in the U.S., Europe, and the Asia-Pacific region. Find out more on LinkedIn (linkedin.com/in/kashyapkompella). Send your comments about this column to itletters@infotoday.com or tweet us (@ITINewsBreaks).