However, the idea of AI as a creator of art is far from settled. The use of automated processes to produce static and moving images on demand has drawn significant backlash from the creative industries, particularly over these programs' need to be 'trained' on the work of human artists, who often do not stand to gain from the technology's success. In this month's deep dive, we take a look at the brewing conflict in the art world and what it means for the field of IP law.
What Is Behind the Technology?
AI has been a staple of futurists’ visions for the evolution of the working world for decades. Whether the form it takes is the predictive text function of a smartphone or an ‘autopilot’ function in a smart car, companies the world over have dedicated R&D wings tasked with the creation and implementation of AI in these products for a more efficient, less taxing user experience.
When it comes to the newly emergent software for the creation of images, the driving force is 'generative' AI. This refers to a class of algorithms that produce new content based on patterns learned from training data. 'Outputs' can range from plain text to illustrations, captions, essays, emails and computer code, among many other content types. This form of algorithm includes the likes of OpenAI's ChatGPT, a tool unveiled in 2022 that shocked the world – and the legal sector, as in-house counsel wondered at its potential to create legal documents on demand.
In the art world, generative AI image creators of note include Midjourney, DALL-E 2 by OpenAI and Dream by WOMBO. All are simple to operate, often requiring only a text prompt of varying specificity in order to produce a selection of images on demand. This ease of use has been one of the most important factors in the explosion of popularity that AI art experienced in the latter half of 2022 and which continues today.
Where Is the Conflict?
The above programs did not become able to create art spontaneously. In each instance, their algorithms were ‘trained’ on millions of pre-existing images, establishing patterns that can later be reproduced should a user input an associated keyword. These images are ‘scraped’ from the internet and public data sets such as free-to-use art sites, often without the original creator’s consent or credit.
In many cases, generative AI can even be used to directly emulate the style of an established artist should the user request it. The backlash to this from some quarters has been extreme. Popular internet content creators Corridor Digital drew condemnation after producing an AI-created video trained on the style of the animated film Vampire Hunter D, which some viewers branded as "theft" of the source material.
Others have characterised the emergence of trained AI as an encroachment of corporate management upon the art world. This was the stance of Swedish painter Simon Stålenhag, whose work was mimicked by a Midjourney user in a series of viral Tweets that have since been deleted. "It basically takes lifetimes of work by artists, without consent, and uses that data as the core ingredient in a new type of pastry that it can sell at a profit with the sole aim of enriching a bunch of yacht owners," he said of generative AI.
What Have Been the Legal Ramifications?
Popular criticisms of AI art tend to focus on its quality or its derivation from existing sources. However, alleged IP violations by the creators of generative AI could carry far greater consequences for the nascent technology, as has been demonstrated by stock photography company Getty Images and its lawsuit against tech start-up Stability AI. In the suit, Getty Images claimed that Stability AI copied more than 12 million images from its database "without permission ... or compensation ... as part of its efforts to build a competing business". In training its Stable Diffusion generative tool on the images, Getty claims, the firm infringed on both its copyright and trade mark protections.
A similar lawsuit has been filed by a trio of American visual artists against Stability AI, DeviantArt and Midjourney, alleging that the Stable Diffusion tool used by all three companies was trained on their copyrighted images. Stability AI have been sanguine about these lawsuits so far, commenting in a statement to the media: “Anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”
Observers in the legal sector have commented that Getty’s case against Stability AI is on sturdier footing than the artists’ class action lawsuit, but both cases exist in territory that has yet to be tested in court. Whatever the outcome, these cases will likely set a precedent for a good deal of legal action involving generative AI yet to come.
Where Will This Lead?
Currently, algorithmically generated art is far from perfect. Many generated images – especially those attempting to replicate detailed anatomy – are rife with uncanny errors that betray their AI authorship. However, the technology in use is constantly evolving, and projections are emerging that its growing ease and accessibility will reduce demand for human-created art. Even if these projections prove accurate, the new wave of art in business will nonetheless derive, in the first instance, from the work of human artists.
Unlike the 2021 NFT bubble, AI development is more than a passing tech trend. Microsoft, Meta and Google have all begun investing heavily in generative AI technology, and further clashes between IP holders and AI creators are highly likely to follow unless methods of training these models that do not rely on image-scraping are pioneered. The IP world would be wise to watch the outcomes of the Stability AI lawsuits closely.