Peters says the creators of the images – and everyone who appears in them – consented to their art being used in the AI model. Getty also offers a Spotify-style compensation model to creatives for use of their work.
The fact that creatives are being paid this way is good news, says Jia Wang, an assistant professor at Durham University in the United Kingdom who specializes in AI and intellectual property law. But it could be difficult to trace which source images contributed to a given AI-generated image, and therefore who should be paid for what, she adds.
Getty’s model is only trained on the company’s creative content, so it doesn’t include images of real people or places that could be manipulated into deepfake images.
“The service doesn’t know who the pope is or what Balenciaga is, and it can’t combine the two. It doesn’t know what the Pentagon is, and you won’t be able to blow it up,” Peters says, referring to recent viral images created by generative AI models.
As a demonstration, Peters types a prompt asking for the President of the United States, and the AI model generates images of men and women of different ethnicities wearing suits and standing in front of the American flag.
Tech companies argue that AI models are complex and cannot be built without copyrighted content, and they point out that artists can opt out of having their work used for training. Peters calls these arguments “bullshit.”
“I think there are some really sincere people who really think about this,” he says. “But I also think there are hooligans who just want to get in on this gold rush.”