Startup debuts new AI models trained on Getty Images and other content as copyright concerns loom
Generative AI may still be in legal limbo, but one startup’s platform aims to solve some key challenges facing companies, artists and researchers when it comes to AI-generated visual content.
Bria AI, an Israel-based AI image generator, has created new foundation AI models trained with licensed content from stock image powerhouse Getty Images and content marketplaces such as Alamy and Envato. While giants like OpenAI, Google and Microsoft face legal battles over whether their AI platforms were created with permissible content, Bria says it’s taking a “responsible” approach by only using permissible content from the start. Getty Images, which became a minority investor in Bria last fall, collaborated on the licensing deal, but the AI model was trained by Bria as a proprietary product of the startup. Revenue from the text-to-image platform and other tools is distributed equally among Bria and the various data owners, content generators and creators.
Along with the new foundation models released today, Bria also developed an attribution model that can help researchers see how data sets — in this case, specific photos — influence an AI model. Bria also plans to use that same technology to compensate creators when the AI platform generates images based on their photos. According to Bria co-founder and CEO Yair Adato, the payment model is similar to Spotify’s, which doles out nano payments to artists based on music streams. He also noted that Bria mitigates harmful content by blocking users from creating images unless the images already exist in the data set.
“That attribution model is per impact,” Adato said. “It’s solving the copyright problem and the explainability problem because it’s [showing AI and human images] back-to-back. It’s solving the privacy problem… We solved the problem from the roots.”
To show how the Attribution Simulator works, Adato gave Digiday a brief demo by entering the prompt “ocean sun,” which generated an image and served up 621 visuals that would be rewarded. He then generated a rose by entering “red rose,” which showed even more training images, but typing “blue rose” narrowed the compensation down to just 100 images. Adato did not say exactly how much artists would be compensated for their images. However, the company likened the structure to an agent fee system, with the AI model acting as an agent for the content creator through the platform.
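Bria hasn’t published its payout math, but the per-impact scheme Adato describes — splitting a generation’s revenue across the training images the attribution model credits — could be sketched roughly like this (the function, image IDs and scores are illustrative, not Bria’s actual system):

```python
def split_payout(revenue: float, influence: dict[str, float]) -> dict[str, float]:
    """Divide `revenue` among training images in proportion to each
    image's attribution (influence) score on the generated output."""
    total = sum(influence.values())
    if total == 0:
        # No attributed images: nothing to pay out.
        return {image_id: 0.0 for image_id in influence}
    return {image_id: revenue * score / total
            for image_id, score in influence.items()}

# Example: a narrow prompt like "blue rose" credits a small set of images.
scores = {"rose_001": 0.5, "rose_002": 0.3, "rose_003": 0.2}
payouts = split_payout(1.00, scores)  # $1.00 of generation revenue, split pro-rata
```

The key property is that payouts scale with measured influence — the fewer training images a prompt draws on, the larger each creator’s share of that generation’s revenue.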
How generative AI is prompting more legal challenges
The updates come as AI-generated content comes under increased legal and regulatory scrutiny. On Tuesday, a second class-action lawsuit filed against OpenAI and Microsoft alleged the companies broke privacy laws. On Thursday, Microsoft announced it would legally defend commercial users of its Copilot product if customers are sued for copyright infringement related to their use of Microsoft’s AI tools. And just last week, the U.S. Copyright Office published a lengthy notice in the Federal Register opening a public commentary period for anyone who wants to address AI’s impact on intellectual property.
All this legal gray area has left many marketers worried about the risks. Meanwhile, companies like Adobe and Shutterstock — both of which have been building out new generative AI tools — are seeking to quell those fears by offering users indemnity if they’re hit with lawsuits over content from their platforms.
The evolving landscape of international laws around generative AI adds another layer of complexity. Edward Klaris, a managing partner at Klaris Law, said Bria’s solution “makes sense from an ethical kind of view, a legal point of view, an international point of view and a business point of view.”
“People are going to want to be like Apple after Napster and be compliant and not wild-westing the situation,” Klaris said.
Bria’s partnership with Getty Images is noteworthy for another reason. In February, the stock image giant filed a lawsuit against Stability AI — the startup behind Stable Diffusion — after the popular AI platform generated images showing Getty’s well-known watermark. Then in June, Getty filed for an injunction in London’s High Court asking it to stop Stability from selling its platform in the United Kingdom.
Getty Images declined Digiday’s request for an interview about its partnership with Bria and its other actions related to generative AI. (Getty Images also hasn’t disclosed the size of its investment in Bria, but a “purchase of a minority investment” reported during its third-quarter 2022 earnings was listed as $2 million.)
Even as Bria, Adobe and Shutterstock look for ways to pay artists in the AI era, others say there’s a more macro-level problem that also needs to be solved: How should society pay for ideas that aren’t always easy to value? That question was posed by George Strakhov, chief strategy officer of DDB EMEA, who helps lead DDB’s AI efforts. He applauded Bria’s efforts as a “great development,” but also said it’s a “bit of a shim.”
“If you zoom out a bit, the real problem is not actually the AI model situation,” Strakhov said. “If you grow apples or if you do live music, it’s easy because I can buy this from you and you don’t have it anymore and the transaction is clear. If you’ve done something wonderful and then humans or AIs can sort of take it and you don’t have less of it, then we don’t have a good model for compensating that.”