The short answer would be yes, at least according to Rod Sims, the former chairman of the Australian Competition and Consumer Commission (ACCC).
Sims is one of the main brains behind the news media bargaining code that saw tech giants Google and Facebook pay online media publishers for accessing their content.
Sims’ view stems from the fact that generative artificial intelligence (AI) tools such as Google’s Bard and Microsoft-backed OpenAI’s ChatGPT were trained on data harvested online.
These data sets most likely include news articles that are readily available online but are nevertheless written by journalists and paid for by media companies.
“If media companies are having their content out in the public but not getting compensated for it, you are under-provisioning for journalism and that’s bad for society. We don’t want anything that sees journalism getting unrewarded for what they’re doing,” Sims said in an article by the Australian Financial Review.
For now, the news media bargaining code that Sims spearheaded doesn’t include AI companies. However, he said it shouldn’t be too difficult to renegotiate the deals as the bargaining code includes a provision that allows new companies to be added.
“These [AI] players weren’t around when we did the news media bargaining code, but I think it’s a big issue that will arise firstly as the deals in Australia get renegotiated,” Sims said.
“I don’t think that’s going to be that hard a hurdle to jump. There’ll only be two or three of these companies doing ChatGPT things, so I think they can easily be included,” Sims said.
Ongoing lawsuits
Sims is not the first person to have voiced these concerns.
Digital artists are concerned about Midjourney and Dall-E – two AI-powered tools capable of generating images from textual descriptions.
Like ChatGPT, both tools were trained on a massive dataset of images and their corresponding textual descriptions, allowing them to learn how to generate novel images based on natural language prompts.
Dall-E is owned by OpenAI, the same company that runs ChatGPT.
Earlier this year, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, filed a class-action lawsuit against Midjourney, art website DeviantArt, and Stability AI, the company that created Stable Diffusion, a deep learning, text-to-image model released in 2022.
The lawsuit described Stable Diffusion as “a 21st-century collage tool that remixes the copyrighted works of millions.”
“Stable Diffusion contains unauthorized copies of millions—and possibly billions—of copyrighted images. These copies were made without the knowledge or consent of the artists,” said Matthew Butterick, a writer, designer, and lawyer involved in the case.
Meanwhile, Getty Images, the stock photo group, has also started legal action in the UK courts. The group owns 135 million copyrighted images, which it claims Stability AI used to train its AI tool.
“Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights,” Getty said in a press release.
“Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long‑standing legal protections in pursuit of their stand‑alone commercial interests,” they said.
The outcome of these lawsuits will shape how generative AI is used in the future. But for now, advocates are calling for laws that protect the rights of content creators.
“When making determinations about AI policies, it is vital for policymakers and stakeholders to understand that any new laws and policies relating to AI must be based on a foundation that preserves the integrity of the rights of copyright owners and their licensing markets,” said the Copyright Alliance, a group representing the interests of content creators across all media.
“The interests of those using copyrighted materials to train AI must not be prioritized over the rights and interests of creators and copyright owners,” they said.