Meta to add "AI generated" labels to images created with OpenAI, Midjourney and other tools

New York CNN  — 

Meta says it’s working to identify and label AI-generated images shared on its platforms that were created by third-party tools, as the company prepares for the 2024 election season amid a proliferation of artificial intelligence tools that threaten to muddy the information ecosystem.

In the coming months, Meta will start adding “AI generated” labels to images created by tools from Google, Microsoft, OpenAI, Adobe, Midjourney and Shutterstock, Meta Global Affairs President Nick Clegg said in a blog post Tuesday. Meta already applies a similar “imagined with AI” label to photorealistic images created with its own AI generator tool.

Clegg said Meta is working with other leading firms developing artificial intelligence tools to implement common technical standards — essentially, certain invisible metadata or watermarks stored within images — that will allow its systems to identify AI-generated images made with their tools.
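These standards generally work by embedding provenance information, such as an IPTC “digital source type” field or a C2PA Content Credentials manifest, inside the image file itself. As a rough illustration only, and not a description of Meta’s actual detection pipeline, a minimal check for one such marker might look like the following sketch, which assumes the metadata is present as plain XMP text in the file:

```python
# Minimal sketch: scan a file's embedded XMP metadata for an AI-provenance
# marker. Illustrative only - real systems parse the metadata properly and
# verify cryptographic signatures (e.g. C2PA manifests) rather than doing
# a byte search.

def looks_ai_generated(path: str) -> bool:
    """Return True if the file appears to carry an AI-provenance marker."""
    with open(path, "rb") as f:
        data = f.read()
    markers = [
        b"trainedAlgorithmicMedia",               # IPTC value for AI-generated media
        b"compositeWithTrainedAlgorithmicMedia",  # AI-edited composite
    ]
    return any(m in data for m in markers)


if __name__ == "__main__":
    import sys
    print(looks_ai_generated(sys.argv[1]))
```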

Meta’s labels will roll out across Facebook, Instagram and Threads in multiple languages.

Meta’s announcement comes as online information experts, lawmakers and even some tech executives raise alarms that new AI tools capable of producing realistic images — paired with social media’s ability to rapidly disseminate content — risk spreading false information that could mislead voters ahead of 2024 elections in the United States and dozens of other countries.


It also comes a day after Meta’s own Oversight Board slammed the company’s “incoherent” manipulated media policy in a decision related to an altered video of US President Joe Biden. Biden’s presidential campaign on Monday called the policy “nonsensical and dangerous,” in a statement to CNN responding to the Oversight Board’s findings. Meta said Monday it would review the board’s recommendations and respond within 60 days.

On Tuesday, Clegg acknowledged the importance of clearly labeling AI-generated imagery for users.

“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology,” Clegg said in the post.

“We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” he said. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.”

The new industry-standard markers that will let Meta label AI-generated images are not yet embedded in video and audio generated by artificial intelligence.

For now, Meta says it is implementing a feature that will let users disclose when the video or audio content they’re sharing was generated by AI. Users will be required to apply the disclosure to realistic video or audio that was “digitally created or altered” and may face penalties if they don’t, Clegg said.

He added that if a digitally created or altered image, video or sound “creates a particularly high risk of materially deceiving the public on a matter of importance,” the company may add a more prominent label.

Meta is also working to prevent users from stripping out the invisible watermarks from AI-generated images, Clegg said.

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead. People and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards,” he said. “It’s important people consider several things when determining if content has been created by AI, like checking whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural.”

Separately, Meta also announced Tuesday an expansion of an anti-sextortion tool it has backed from the National Center for Missing & Exploited Children called “Take It Down.” The tool lets teens or parents securely create a unique identifier for intimate images they’re worried may be spreading online, which makes it possible for platforms like Meta’s to identify and remove those images.
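In practice, that unique identifier is a hash computed on the user’s own device, so the image itself never has to be uploaded; only the hash is shared with participating platforms for matching. As a minimal sketch of the idea, using a plain cryptographic hash rather than whatever fingerprinting scheme the service actually employs:

```python
import hashlib


def image_fingerprint(path: str) -> str:
    """Compute a hash of an image file locally.

    Only this hex digest would be shared with a matching service, never the
    image itself. Illustrative only - not Take It Down's actual scheme.
    """
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()
```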

“Take It Down” launched last year in English and Spanish and will now expand to 25 languages and additional countries, Meta said in a blog post.

The “Take It Down” announcement comes after Meta CEO Mark Zuckerberg, along with other social media company leaders, was grilled in a Senate hearing last week about the company’s protections for young users.
