Meta to label all AI-generated images
Do images like the one below make you think it's OK to hug a polar bear? (Don't do it; it will probably rip your face off.) Or will you buy a specific band-aid product because Tom Cruise uses them? (It's not actually him endorsing band-aids.) Or are you amazed that someone has visibly lost a lot of body weight after drinking magic tea for a week?
While you'd think that the majority of people have common sense, it appears that common sense is not really that common. So, in an attempt to clamp down on visual tomfoolery, Meta, the owner of Facebook, Instagram, and Threads, has announced it will label AI-generated images in the coming months.
In a statement, the company says it is currently building the tools to detect AI-generated images, and the labels will be applied in all languages supported on its platforms.
"As the difference between human and synthetic content gets blurred, people want to know where the boundary lies, the statement says. "People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology."
The company noted it is hoping to roll out the feature soon, as a number of 'important elections' are set to take place around the world.
“During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward,” the statement said.
Currently, images created with Meta's own AI tool, Meta AI, already include an “Imagined with AI” label when published, but until now this hasn't extended to AI-generated work created with other tools.
According to Meta, the new labels will appear “when we can detect industry standard indicators that they are AI-generated,” something the company says it is working with 'industry partners' to develop.
The company says it is also building tools to identify invisible markers at scale, such as the watermarks embedded in many AI creations: specifically, the “AI generated” information in the C2PA and IPTC technical standards that many AI companies adhere to.
"So we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools," Meta says.
However, Meta admits it is still limited in what it can do with video and audio.
"While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies," the company says.
"While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it."
However, as Meta acknowledges, it’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers.