Meta to Label AI-Generated Images on Facebook, Instagram, and Threads

Liem Soeng

Meta has announced that it will begin labelling artificial intelligence (AI)-generated images across all of its platforms, including Facebook, Instagram, and Threads. The announcement, made on February 6, came just a day after the company’s Oversight Board urged Meta to revise its policy on AI-generated content and focus on preventing the harm such content may cause, in response to a complaint over a digitally altered video of US President Joe Biden that had surfaced online. Meta said that while it already labels photorealistic images created by its own AI models, it will now work with other companies to label all AI-generated images shared on its platforms.

In a newsroom post on Tuesday, Meta’s President of Global Affairs, Nick Clegg, underlined the need to label AI-generated content to protect users and curb disinformation, and said the company has already started working with industry players on a solution. “We’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI,” he wrote. The social media giant also revealed that it can currently label images generated by tools from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Images created by Meta’s own AI models are already labelled “Imagined with AI”.

To correctly identify AI-generated images, detection tools need a common identifier present in all such images. Many AI firms have begun adding invisible watermarks and embedding information in image metadata to make it apparent that a picture was not created or captured by a human. Meta said it is able to detect AI images from the companies listed above because they follow industry-approved technical standards.
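The metadata side of this approach can be illustrated with a short Python sketch. This is not Meta’s actual detection pipeline; the function and sample bytes below are hypothetical, though “trainedAlgorithmicMedia” is the IPTC digital-source-type term that industry tools have adopted to mark synthetic imagery in embedded XMP metadata.

```python
# Minimal sketch (not Meta's pipeline): many image generators embed an
# IPTC DigitalSourceType value of "trainedAlgorithmicMedia" in the
# image's XMP metadata packet. Scanning the raw file bytes for that
# marker is a crude but workable first-pass check.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC term for AI-generated media

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the embedded metadata declares the image AI-generated."""
    return AI_MARKER in image_bytes

# Hypothetical JPEG payload carrying the XMP marker
sample = (
    b"\xff\xd8...<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType>..."
)
print(looks_ai_generated(sample))  # True
```

A real detector would parse the XMP/IPTC structures properly rather than byte-scanning, and would also check for invisible watermarks, which (as the article notes below) survive where metadata can simply be stripped.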

But there are a few issues with this approach. First, not every AI image generator uses such tools to make it apparent that its images are not real. Second, Meta has found that there are ways to strip out the invisible watermark. To address this, the company said it is working with industry partners on a unified watermarking technology that cannot be easily removed. Last year, Meta’s AI research wing, Fundamental AI Research (FAIR), announced that it was developing a watermarking mechanism called Stable Signature, which embeds the marker directly into the image-generation process. Google DeepMind has also released a similar tool, called SynthID.

But this covers only images. AI-generated audio and video have also become commonplace. Addressing this, Meta acknowledged that a comparable detection technology for audio and video has not yet been created, although development is in the works. Until a way to automatically detect and identify such content emerges, the tech giant has added a feature that lets users disclose when they share AI-generated video or audio; once disclosed, the platform will add a label to the post.

Clegg also noted that if people do not disclose such content and Meta later finds that it was digitally altered or created, the company may apply penalties to the user. Further, if the shared content is high-risk in nature and could deceive the public on matters of importance, Meta may add an even more prominent label to give users context.


