Sarah Ruivivar

Meta's AI Policy: More Labels, Fewer Takedowns

Image credits: Farhat Altaf / Unsplash

Meta, the social media titan, is shaking up its rules on AI-generated content and manipulated media.


This move comes hot on the heels of criticism from its Oversight Board. Starting next month, Meta will be rolling out a "Made with AI" badge for deepfakes and providing additional context for other high-risk manipulated content.


This could mean more labels on potentially misleading content, a crucial step in an election-packed year. However, the labels will only be applied to deepfakes carrying "industry standard AI image indicators" or where the uploader has disclosed that the content is AI-generated, so anything falling outside those parameters could still slip through the net unlabeled.


Interestingly, this policy shift could lead to more AI-generated content and manipulated media staying put on Meta's platforms. Meta is leaning towards "providing transparency and additional context" over removing manipulated media, citing free speech concerns. So, the new playbook for Meta platforms like Facebook and Instagram seems to be: more labels, fewer takedowns.


 

Want to learn more about AI's impact on the world in general and property in particular? Join us on our next Webinar! Click here to register

 

In July, Meta plans to stop removing content solely on the basis of its current manipulated video policy. The change may be a response to rising legal demands around content moderation and systemic risk, such as the European Union's Digital Services Act.


Meta's Oversight Board, which operates at arm's length despite being funded by the tech giant, has been critical of Meta's approach to AI-generated content, and Meta is amending its policies in response to the board's feedback. The board had previously criticised Meta's manipulated media policy as "incoherent", since it applied only to video created through AI, leaving other fake content unchecked.


Meta seems to have taken this criticism on board. It is now expanding the policy to cover other forms of realistic AI-generated content, such as audio and photos, and is working with other industry players to develop common technical standards for identifying AI content, including video and audio.


Under the new policy, Meta won't remove manipulated content unless it violates other policies. Instead, it may add "informational labels and context" in scenarios of high public interest. The company also plans to continue working with a network of independent fact-checkers to identify risks related to manipulated content.


So, as we navigate the ever-evolving landscape of AI and social media, it seems the focus is shifting towards transparency, context, and preserving freedom of expression. It's a brave new world, indeed!



 


 


Made with TRUST_AI - see the Charter: https://www.modelprop.co.uk/trust-ai
