Tech giant Meta has received approval from the European Union’s data protection regulator to train its artificial intelligence models using publicly shared content from its social media platforms.
Posts and comments from adult users across Meta’s platforms, including Facebook, Instagram, WhatsApp, and Messenger, along with questions put to the company’s AI assistant, will now be used to train its AI models, Meta said in an April 14 blog post.
The company said it’s “essential for our generative AI models to be trained on a wide array of data to grasp the remarkable and varied nuances and intricacies that define European communities.”
“This encompasses everything from dialects and informal language to localized knowledge and the unique ways various nations employ humor and sarcasm in our products,” it said.
However, people’s private messages with friends and family, as well as public data from EU users under the age of 18, will still be excluded from training, according to Meta.
Users can also opt out of having their data used for AI training through a form that Meta says will be distributed in-app and via email and is “easy to locate, read, and use.”
EU regulators have halted tech firms’ AI training efforts before
In July of last year, Meta postponed training its AI using public content from its platforms after the privacy advocacy organization None of Your Business filed complaints across 11 European nations, prompting the Irish Data Protection Commission (IDPC) to request a pause in the rollout until a review took place.
The complaints alleged that Meta’s changes to its privacy policy would have permitted the firm to utilize years of personal posts, private images, and online tracking data for training its AI products.
Meta says it has now received confirmation from the EU’s data protection regulator, the European Data Protection Board, that its AI training methodology complies with legal requirements, and the company continues to engage “productively with the IDPC.”

“This is how we have been training our generative AI models for other regions since their inception,” Meta said. “We’re following the lead of other firms, such as Google and OpenAI, both of which have previously used data from European users to train their AI models.”

Related: EU could impose a fine of $1B on Elon Musk’s X for unlawful content and misinformation

An Irish data regulator opened a cross-border investigation into Google Ireland Limited last September to determine whether the tech giant adhered to EU data protection regulations while developing its AI models.

X faced similar scrutiny and agreed to stop using personal data from users in the EU and the European Economic Area last September. Previously, X had used this data to train its AI chatbot Grok.

The EU’s AI Act entered into force in August 2024, establishing a regulatory framework for the technology that includes provisions on data quality, security, and privacy.

Magazine: XRP success leaves Ripple a ‘bad actor’ with no established legal precedent in crypto