Meta Declines to Endorse EU's Voluntary AI Code of Conduct, Raising Regulatory Concerns

Image source: techcrunch.com
Meta has opted not to sign the European Union’s voluntary code of practice for artificial intelligence, marking a notable divergence from other major tech firms that have embraced the guidelines. The EU’s initiative, designed to promote ethical AI development ahead of binding regulations, has been endorsed by companies like Google, Microsoft, and OpenAI. However, Meta’s refusal signals potential friction as the bloc moves toward stricter AI oversight under the forthcoming AI Act.
According to sources familiar with the matter, Meta expressed reservations about the code’s broad principles, arguing that its own internal AI governance frameworks already align with ethical standards. The company emphasized its commitment to responsible AI but suggested that prescriptive measures could stifle innovation. The decision has drawn criticism from EU officials, who view participation as a critical step in building trust ahead of the AI Act’s enforcement in 2025.
Analysts note that Meta’s stance may reflect a strategic hesitation to preemptively adopt standards that could evolve under the AI Act’s stringent requirements. Meanwhile, advocacy groups warn that the absence of key players from voluntary agreements could undermine efforts to establish universal norms for AI safety and transparency. As debates over AI ethics intensify, Meta’s position highlights the growing tension between tech giants and regulators shaping the future of artificial intelligence.