Google has confirmed it will sign the European Union’s AI Code of Practice, a voluntary framework it previously opposed as too restrictive. The decision positions Google to influence how EU AI regulation is implemented while competitors like Meta refuse to participate, potentially giving Google a strategic advantage in navigating Europe’s evolving AI legal landscape.
The big picture: Google’s reversal reflects a calculated shift from resistance to engagement, as the company seeks to shape rather than simply comply with EU AI regulations.
- The tech giant initially opposed the code for being too harsh but now believes its input has helped create a framework that could provide Europe with “secure, first-rate AI tools.”
- Google claims widespread adoption of AI tools could boost the EU economy by 8 percent (about 1.8 trillion euros) annually by 2034.
What Google must do: Under the code’s terms, the company will face new disclosure requirements and compliance measures.
- Google must publish summaries of its model training data and disclose additional model features to regulators.
- The code includes guidance on managing safety and security in compliance with the AI Act, which came into force last year.
- The framework provides paths to align model development with EU copyright law, a contentious issue for AI companies.
Who’s in and who’s out: Major tech companies are taking divergent approaches to the voluntary agreement.
- Meta has steadfastly refused to sign, claiming the code could impose too many limits on frontier model development as it pursues its “superintelligence” project.
- Microsoft is still considering whether to sign the agreement.
- OpenAI has signaled it will sign the code.
What they’re saying: Kent Walker, Google’s head of global affairs, expressed cautious optimism while maintaining concerns about potential innovation barriers.
- Walker noted that the code could stifle innovation if not applied carefully, something Google hopes to prevent through its participation.
- He said Google remains concerned that tightening copyright guidelines and forced disclosure of potential trade secrets could slow innovation.
Why this matters: The stakes extend far beyond voluntary compliance, as all AI companies operating in Europe must abide by the comprehensive AI Act regardless of code participation.
- The AI Act includes the world’s most detailed regulatory framework for generative AI systems, banning high-risk uses like intentional user manipulation and real-time biometric scanning in public spaces.
- Companies violating the AI Act face fines up to 35 million euros ($40.1 million) or 7 percent of global revenue.
- Companies that adopt the voluntary code will face a lighter bureaucratic burden and a simpler path to compliance with the AI Act.
The broader context: Europe’s proactive regulatory approach contrasts sharply with the U.S., where the current administration is actively working to remove existing AI limits and even attempted to ban state-level AI regulation for ten years in a recent tax bill.