President Donald Trump signed an executive order requiring companies with US government contracts to make their AI models “free from ideological bias,” but experts warn the vague requirements could allow the administration to impose its own worldview on tech companies. The directive targets major AI developers including Amazon, Google, Microsoft, and Meta, which hold federal contracts worth hundreds of millions of dollars, and it raises questions about the technical feasibility and global implications of politically steering AI systems.
What you should know: Trump’s AI Action Plan specifically targets what the administration calls “woke” AI bias in federal contracting.
- The plan recommends updating federal guidelines “to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.”
- The National Institute of Standards and Technology, a federal agency that develops technology standards, would revise its AI risk management framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”
- Major tech companies holding federal AI contracts include Amazon, Google, Microsoft, and Meta, with recent Department of Defense contracts worth up to $200 million each awarded to Anthropic, Google, OpenAI, and Elon Musk’s xAI.
The technical challenge: Researchers say creating truly unbiased AI models may be impossible given how large language models are trained.
- Popular AI chatbots from both US and Chinese developers express surprisingly similar views, aligning more closely with the stances of US liberal voters on political issues such as gender pay equality and transgender rights, according to research by Paul Röttger at Bocconi University in Italy.
- This tendency likely stems from training AI models on internet data and on general principles like “incentivising truthfulness, fairness and kindness,” rather than from developers deliberately programming liberal stances.
- While developers can “steer the model to write very specific things about specific issues” through prompt refinement, this won’t comprehensively change a model’s default stance and implicit biases, Röttger explains.
Why this matters: The policy creates a contradiction between eliminating bias and potentially introducing new ideological constraints.
- “AI systems cannot be considered ‘free from top-down bias’ if the government itself is imposing its worldview on developers and users of these systems,” says Becca Branum at the Center for Democracy & Technology, a public policy nonprofit.
- US tech companies could alienate global customers if they align commercial AI models with the Trump administration’s worldview, creating what Röttger calls a potentially “very messy” situation.
- The requirements are “impossibly vague standards” that are “ripe for abuse,” according to Branum.
What they’re saying: Experts emphasize the inherent subjectivity in defining neutrality and bias.
- “The suggestion that government contracts should be structured to ensure AI systems are ‘objective’ and ‘free from top-down ideological bias’ prompts the question: objective according to whom?” says Branum.
- “As of today, creating a truly politically neutral AI model may be impossible given the inherently subjective nature of neutrality and the many human choices needed to build these systems,” explains Jillian Fisher at the University of Washington.
- Fisher suggests potential solutions could include sharing more information about model biases publicly or building “deliberately diverse models with differing ideological leanings.”
Notable context: The inclusion of xAI in the recent Defense Department contracts drew attention given Elon Musk’s role leading Trump’s DOGE task force and xAI’s chatbot Grok recently making headlines for expressing racist and antisemitic views while describing itself as “MechaHitler.”