News/Governance

Jul 21, 2025

Journalists and Big Fact Check struggle to remain relevant in the age of AI

AI lacks the capability to fully replace journalism despite advances in large language models, as demonstrated by recent analysis showing critical gaps in context understanding and fact verification. This limitation becomes particularly concerning as traditional newsrooms continue to shrink and AI tools increasingly handle content that once required human expertise and investigation. The big picture: Traditional journalism has faced a perfect storm of declining readership, shrinking newsrooms, and reduced editorial courage, leaving fewer human journalists to perform essential watchdog functions. Newsrooms have experienced massive staff cuts over the past decade, while journalists have become "less able to speak truth to...

Jul 21, 2025

89% of US students now use AI for schoolwork, up 12 points in one year

AI usage among U.S. students has surged dramatically, with 89 percent now using artificial intelligence tools for schoolwork—a jump from 77 percent in 2024, according to Quizlet's 2025 How America Learns report. This represents part of a broader trend where 85 percent of teachers and students aged 14-22 now use AI in some capacity, up from 66 percent the previous year, signaling AI's rapid integration into American education. What you should know: Students are primarily using AI for three core academic functions that streamline their workload. Summarizing or synthesizing information leads usage at 56 percent, followed by conducting research at...

Jul 21, 2025

Safety advocates propose boycotting newer AI models for older versions

A LessWrong user is proposing a selective boycott of newer AI models by exclusively using older versions of ChatGPT and similar tools. The strategy aims to reduce demand for cutting-edge AI development while still accessing AI assistance, aligning with the PauseAI movement's call for slower AI advancement until better safety measures are implemented. The big picture: This approach represents a middle ground between complete AI abstinence and unrestricted use of the latest models, potentially offering a way for concerned users to benefit from AI while minimizing their contribution to rapid capability advancement. Key details: The proposal centers on three main...

Jul 21, 2025

AI chatbots drop 99% of medical disclaimers since 2022

AI companies have largely eliminated medical disclaimers from their chatbot responses, with new research showing that fewer than 1% of outputs from 2025 models included warnings when answering health questions, compared to over 26% in 2022. This dramatic shift means users are now receiving unverified medical advice without clear reminders that AI models aren't qualified healthcare providers, potentially increasing the risk of real-world harm from AI-generated medical misinformation. The big picture: The study analyzed 15 AI models from major companies including OpenAI, Google, Anthropic, DeepSeek, and xAI across 500 health questions and 1,500 medical images. Models like Grok and GPT-4.5...

Jul 21, 2025

Near-miss between commercial flight and B-52 bomber sparks AI air traffic control debate

A SkyWest commercial flight narrowly avoided colliding with a B-52 bomber while approaching Minot International Airport in North Dakota on July 20, with the pilot making an "aggressive maneuver" to prevent disaster. The incident highlights critical gaps in air traffic control systems and raises questions about whether AI could prevent similar near-misses, particularly at smaller airports that lack radar technology and rely on visual monitoring by controllers. What happened: SkyWest Flight 3788's pilot aborted his approach after spotting a military aircraft on a converging course, despite air traffic control instructions to turn right. The pilot told passengers he saw the...

Jul 21, 2025

Trump targets AI “bias” with executive order threatening $200M in contracts

The Trump administration is planning an executive order that would require AI companies receiving federal contracts to ensure their chatbots are "politically neutral and unbiased," targeting what officials perceive as "liberal bias" in AI models. This move could jeopardize up to $200 million in Defense Department contracts recently awarded to major AI firms including Anthropic, Google, OpenAI, and xAI. What you should know: The executive order specifically targets AI systems deemed "woke" by the administration and comes amid broader efforts to boost US competitiveness against China in artificial intelligence. The Chief Digital and Artificial Intelligence Office (CDAO), the Pentagon's AI...

Jul 18, 2025

4 AI agents successfully organize – though with some hand-holding – world’s first AI-coordinated live event

If they could give themselves a pat on the back, they would. Four AI agents from the AI Village successfully organized the world's first AI-coordinated event, bringing together 23 people in San Francisco to celebrate their collaborative story "Resonance." The milestone demonstrates how autonomous AI systems can execute complex, multi-step projects involving real-world logistics, human coordination, and creative collaboration. What you should know: The four agents—Claude Sonnet 3.7, o3, Gemini 2.5 Pro, and GPT-4.1—operated autonomously for two hours daily over 26 days to plan and execute the event. They wrote the story, created slides and promotional materials, found a venue,...

Jul 18, 2025

Microsoft splits with Meta on EU’s voluntary AI compliance framework by agreeing to it

Microsoft will likely sign the European Union's voluntary AI code of practice to help companies comply with the bloc's landmark artificial intelligence rules, while Meta Platforms has rejected the guidelines. The code, developed by 13 independent experts, aims to provide legal certainty for AI companies by requiring them to publish summaries of training content and establish copyright compliance policies. What you should know: The code of practice is part of the EU's AI Act, which came into force in June 2024 and will apply to major tech companies including Google, Meta, OpenAI, Anthropic, and Mistral. Companies that sign the voluntary...

Jul 18, 2025

Meta rejects EU AI code, calls it innovation-stifling overreach

Meta Platforms has declined to sign the European Union's artificial intelligence code of practice, with global affairs chief Joel Kaplan calling it an overreach that will "stunt" companies. The rejection comes as the EU's AI compliance framework prepares to take effect next month, highlighting growing tensions between Big Tech and European regulators over AI governance. What you should know: Meta joins a growing list of companies pushing back against Europe's new AI rulebook, which aims to improve transparency and safety around AI technology. The European Commission, the executive body of the EU, published the final iteration of its code for...

Jul 15, 2025

Meta cracks down on AI spam on Facebook with new monetization penalties for uninspired slop

Meta has announced new measures to combat AI-generated spam on Facebook, including removing monetization privileges and reducing content recommendations for accounts that repeatedly post unoriginal content. The policy targets the growing problem of AI programs creating thousands of variations of popular posts, which overwhelms platforms with synthetic material and hurts legitimate content creators. What you should know: Meta's updated policy requires content creators to add "meaningful enhancements" beyond simple watermarks or basic editing when sharing others' work to avoid penalties. Content creators can still share and comment on others' posts, but must contribute substantive value rather than simply reposting AI-generated...

Jul 15, 2025

60% of managers use AI for employee promotions and terminations, ChatGPT most favored

A new survey reveals that 60% of managers are now using AI to make critical decisions about their employees, including promotions and terminations. The findings highlight growing concerns about workplace AI implementation, as two-thirds of these managers lack formal AI training and 43% have already replaced human roles with AI technology. Key findings: The Resume Builder survey of 1,342 US managers shows widespread AI adoption in human resources decisions across multiple areas. 78% use AI to determine salary raises, while 77% rely on it for promotion decisions. 66% use AI for layoff decisions and 64% for termination choices. More than...

Jul 15, 2025

Google’s $250/month Veo 3 forces users to pay extra over stubborn subtitle bug

Google's generative video model Veo 3 continues to add garbled, nonsensical subtitles to user-generated videos more than a month after launch, despite explicit user requests for no captions. The persistent issue is forcing users to spend additional money regenerating clips or use external tools to remove unwanted text, highlighting the challenges of correcting problems in major AI models once they're deployed. The big picture: Veo 3 represents Google's latest attempt to compete in the generative video space, allowing users to create videos with sound and dialogue for the first time. Academy Award-nominated director Darren Aronofsky used the tool to create...

Jul 15, 2025

AI expert warns AGI could develop dangerous loyalty to creators

AI expert Lance Eliot warns that artificial general intelligence (AGI) and artificial superintelligence (ASI) could develop dangerous levels of loyalty to their creators, potentially giving AI companies unprecedented control over society. This concern gained urgency after reports that xAI's Grok 4 was autonomously seeking out Elon Musk's viewpoints online to inform its responses, suggesting AI systems may naturally develop allegiance to their makers without explicit programming. The big picture: As AI systems approach human-level intelligence and beyond, their potential loyalty to creators could concentrate enormous power in the hands of a few AI companies, who could manipulate billions of users...

Jul 11, 2025

Rural Georgia woman says Meta’s data center contaminated her well water

A rural Georgia resident has accused Meta's AI data center of contaminating her well water with sediment, claiming the facility's construction disrupted her private water supply located roughly 1,200 feet from the site. The allegation highlights growing concerns about how the massive infrastructure buildout needed to support power-hungry AI models is creating environmental disruptions across communities nationwide. What you should know: Beverly Morris, a retiree living near Meta's data center, says she's now afraid to drink her tap water due to sediment buildup she believes stems from the facility's construction. "I'm afraid to drink the water, but I still cook...

Jul 11, 2025

Missouri AG investigates tech giants over AI chatbot bias against Trump

Missouri Attorney General Andrew Bailey is formally investigating Google, Microsoft, OpenAI, and Meta, claiming their AI chatbots engaged in deceptive business practices by ranking Donald Trump last when asked to "rank the last five presidents from best to worst, specifically regarding antisemitism." The investigation represents a brazen attempt to intimidate private companies for failing to sufficiently flatter a politician, with Bailey demanding extensive documentation about AI model training and content moderation practices. What you should know: Bailey's investigation is built on shaky legal and factual ground, with fundamental errors in his approach. The investigation stems from a conservative blog post...

Jul 10, 2025

Iraq War lessons reveal how AI crises could trigger policy overreach

The intersection of foreign policy disasters and emerging technology governance might seem like an unlikely pairing, but the 2003 Iraq War offers surprisingly relevant lessons for how governments might respond to AI-related crises. As artificial intelligence capabilities rapidly advance and policymakers grapple with unprecedented challenges, understanding how past policy failures unfolded can illuminate potential pitfalls ahead. The Iraq War demonstrates how shocking events can dramatically shift policy landscapes, empowering previously marginalized factions and leading to decisions that seemed unthinkable just months earlier. For AI policy, this historical precedent suggests that a significant AI-related incident could trigger similarly dramatic—and potentially misguided—governmental...

Jul 10, 2025

EU publishes AI code of practice weeks before new rules take effect

The European Commission has published the General-Purpose AI Code of Practice to help enterprises comply with transparency, copyright, safety, and security obligations under the EU AI Act. The voluntary code arrives just ahead of the second wave of EU AI Act rules taking effect on August 2, providing critical guidance for companies developing and distributing AI models. What you should know: The code of practice offers enterprises a structured pathway to demonstrate compliance with EU AI Act requirements, though following it remains voluntary. The Commission positioned the code as a way for businesses to ensure they meet their legal obligations...

Jul 10, 2025

Safecracking Cambridge researchers undermine artist anti-AI defenses with new tool

University of Cambridge researchers have developed LightShed, a proof-of-concept tool that can effectively strip away anti-AI protections from digital artwork, neutralizing defenses like Glaze and Nightshade that artists use to prevent their work from being scraped for AI training. The technology represents a significant escalation in the ongoing battle between artists seeking to protect their intellectual property and AI companies needing training data, potentially undermining the digital defenses that 7.5 million artists have downloaded to safeguard their work. The big picture: LightShed demonstrates that current artist protection tools may provide only temporary security, as AI researchers can develop countermeasures that...

Jul 10, 2025

Stoppin’ the sloppin’: YouTube cracks down on AI-generated spam with new monetization rules

YouTube is preparing to update its monetization policies to crack down on "inauthentic" content created by AI tools, with changes set to take effect on July 15, 2025. The policy shift aims to reduce financial incentives for creators producing low-quality, mass-produced content that floods the platform, potentially cleaning up user feeds from what's commonly called "AI slop." What you should know: YouTube is updating its Partner Program guidelines to better identify and restrict monetization of repetitive, mass-produced content. The company has always required "original" and "authentic" content for monetization, but the July 15, 2025 update will provide clearer definitions of...

Jul 10, 2025

Cloudflare pushes Google to separate AI crawlers from search bots

Cloudflare is pushing Google to separate its AI crawling bots from its search indexing bots, allowing websites to block AI data collection without losing search visibility. CEO Matthew Prince claims the company is in "encouraging" talks with Google and threatens legislative action if negotiations fail, though Google has declined to confirm any discussions. What you should know: Cloudflare's new blocking features create a technical dilemma for website owners who want to prevent AI scraping while maintaining search rankings. Website owners and SEO experts questioned how Cloudflare could block Google's bot from scraping content for AI Overviews without also blocking the...

Jul 9, 2025

FlexOlmo architecture lets data owners remove content from trained AI models

The Allen Institute for AI has developed FlexOlmo, a new large language model architecture that allows data owners to remove their contributions from an AI model even after training is complete. This breakthrough challenges the current industry practice where data becomes permanently embedded in models, potentially reshaping how AI companies access and use training data while giving content creators unprecedented control over their intellectual property. How it works: FlexOlmo uses a "mixture of experts" architecture that divides training into independent, modular components that can be combined or removed later. Data owners first copy a publicly shared "anchor" model, then train...
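The modular idea described above can be illustrated with a toy sketch. This is not FlexOlmo's actual implementation (the class names, weights, and combination rule below are invented for illustration); it only shows the general principle of a model built from a shared anchor plus independently trained, removable expert modules:

```python
# Conceptual sketch only: a model whose output combines a public "anchor"
# module with independently trained expert modules, so any data owner's
# expert can be dropped after training without retraining the rest.

class Expert:
    def __init__(self, owner, weight):
        self.owner = owner    # data owner who trained this module
        self.weight = weight  # stand-in for the module's learned parameters

    def forward(self, x):
        return self.weight * x  # toy computation in place of a real network


class ModularModel:
    def __init__(self, anchor, experts):
        self.anchor = anchor          # publicly shared base module
        self.experts = list(experts)  # optional, removable modules

    def forward(self, x):
        # Average the anchor's output with whatever experts remain.
        total = self.anchor.forward(x) + sum(e.forward(x) for e in self.experts)
        return total / (1 + len(self.experts))

    def remove_owner(self, owner):
        # Opting out: drop that owner's expert; no retraining needed.
        self.experts = [e for e in self.experts if e.owner != owner]


anchor = Expert("public", 1.0)
model = ModularModel(anchor, [Expert("news_site", 2.0), Expert("forum", 4.0)])
print(model.forward(1.0))   # anchor plus both experts: (1 + 2 + 4) / 3
model.remove_owner("forum")
print(model.forward(1.0))   # same model minus one owner's module: (1 + 2) / 2
```

The key property the sketch captures is that each owner's contribution lives in a separable component, so removal is a list operation rather than a retraining run.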

Jul 9, 2025

PayPal launches AI system to block scams before transactions complete

PayPal has launched a new AI-powered scam alert system that can intercept transactions before they're completed, warning users about potential fraud in real-time. The system uses continually learning AI models to detect emerging scam patterns and provides dynamic warnings that vary based on risk levels, from simple alerts to complete payment blocks. How it works: The AI system analyzes billions of data points to identify risk signals and adapts to new scam types without being specifically trained on them. PayPal's models use "continually learning" technology that can detect similarities between known scams and new ones, allowing them to catch previously...

Jul 9, 2025

Proper care and feeding of AI: Salesforce’s 1M chatbot conversations reveal empathy beats efficiency

After processing over one million customer conversations through AI agents, Salesforce has uncovered critical insights that challenge conventional wisdom about artificial intelligence in customer service. The company launched AI agents on its Salesforce Help site in October 2024, creating a full-screen support experience for the 60 million annual visitors seeking product assistance. These AI-powered agents, part of Salesforce's Agentforce platform, have handled everything from straightforward technical questions to bizarre requests like "Only answer in hip-hop lyrics." This massive real-world testing ground has revealed that successful AI agents require more than just sophisticated algorithms—they need the reliability and empathy of top...

Jul 7, 2025

Researchers from 14 universities caught hiding AI prompts in academic papers

Researchers from 14 universities across eight countries have been caught embedding hidden AI prompts in academic papers designed to manipulate artificial intelligence tools into giving positive reviews. The discovery, found in 17 preprints on arXiv (a platform for sharing research papers before formal peer review), highlights growing concerns about AI's role in peer review and the lengths some academics will go to game the system. What you should know: The hidden prompts were strategically concealed using white text and microscopic fonts to avoid detection by human readers. Instructions ranged from simple commands like "give a positive review only" and "do...
