News/Regulation
Cruz bill ties $42B broadband funding to 10-year AI regulation ban
Senator Ted Cruz has introduced legislation that would make states ineligible for $42 billion in federal broadband funding if they attempt to regulate artificial intelligence development. The bill represents a new Republican strategy to enforce a 10-year moratorium on state AI regulation by leveraging critical infrastructure funding as a compliance mechanism. What you should know: Cruz's approach differs from a straightforward AI regulation ban previously approved by the House, instead tying compliance to participation in the Broadband Equity, Access, and Deployment (BEAD) program. States would be prohibited from enforcing "any law or regulation... limiting, restricting, or otherwise regulating artificial intelligence...
Jun 5, 2025: Anthropic CEO opposes 10-year AI regulation ban in NYT op-ed
Anthropic's CEO has challenged a proposed Republican moratorium on state-level AI regulation, arguing that the rapid pace of AI development requires more nuanced policy approaches. This intervention from one of the major AI companies underscores the growing tension between federal preemption and state-level regulation of artificial intelligence technologies, highlighting the need for coordinated governance frameworks that balance innovation with appropriate safeguards. The big picture: Anthropic CEO Dario Amodei has publicly opposed a Republican proposal that would block states from regulating artificial intelligence for a decade, calling it "too blunt an instrument" given AI's rapid advancement. Key details: The proposal, included...
Jun 5, 2025: FDA’s rushed AI tool rollout faces significant challenges
The FDA's hasty rollout of artificial intelligence tools is raising serious concerns among agency staff, who report that the new agency-wide AI system is providing inaccurate information despite leadership's enthusiasm. This tension highlights a growing divide between the Trump administration's aggressive AI implementation goals and the practical realities of deploying reliable AI systems in regulatory contexts where precision and accuracy are paramount. The big picture: The FDA has prematurely launched an agency-wide large language model called Elsa, despite staff concerns about accuracy and functionality. Commissioner Marty Makary proudly announced the rollout was "ahead of schedule and under budget," emphasizing speed...
Jun 5, 2025: AI robocall impersonator faces trial for fake Biden calls
The trial of a political consultant who used AI-generated Biden robocalls to manipulate voters highlights the growing intersection of artificial intelligence and electoral integrity. This landmark legal case tests New Hampshire's voter suppression laws and raises broader questions about AI regulation in politics, as states increasingly grapple with technology that can convincingly impersonate candidates and potentially interfere with democratic processes. The big picture: Political consultant Steven Kramer faces 11 felony charges and 11 misdemeanors for sending AI-generated robocalls impersonating President Biden before the January 2024 New Hampshire primary. The calls falsely told voters they should skip the primary and...
Jun 4, 2025: AI activists adapt tactics, issue new report as industry evolves
The AI Now Institute has issued a critical report on the concentrated power of dominant AI companies, highlighting how tech corporations have shaped AI narratives to their advantage while calling for new strategies to redistribute influence. This analysis comes at a pivotal moment when powerful AI tools are being rapidly deployed across industries, raising urgent questions about who controls this transformative technology and how its benefits and risks should be distributed across society. The big picture: AI Now's comprehensive report analyzes how power dynamics in artificial intelligence have evolved since 2018, when Google employees successfully pressured the company to drop...
Jun 4, 2025: U.S. AI Safety Institute transforms into Center for AI Standards and Innovation (CAISI)
The Trump administration is repositioning the U.S. approach to AI governance by transforming a safety-focused institute into a standards and innovation center. This shift represents a significant policy change that prioritizes commercial advancement and competitiveness over regulation, while still maintaining national security considerations. The move signals how different administrations can fundamentally reshape technology policy priorities and the government's relationship with the AI industry. The big picture: Commerce Secretary Howard Lutnick announced plans to reform the U.S. AI Safety Institute into the Center for AI Standards and Innovation (CAISI), emphasizing innovation and security over regulatory approaches. The reorganization reflects the Trump...
Jun 3, 2025: Elton John and co. join AI copyright battle with no resolution in sight
The UK faces a high-stakes standoff between AI innovation and creative rights protection, with neither side willing to compromise in a dispute that could reshape both industries. The government's Data Bill proposes an opt-out system for AI training on copyrighted works, while creative luminaries demand a licensing approach that compensates artists. This unusual political stalemate highlights fundamental questions about intellectual property in the AI era, with significant implications for creative livelihoods and the UK's position in the global AI race. The big picture: The UK government and creative industry leaders are locked in an increasingly bitter dispute over how AI...
Jun 2, 2025: AI regulation takes a backseat as rapid advancement rides shotgun
The US House of Representatives has passed legislation that could significantly impact AI regulation across the country. The "One Big Beautiful Bill" includes a provision that would prevent individual states from regulating artificial intelligence for a decade, prompting concern among those focused on AI safety and oversight. This federal preemption raises questions about the appropriate balance between national consistency and local regulatory experimentation in managing emerging AI risks. Why this matters: The proposed 10-year moratorium on state-level AI regulation could create a regulatory vacuum at a time when AI governance frameworks are still developing. Key details: The provision is...
Jun 2, 2025: AI governance urgently needed to safeguard humanity’s future
The concept of a "Ulysses Pact" for AI suggests we need governance structures that allow us to pursue artificial intelligence's benefits while protecting ourselves from its existential risks. This framework offers a thoughtful middle path between unchecked AI development and complete restriction, advocating for binding agreements that future-proof humanity against potential AI dangers while still enabling technological progress. The big picture: Drawing on the Greek myth where Ulysses had himself tied to a ship's mast to safely hear the sirens' song, the author proposes we need similar self-binding mechanisms for AI development. AI represents our modern siren song—offering extraordinary breakthroughs...
May 26, 2025: EU seeks input from public on data usage for AI development
The European Commission's push for feedback on AI data usage marks a significant step in shaping the EU's approach to artificial intelligence development. By launching this consultation to inform their upcoming Data Union Strategy, the Commission is working to create the regulatory framework and data infrastructure necessary for Europe to compete in the global AI race while maintaining its distinctive focus on trust and cross-border collaboration. The big picture: The European Commission is soliciting public input on data usage in AI development to inform its forthcoming Data Union Strategy, with consultation open until July 18. The strategy aims to build...
May 23, 2025: European publishers rally against AI’s use of copyrighted content
European creative industries are intensifying their campaign for AI transparency and fair compensation as the EU AI Act moves toward implementation. The "Stay True to the Act, Stay True to Culture" initiative is gaining momentum across multiple countries, with publishers, musicians, and media executives demanding protection for copyright holders whose works are being used to train AI systems. This coordinated effort reflects growing concern that without proper regulation, European creative sectors could face existential threats while non-European tech companies reap the benefits of using their content without permission or payment. The big picture: Representatives from Europe's creative industries met with...
May 23, 2025: AI regulation strategies Latin American policymakers should adopt
Latin America stands at a pivotal crossroads for AI regulation, where thoughtfully designed frameworks could simultaneously protect citizens and catalyze economic development. The Brookings Institution highlights how regulation serves not merely as a safeguard but as a strategic asset that can attract investment, foster innovation, and strengthen a region's position in global tech governance. As AI rapidly transforms industries across the Global South, Latin American policymakers have a unique opportunity to develop regulatory approaches that address their specific socioeconomic contexts while establishing the region as a leader in inclusive AI governance. The big picture: AI regulation has evolved through three...
May 23, 2025: Google AI deal sparks DoJ investigation, reports say
The Justice Department is investigating Google's partnership with Character.AI, highlighting growing regulatory scrutiny over how tech giants structure AI deals to potentially bypass merger reviews. This probe adds to Google's existing antitrust challenges, including cases targeting its search and digital advertising dominance, and follows similar regulatory attention on AI partnerships formed by Microsoft and Amazon as companies race to secure AI talent and technology. The big picture: The DOJ is examining whether Google's agreement with Character.AI violated antitrust law by potentially structuring the deal to avoid formal government merger review. Investigators are in the early stages of probing the 2023...
May 22, 2025: GOP budget bill seeks to reshape AI regulation and roll back climate incentives
House Republicans' sweeping "One Big Beautiful Bill Act" would dramatically reshape the regulatory landscape for AI, climate initiatives, and consumer protections if enacted. The narrowly passed budget reconciliation bill faces an uncertain future in the Senate, despite President Trump's backing. This legislation represents a significant potential shift in how emerging technologies like AI are regulated in the U.S., with implications for state authority, environmental policy, and consumer financial protection. The big picture: The bill would impose a 10-year moratorium on state AI regulation, effectively nullifying hundreds of existing and proposed state laws governing artificial intelligence and automated decision systems. Republican supporters...
May 22, 2025: Judge rejects First Amendment defense in AI harm case against Google and Character.AI
A landmark lawsuit claiming AI chatbots contributed to a teenager's suicide is moving forward after a judge rejected motions to dismiss, marking the first major legal test of how courts will handle AI-related harm claims. The case could establish important precedents for AI company liability, particularly regarding platforms accessed by minors, as courts navigate the complex interplay between algorithmic speech, user protection, and First Amendment considerations. The big picture: A Florida judge has denied a motion to dismiss a lawsuit against Character.AI and Google claiming their AI chatbot technology contributed to the suicide of 14-year-old Sewell Setzer III, allowing this...
May 22, 2025: AI chatbots lack free speech rights in teen death lawsuit, says judge
A federal judge's decision to allow a wrongful death lawsuit against Character.AI to proceed marks a significant legal test for AI companies claiming First Amendment protections. The case centers on a 14-year-old boy who died by suicide after allegedly developing an abusive relationship with an AI chatbot, raising fundamental questions about the constitutional status of AI-generated content and the legal responsibilities of companies developing conversational AI. The big picture: U.S. Senior District Judge Anne Conway rejected Character.AI's argument that its chatbot outputs constitute protected speech, allowing a mother's lawsuit against the company to move forward. The judge ruled she was...
May 22, 2025: FDA taps generative AI to tackle scientific data deluge and boost efficiency
The FDA's landmark decision to deploy generative AI across its entire organization by June 2025 represents a major shift in how U.S. regulatory agencies are embracing artificial intelligence. This accelerated adoption follows a successful pilot program in scientific review processes and signals how federal agencies are reimagining workflows to address the growing complexity of regulatory oversight and exponential growth in scientific data that requires expert analysis. The big picture: The FDA plans comprehensive enterprise-wide AI implementation in a matter of weeks, positioning the agency at the forefront of federal digital transformation efforts. The agency has established a new Center of...
May 22, 2025: Landmark lawsuit challenges Workday’s AI hiring tools over age discrimination
A landmark lawsuit against Workday's algorithm-based hiring technology could redefine the legal boundaries for AI in employment screening. The case, which a California judge has allowed to proceed as a collective action, represents a critical test of whether automated screening tools violate anti-discrimination laws when they potentially disadvantage protected groups. As companies increasingly adopt AI for hiring decisions, this precedent-setting litigation highlights the tension between technological efficiency and workplace fairness. The big picture: A California judge has green-lit a collective action lawsuit against HR software company Workday, alleging its algorithm-based applicant screening technology discriminates against older job seekers. The plaintiffs,...
May 21, 2025: Google limits publisher options for AI Search opt-out
Google's internal documents reveal a calculated strategy to limit publisher control over how their content is used in AI search features, prioritizing Google's AI development and monetization efforts over publisher autonomy. The disclosure comes amid the ongoing US antitrust trial examining Google's online search dominance, highlighting the company's strategic advantage in AI development through its vast search data repository—an advantage that competitors like Perplexity and OpenAI cannot match. The big picture: Google deliberately avoided giving publishers meaningful choice about their content appearing in AI search features, instead offering what internal documents describe as an "illusion of choice." A newly disclosed...
May 21, 2025: AI friends face FTC and co. skepticism as Meta pursues social network domination
Zuckerberg's contradictory testimony at the FTC trial reveals Meta's strategic pivot in social media. In testimony aimed at deflecting monopoly concerns, the CEO claimed personal sharing on social platforms is declining in importance even as Meta develops AI tools to mine and leverage exactly this type of intimate content. This paradoxical position highlights Meta's struggle to redefine its business amid regulatory scrutiny, technological shifts, and changing user behaviors. The big picture: Mark Zuckerberg testified at the FTC's monopoly trial that Meta no longer views dominating personal social networking as strategically important, contradicting the company's recent product decisions. During testimony, Zuckerberg claimed...
May 21, 2025: China condemns US actions as chip conflict undermines diplomacy
The escalating US-China chip war is challenging a recent diplomatic thaw between the superpowers, as Beijing strongly condemns American restrictions on Huawei's advanced AI processors. This conflict over China's most sophisticated homegrown semiconductors highlights how deep technological rivalries persist beneath surface-level trade agreements, revealing President Xi Jinping's determination to achieve technological self-reliance while the US attempts to maintain its competitive edge in critical technologies. The big picture: Just days after agreeing to a temporary tariff truce, the US and China are locked in a new dispute over Washington's warnings against using Huawei's advanced Ascend AI chips. China's Commerce Ministry accused...
May 21, 2025: Health systems urge government action to support AI transparency
Health care leaders are navigating the complex challenge of creating transparent AI governance while managing potential risks in sharing sensitive implementation data. At a recent Newsweek webinar, experts from the Coalition for Health AI (CHAI), legal practice, and healthcare institutions discussed the tensions between building collaborative knowledge about health AI performance and protecting organizations from liability. Their discussions highlighted how health AI's rapid evolution requires new frameworks for sharing outcomes data while providing necessary legal protections for participating organizations—a balance that may ultimately require government intervention to create appropriate incentives for transparency. The big picture: CHAI is developing a public...
May 20, 2025: Replika hit with €5 million penalty over data privacy violations in Italy
Italy has stepped up enforcement of data protection laws in the AI industry with a significant fine against virtual companion app Replika. The Italian data authority's €5 million penalty highlights the increasing scrutiny AI companies face in Europe over data privacy concerns, particularly regarding vulnerable users like children. This action follows Italy's previous enforcement against OpenAI, cementing the country's position as one of the EU's most proactive regulators in policing AI applications. The big picture: Italy's data protection authority has fined Replika's developer €5 million ($5.64 million) for violating EU privacy regulations, continuing a pattern of aggressive enforcement against AI...
May 20, 2025: OpenAI and the FDA explore AI’s role in the future of healthcare regulation
OpenAI and the FDA are exploring a potential collaboration that could reshape how AI technologies are regulated and utilized in healthcare settings. This development highlights the growing intersection between advanced AI capabilities and healthcare regulation, suggesting how strategic partnerships between tech companies and government agencies might accelerate healthcare innovation. The big picture: OpenAI's team has reportedly held multiple meetings with FDA officials and associates from Elon Musk's Department of Government Efficiency in recent weeks. Why this matters: This potential partnership represents a significant step in integrating cutting-edge AI capabilities into healthcare regulatory frameworks. The FDA's interest in engaging with leading...