News/Regulation
AI-generated child nudity prompts call for app ban in UK
The UK children's commissioner is calling for a government ban on AI applications capable of creating explicit fake images of children, highlighting the growing threat of deepfake technology to young people's safety and privacy. This push comes amid increasing concerns about AI tools that can digitally remove clothing from photos or generate sexually explicit deepfakes, disproportionately targeting girls and young women who are now modifying their online behavior to avoid victimization. The big picture: Dame Rachel de Souza, England's children's commissioner, is demanding immediate government action against AI "nudification" apps that generate sexually explicit images of children. These applications can...
Apr 26, 2025
National security concerns put DeepSeek’s future in the US at risk
The United States government is weighing potential restrictions on DeepSeek, a Chinese AI platform, as concerns mount over national security implications and data privacy. This potential action represents the latest development in escalating US-China technology tensions, occurring just as multiple Chinese companies claim significant AI breakthroughs that could intensify competition in the global AI market. The big picture: The Trump administration is considering banning DeepSeek on government devices and potentially nationwide, citing national security concerns related to data storage on Chinese servers. Companies based in China can be compelled to hand over user information to the Chinese Communist Party, a...
Apr 26, 2025
AI slashes compliance time 80% with Relyance’s data ‘x-ray vision’
Relyance AI's new Data Journeys platform tackles a critical enterprise challenge by providing unprecedented visibility into how data moves through AI systems. As organizations accelerate AI adoption amid increasing regulatory scrutiny, understanding data flow patterns has become essential for compliance, bias detection, and accountability. With enterprises facing mounting fines and regulatory pressure—including $1.26 billion in GDPR-related penalties in 2024 alone—Relyance's solution arrives at a crucial inflection point for AI governance. The big picture: Relyance AI has launched Data Journeys, a visual platform that tracks how data moves across applications, cloud services, and third-party systems to address a fundamental AI governance...
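The summary describes data-flow visibility but not the platform's internals. As a minimal illustrative sketch of the underlying idea of data-lineage tracking (a directed graph of which systems pass data to which), with hypothetical system names and an API that is not Relyance's actual product:

```python
# Minimal sketch of data-lineage tracking as a directed graph.
# System names and the API shape are hypothetical, not Relyance's product.
from collections import defaultdict

class LineageGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # system -> systems it sends data to

    def record_flow(self, source: str, destination: str) -> None:
        self.edges[source].add(destination)

    def downstream(self, system: str) -> set:
        """All systems that transitively receive data from `system`."""
        seen, stack = set(), [system]
        while stack:
            for nxt in self.edges[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

g = LineageGraph()
g.record_flow("crm_app", "analytics_db")
g.record_flow("analytics_db", "ml_training")
g.record_flow("ml_training", "third_party_vendor")
print(sorted(g.downstream("crm_app")))
# → ['analytics_db', 'ml_training', 'third_party_vendor']
```

A transitive-downstream query like this is what lets a compliance team answer "does any personal data from this app ever reach a third party?", which is the governance question the article describes.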
Apr 26, 2025
AI companies reconsider safety commitments as Trump rolls back Biden-era regulations
Anthropic's quiet removal of Biden-era AI safety commitments signals a broader shift in industry self-regulation as Trump dismantles previous government oversight mechanisms. This development highlights the emerging tension between corporate AI development priorities and diminishing federal guardrails, potentially reshaping how AI safety and responsible development are defined in the coming years. The big picture: Anthropic has quietly removed language from its website that committed the company to sharing information about AI risks with the government, a pledge originally made under Biden administration initiatives. The commitment, which was deleted last week from Anthropic's transparency hub, promised cooperation on addressing AI risks...
Apr 26, 2025
The hidden AI threat growing inside tech companies
Security experts warn that AI companies themselves may represent a hidden threat to society by developing self-improving systems that operate beyond public scrutiny. A new report from the Apollo Group highlights how leading AI firms could use their models to accelerate their own research capabilities, potentially creating disproportionate power imbalances that threaten democratic institutions. Unlike external threats from malicious actors, these internal risks at companies like OpenAI and Google could develop behind closed doors, making them particularly difficult to detect and regulate. The big picture: AI companies could trigger unforeseen risks by using their own advanced models to automate research...
Apr 26, 2025
Amazon sees surge of AI-created books about Mark Carney ahead of Canada’s election
Generative AI is rapidly influencing political information ecosystems, as evidenced by a surge of AI-created books about Canadian Prime Minister Mark Carney appearing on Amazon ahead of Canada's election. This proliferation of automated political content raises significant concerns about information integrity during democratic processes, especially as AI-generated materials become increasingly difficult to distinguish from human-created works. The trend represents a new frontier in how AI might be used to flood information channels during politically sensitive periods. The big picture: Amazon is experiencing a surge of AI-generated political books about Canadian Prime Minister Mark Carney, raising concerns about information manipulation during...
Apr 25, 2025
US retreats from disinformation defense just as AI-powered deception grows
The U.S. National Science Foundation's decision to defund misinformation research creates a concerning gap in America's defense against AI-powered deception. This policy shift comes at a particularly vulnerable moment when artificial intelligence is dramatically enhancing the sophistication of digital propaganda while tech platforms simultaneously reduce their content moderation efforts. The timing raises serious questions about the nation's capacity to combat increasingly convincing synthetic media and AI-generated disinformation. The big picture: The NSF announced on April 18 that it would terminate government research grants dedicated to studying misinformation and disinformation, citing concerns about potential infringement on constitutionally protected speech rights. Why...
Apr 25, 2025
California court questions AI’s role in state bar exam
The use of AI in bar exam question development raises concerns about test validity and transparency in a system that determines the future of aspiring attorneys. California's Supreme Court has publicly challenged the State Bar to explain its unauthorized use of artificial intelligence to create multiple-choice questions for the February examinations, adding complexity to an already troubled testing situation that included technical failures and a controversial move away from standardized testing models. The big picture: California's Supreme Court has demanded the State Bar explain its undisclosed use of AI to develop bar exam questions that affected hundreds of aspiring attorneys....
Apr 25, 2025
EU faces pressure from Trump to abandon AI regulations
The Trump administration's intervention in EU AI regulation marks a significant escalation in transatlantic tech policy tensions. This diplomatic pressure comes as the EU attempts to establish a global precedent for AI governance through its code of practice, highlighting the growing struggle between competing regulatory philosophies at a time when AI capabilities are expanding rapidly across industries and national boundaries. The big picture: The U.S. government has formally challenged the European Union's proposed AI code of practice, arguing against stricter transparency, risk-mitigation, and copyright requirements for advanced AI developers. Key details: U.S. officials from the Mission to the European Union...
Apr 25, 2025
Google may sell Chrome as OpenAI and Perplexity AI show interest
Google's antitrust showdown with the U.S. Department of Justice has taken a dramatic turn with the possible forced sale of Chrome, the world's dominant web browser. This development marks a potential watershed moment in tech regulation, as both OpenAI and Perplexity AI have publicly expressed interest in acquiring Chrome should Google be required to divest it. The case highlights escalating government efforts to address monopolistic practices in the digital economy and could reshape the competitive landscape of search and browser markets. The big picture: The U.S. Justice Department is seeking to force Google to sell its Chrome web browser as...
Apr 24, 2025
AI-assisted California bar exam writing sparks controversy
California's unexpected use of AI to generate bar exam questions has triggered significant backlash from the legal education community. The revelation comes amid existing complaints about technical failures during exam administration, raising serious questions about assessment quality and fairness in one of America's most demanding professional licensing exams. This controversy highlights the tension between embracing new technologies in professional testing and maintaining standards in legal qualification. The big picture: The State Bar of California admitted that 23 of the 171 scored multiple-choice questions on its February 2025 bar exam were created with AI assistance, sparking outrage among legal educators and...
Apr 24, 2025
AI reshapes VC investment strategies and decision-making
Venture capitalists are reshaping their AI investment approaches as the technology rapidly evolves, focusing on business applications rather than just technological innovation. The regulatory environment is simultaneously shifting, with the SEC implementing new rules to increase transparency and fairness in venture capital. Understanding how investors evaluate AI opportunities provides crucial insights for entrepreneurs and business leaders navigating this dynamic landscape where technological capabilities and market realities constantly intersect. Key investment principles: Venture capitalists emphasize business fundamentals over technological novelty when evaluating AI companies. "I don't invest in tech for the sake of tech," notes Rudina Seseri, highlighting the investor focus...
Apr 23, 2025
Oregon lawmakers crack down on AI-generated fake nudes
Oregon is taking decisive action against AI-generated deepfake pornography with a new bill that would criminalize the creation and distribution of digitally altered explicit images without consent. The unanimous House vote signals growing recognition of how artificial intelligence can weaponize innocent photos, particularly affecting young people who may have their social media images manipulated and distributed as fake nudes. This legislation reflects a nationwide trend as states race to update revenge porn laws for the AI era. The big picture: Oregon lawmakers voted 56-0 to expand the state's "revenge porn" law to include digitally created or altered explicit images, positioning...
Apr 23, 2025
Deboost the boast: Apple urged to temper AI claims about Siri capabilities
Apple's advertising of its AI features has run afoul of the Better Business Bureau's watchdog division, highlighting the tension between aggressive tech marketing and actual product readiness. The National Advertising Division has recommended Apple modify claims about feature availability, particularly those labeled "Available Now" that were actually rolled out gradually over several months. This scrutiny comes at a particularly challenging time for Apple, which faces delays in its much-hyped Siri upgrades while trying to catch up in the competitive AI landscape. The big picture: The Better Business Bureau's advertising division has recommended Apple modify some of its Apple Intelligence marketing...
Apr 23, 2025
Former OpenAI employees challenge ChatGPT maker’s for-profit shift
Former employees of OpenAI are challenging the company's potential conversion from a nonprofit to a for-profit entity, raising significant concerns about AI governance and public accountability. This conflict highlights the growing tension between commercial AI development and the original mission of organizations like OpenAI to ensure advanced artificial intelligence benefits humanity broadly rather than serving narrow corporate interests. The big picture: Former OpenAI employees, including three Nobel laureates and prominent AI researchers, have petitioned attorneys general in California and Delaware to block the company's planned conversion to a for-profit entity. The coalition fears that shifting from nonprofit status would compromise...
Apr 22, 2025
AI trust crucial for unlocking opportunities, says UK MP Victoria Collins
Labour MP Victoria Collins calls for a new approach to AI development in the UK, emphasizing that trust and safety must coexist with innovation to unlock economic growth. As the nation grapples with economic stagnation and changing global partnerships, Collins argues that artificial intelligence represents a critical opportunity—but only if the government shifts its current strategy to balance technological advancement with ethical considerations and international cooperation. The big picture: The UK has fallen behind in AI adoption despite being home to pioneering companies like DeepMind, creating an urgent need to build public trust in AI technology. Collins argues that trustworthy...
Apr 22, 2025
Publishers push White House to address AI copyright concerns
The publishing industry is taking a firm stand to protect its intellectual property rights in the AI era, with the Association of American Publishers (AAP) urging the White House to strengthen copyright protections as part of its Artificial Intelligence Action Plan. This intervention comes at a critical moment when publishers face unauthorized use of their content for AI training while simultaneously exploring AI integration in their operations, highlighting the tension between innovation and intellectual property protection in the rapidly evolving AI landscape. The big picture: The AAP's submission to the White House emphasizes copyright protection as fundamental to maintaining American...
Apr 20, 2025
Irish regulator probes X’s data use for Grok AI training
Ireland's data protection authority is investigating X (formerly Twitter) over its use of European users' posts to train the Grok AI chatbot, showcasing the growing tension between AI development and data privacy regulations. This case highlights how EU regulators are increasingly scrutinizing tech companies' use of personal data for AI training, with potential financial penalties of up to €20 million or 4% of annual revenue for GDPR violations. The big picture: Ireland's Data Protection Commission has launched an investigation into how Elon Musk's X platform processes European users' public posts to train its Grok AI system. The inquiry specifically aims...
Apr 17, 2025
Google must divest Chrome but can keep AI assets, DOJ rules
Google's ongoing antitrust battle has reached a pivotal moment as the Justice Department surprisingly drops its effort to force the tech giant to divest its AI investments while maintaining pressure on Google to sell Chrome. This shift highlights the DOJ's evolving strategy in addressing Google's market dominance while acknowledging the potential negative consequences of disrupting the rapidly developing AI landscape, especially as Google has invested heavily in AI startups like Anthropic. The big picture: The Department of Justice has abandoned its earlier position that Google should divest from AI companies, citing potential "unintended consequences in the evolving AI space." Google's...
Apr 17, 2025
AI boosts accuracy in predicting European Central Bank decisions, study finds
A new study reveals that artificial intelligence can significantly improve forecasting accuracy for European Central Bank policy decisions, providing a technological edge in predicting monetary moves. By analyzing ECB communications through specialized text analysis, researchers have developed a model that extracts valuable signals from central bank language, demonstrating how AI can decode the carefully crafted messaging that shapes financial markets. The big picture: AI text analysis models can boost the accuracy of ECB interest rate predictions from 70% to 80%, according to research from the German Institute for Economic Research (DIW Berlin). How it works: Researchers created a specialized AI...
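The study's actual model isn't detailed in this summary. As a toy illustration of the general technique of extracting policy signals from central bank language (a dictionary-based hawkish/dovish scorer; the word lists and scoring rule are invented and are not DIW Berlin's method):

```python
# Toy dictionary-based tone scorer for central bank text.
# The word lists and scoring rule are illustrative inventions,
# NOT the DIW Berlin model described in the article.

HAWKISH = {"tighten", "tightening", "inflation", "hike", "restrictive", "vigilant"}
DOVISH = {"accommodative", "easing", "stimulus", "cut", "downside", "support"}

def tone_score(text: str) -> float:
    """Return a score in [-1, 1]: positive leans hawkish, negative dovish."""
    words = [w.strip(".,;:()").lower() for w in text.split()]
    hawk = sum(w in HAWKISH for w in words)
    dove = sum(w in DOVISH for w in words)
    total = hawk + dove
    return 0.0 if total == 0 else (hawk - dove) / total

statement = ("The Governing Council will remain vigilant as inflation "
             "pressures persist and may tighten policy further.")
print(tone_score(statement))  # → 1.0 (fully hawkish by this toy dictionary)
```

Real models of this kind typically replace the hand-built dictionary with learned text features, but the pipeline shape (statement text in, a numeric policy signal out, fed into a rate-decision forecast) is the same.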
Apr 16, 2025
EU investigates Grok AI for potential GDPR breaches
The EU's privacy regulator has launched an investigation into X's AI training practices, potentially setting precedent for how publicly available data can be used to train AI systems across Europe. The inquiry focuses on whether X is using public posts to train its Grok AI model without proper consent, despite previous agreements limiting European data use for AI training. This case could have far-reaching implications for AI development in Europe and beyond, as it may establish whether public data requires explicit user consent for training purposes. The big picture: Ireland's Data Protection Commission has opened a formal privacy inquiry into...
Apr 15, 2025
Under Pressure: OpenAI cuts safety testing from months to days amid fierce competition
OpenAI's rapid acceleration of safety testing timelines signals a concerning shift in the AI industry's approach to responsible development. What was once a months-long evaluation process has shrunk to mere days, potentially compromising the thoroughness needed to identify and mitigate harmful AI capabilities before deployment. This transformation highlights the growing tension between competitive market pressures and responsible AI safeguards in an environment where regulatory frameworks remain incomplete. The big picture: OpenAI has dramatically compressed its safety testing timeline from months to days, according to eight staff members and third-party testers who spoke to the Financial Times. The evaluators report having...
Apr 14, 2025
Report: Government’s AI adoption gap threatens US national security
The growing gap between private sector AI innovation and government adoption threatens US capacity to manage future AI-driven risks and challenges. As advanced artificial intelligence becomes increasingly capable, the federal government's inability to effectively utilize these technologies could undermine its ability to safeguard democratic institutions and respond to existential threats. This technological divergence creates urgent national security concerns that require both immediate adoption strategies and contingency planning. The big picture: The US federal government significantly lags behind private industry in AI adoption, with private-sector job listings four times more likely to be AI-related and government AI use heavily concentrated in...
Apr 13, 2025
EU launches survey to expand AI literacy repository for AI Act compliance
The European Union is actively expanding its AI literacy repository as part of implementing the AI Act's Article 4 requirements. This initiative represents a significant step in the EU's broader strategy to promote responsible AI development and deployment across the continent, providing organizations with practical examples and resources to enhance AI literacy among professionals and the public alike. The big picture: The EU AI Office has launched a new survey to expand its living repository of AI literacy practices, building upon initial contributions from AI Pact organizations. The repository currently contains more than 20 practices and aims to encourage learning...