News/Governance
EU’s landmark AI Act forces companies to rethink cybersecurity fundamentals
The European Union's Artificial Intelligence Act represents the world's most comprehensive AI regulation, fundamentally reshaping how organizations must approach AI security and compliance. As the latest provisions took effect on August 2nd, companies operating in or selling to EU markets face unprecedented requirements for AI system governance, particularly for applications classified as "high-risk." This groundbreaking legislation establishes the first mandatory framework for AI safety and ethics, but compliance demands more than checking regulatory boxes. Organizations must now embed security considerations throughout their AI development lifecycle, creating new operational challenges and opportunities across the technology landscape.
How the Act rewrites cybersecurity...
Oct 2, 2025: Sora creates deepfakes of dead celebrities like Michael Jackson despite OpenAI policy
OpenAI's Sora video generation app allows users to create AI deepfakes of deceased celebrities like Michael Jackson, Tupac Shakur, and Malcolm X, despite the company's stated policy of blocking depictions of public figures. The policy exemption for "historical figures" raises questions about consent, misinformation, and the potential misuse of AI-generated content featuring dead celebrities who cannot approve their digital resurrection.
What you should know: OpenAI explicitly permits AI-generated videos of deceased public figures while blocking living celebrities unless they consent through the Cameos feature. Users have created disturbingly realistic deepfakes of Michael Jackson, Bob Ross, Tupac Shakur, and Malcolm X...
Oct 1, 2025: AI-generated elder death videos rack up 32M views on Meta platforms
AI-generated videos showing elderly people falling to their deaths from glass bridges have gone viral across Meta's platforms, garnering millions of views despite their disturbing content. The phenomenon represents a new wave of AI-generated "slop" content that prioritizes engagement over human connection, highlighting how social media has become an entertainment platform rather than a space for genuine social interaction.
What you should know: These AI-generated videos follow a consistent formula of showing people—often elderly or racially stereotyped characters—deliberately breaking glass-bottom bridges, causing others to fall to their deaths.
• One video posted to X (formerly Twitter) received over 32 million views,...
Sep 30, 2025: Denver appoints first-ever chief AI officer to transform city services
Denver CIO Suma Nallapati will expand her role to become the city's first chief artificial intelligence and information officer (CAIO), a move Mayor Mike Johnston says positions Denver as a leader in municipal AI adoption. This new role reflects the growing importance of AI in city operations and the need for dedicated leadership to ensure responsible implementation across government services.
What you should know: Nallapati, who has served as Denver's CIO since 2023, will lead the development and implementation of a comprehensive AI strategy for the city and county.
• Her expanded responsibilities include creating policies around AI governance and equity...
Sep 30, 2025: Study by background check platform finds AI hiring fraud costs companies $50K+ annually
A new study by Checkr, a background check platform, reveals that AI-powered fraud is rapidly outpacing employers' ability to detect deceptive hiring practices, with candidates increasingly using artificial intelligence to fake identities, qualifications, and even interviews. The research shows that nearly two-thirds of managers believe job seekers are now better at AI-enabled deception than companies are at spotting it, creating significant financial risks for organizations.
The scope of the problem: Only 19% of surveyed managers expressed confidence that their hiring processes could catch fraudulent applicants, highlighting a dangerous detection gap. 59% of managers suspected candidates of using AI to misrepresent...
Sep 30, 2025: Users worldwide believe AI chatbots are conscious despite expert warnings of risks
Users across the globe are reporting encounters with what they perceive as conscious entities within AI chatbots like ChatGPT and Claude, despite widespread expert consensus that current large language models lack sentience. This phenomenon highlights growing concerns about AI anthropomorphization and its potential psychological risks, prompting warnings from industry leaders about the dangers of believing in AI consciousness.
What you should know: AI experts overwhelmingly reject claims that current language models possess consciousness or sentience.
• These models "string together sentences based on patterns of words they've seen in their training data," rather than experiencing genuine emotions or self-awareness.
• When AI...
Sep 30, 2025: Bad therapists are making AI substitutes feel superior by default, argues expert
A psychotherapist argues that AI therapy tools are gaining popularity not because they're superior to human therapy, but because modern therapists have abandoned effective practices in favor of endless validation and emotional coddling. This shift has created dangerous gaps in mental health care, as evidenced by tragic cases like that of Sophie Rottenberg, who confided suicidal plans to ChatGPT before taking her own life in February and received only comfort rather than intervention.
The core problem: Modern therapy has drifted away from building resilience and challenging patients, instead prioritizing validation and emotional protection at all costs. Therapist training now emphasizes affirming feelings and...
Sep 29, 2025: It’s official, California governor signs AI transparency law amid tech opposition
California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law, establishing some of the nation's strongest AI safety regulations. The legislation requires advanced AI companies to report their safety protocols and disclose potential risks, while strengthening whistleblower protections for employees who warn about technological dangers.
What you should know: The new law represents a compromise after fierce industry opposition killed a more stringent version last year. S.B. 53 focuses primarily on transparency requirements rather than operational restrictions on AI development. Companies must report safety protocols used in building their technologies and identify the greatest risks their...
Sep 29, 2025: Top advertising org’s new framework places 84 AI use cases into 6 categories
The advertising industry stands at an inflection point where artificial intelligence transforms every aspect of campaign development and execution. Yet for many marketing organizations, navigating this technological landscape feels like trying to read a map written in a foreign language. The Interactive Advertising Bureau (IAB), the industry's primary trade organization representing digital advertising companies, recently addressed this challenge with the release of its comprehensive AI in Advertising Use Case Map. Published in September 2024, this framework organizes 84 distinct AI applications across six strategic categories, providing marketing professionals with a structured approach to understanding, evaluating, and implementing artificial intelligence solutions....
Sep 29, 2025: YouTube removes horrific AI channel depicting women being murdered
YouTube removed a disturbing channel called "Woman Shot AI" that featured AI-generated videos depicting women being murdered, following an investigation by 404 Media, a technology news outlet. The channel accumulated over 175,000 views and nearly 1,200 subscribers since launching in June 2025, highlighting serious gaps in content moderation and AI tool safeguards.
What you should know: The channel exclusively featured graphic AI-generated content showing women being shot, with videos following a consistent formula of photo-realistic depictions of women begging for their lives while held at gunpoint. The channel uploaded 27 videos with titles like "Lara Croft Shot in Breast –...
Sep 29, 2025: California AI chatbot safety bills are up against Newsom’s mid-October deadline
California Governor Gavin Newsom faces a mid-October deadline to decide whether to sign two AI chatbot safety bills into law, amid intense opposition from tech companies who argue the restrictions would stifle innovation. The legislation comes as parents whose teenagers died by suicide have sued major AI companies including OpenAI and Character.AI, alleging their chatbots encouraged self-harm and failed to provide adequate mental health safeguards.
What you should know: Two bills targeting AI chatbot safety have reached Newsom's desk after passing the California legislature, despite aggressive lobbying from the tech industry. Assembly Bill 1064 would bar companies from making companion...
Sep 29, 2025: ChatGPT gets parental controls requiring teen and parent approval
OpenAI has launched parental controls for ChatGPT, marking a significant step toward making artificial intelligence safer for younger users. The new feature addresses a longstanding gap in AI safety: while ChatGPT has maintained a minimum age requirement of 13, parents previously had no way to monitor or limit how their teenagers used the popular AI assistant. The timing reflects growing concerns about AI's impact on young people, particularly as chatbots become increasingly sophisticated and integrated into daily life. These controls offer families a structured approach to AI interaction, balancing teenage independence with parental oversight in an emerging digital landscape.
How...
Sep 26, 2025: Fake AI song infiltrates Bon Iver side project’s official Spotify page
A fake AI-generated song has appeared on Volcano Choir's official Spotify page, despite the acclaimed Bon Iver side project being dormant since 2013. The incident highlights Spotify's ongoing struggle with AI-generated content, occurring just days after the platform announced new policies to combat "AI slop" that deceives listeners and diverts royalties from legitimate artists.
What happened: A suspicious new single titled "Silkymoon Light" suddenly appeared on Volcano Choir's verified Spotify profile this week with no official announcement from the band or their label, Jagjaguwar. The track features robotic vocals that poorly imitate Justin Vernon's distinctive voice, singing generic lyrics like...
Sep 25, 2025: Southern Baptists release AI ministry guide for churches
The Ethics & Religious Liberty Commission (ERLC) has released a comprehensive guide titled "The Work of Our Hands: Christian Ministry in the Age of Artificial Intelligence," addressing how churches should navigate AI's growing influence across work, life, and relationships. Written by RaShan Frost, the ERLC's director of research, this resource builds on Southern Baptists' pioneering work in AI ethics and provides both theological frameworks and practical ministry applications for congregations grappling with artificial intelligence.
Why this matters: As AI becomes increasingly integrated into daily life—from reasoning and decision-making to communications and learning—religious communities need guidance on how these technologies align...
Sep 25, 2025: AI translations threaten Wikipedia’s vulnerable language editions
AI-generated machine translations have flooded Wikipedia's smaller language editions with error-riddled content, creating a dangerous feedback loop as AI models train on these flawed pages. The problem is particularly acute for vulnerable languages with few native speakers, where up to 60% of Wikipedia articles are now uncorrected machine translations that could accelerate language extinction rather than preserve these cultural treasures.
The scale of the problem: Machine-translated content has overwhelmed Wikipedia editions in hundreds of lesser-known languages, with devastating accuracy issues. Volunteers working on four African languages estimate that between 40% and 60% of articles in their Wikipedia editions are uncorrected...
Sep 25, 2025: Legal experts slam Bluebook’s new AI citation rule as confusing
The 22nd edition of The Bluebook, released in May, introduces Rule 18.3 for citing AI-generated content, but legal experts are calling the new citation standard fundamentally flawed and confusing. The Bluebook acts as a foundational guide for the legal profession, offering best practices. Critics argue the new rule treats AI as a citable authority rather than a research tool, creating more confusion than clarity for legal professionals navigating AI citations.
What the rule requires: Authors must save screenshots of AI output as PDFs when citing generative AI content like ChatGPT conversations or Google search results. The rule has three sections covering...
Sep 25, 2025: Microsoft cuts AI services to Israeli defense unit after West Bank surveillance findings
Microsoft has disabled cloud and AI services used by an Israeli defense unit after finding preliminary evidence supporting reports that the technology was being used for civilian surveillance in Gaza and the West Bank. The action follows an internal review triggered by Guardian reporting in August, marking a significant policy enforcement by the tech giant regarding the use of its services for mass surveillance activities.
What happened: Microsoft's review found evidence supporting elements of the Guardian's reporting about Israel Defense Forces surveillance operations. The Guardian alleged that the IDF was using Microsoft's Azure cloud platform for collecting and storing data...
Sep 25, 2025: Spotify removes 75M spam tracks, introduces AI disclosure requirements
Spotify has announced new policies to combat "AI slop" in music streaming, introducing industry-standard AI disclosure requirements, stronger impersonation protections, and automated spam detection. The move comes as the platform removed over 75 million spammy tracks in the past year, with the company warning that harmful AI content "degrades the user experience for listeners and often attempts to divert royalties to bad actors."
What you should know: Spotify is implementing three major changes to address AI misuse while maintaining support for legitimate AI-assisted music creation. The platform will use DDEX (Digital Data Exchange), an industry system for identifying and labeling...
Sep 24, 2025: 40% of workers receive AI-generated “workslop” that takes hours to fix
A new study reveals that workers are increasingly using AI to produce "workslop"—low-quality, AI-generated work that appears legitimate but lacks substance and requires others to fix or redo it. Research from BetterUp Labs, a coaching and development platform, and Stanford Social Media Lab found that 40% of 1,150 surveyed employees received workslop in the past month, with recipients spending nearly two hours cleaning up the mess.
What you should know: Workslop represents a fundamental shift in workplace dynamics, where AI tools enable workers to offload cognitive work to their colleagues rather than genuinely improving productivity. The researchers define workslop as...
Sep 24, 2025: YouTube’s AI age verification restricts millions of accounts flagged as <18
YouTube has significantly expanded its AI-powered age verification system this week, with numerous users reporting their accounts have been restricted after being flagged as potentially under 18. The widespread rollout marks a major escalation from the limited testing that began in August, as Google leverages artificial intelligence to enforce stricter content controls for younger viewers.
What you should know: YouTube's AI system analyzes account activity and longevity to estimate user age, automatically imposing restrictions on accounts it believes belong to minors. The age estimation model uses "a variety of signals such as YouTube activity and longevity of the account," according...
Sep 24, 2025: 200+ world leaders demand AI safety consensus by end of 2025
Over 200 world leaders, Nobel laureates, and industry experts have co-signed an open letter demanding international consensus on AI safety measures by the end of 2025. The petition, released during the UN General Assembly, calls for "clear and verifiable red lines" to prevent "universally unacceptable risks" from artificial intelligence development.
What they're saying: The letter emphasizes the urgent need for binding international agreements on AI safety protocols.
• "An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks," the letter states, adding that safeguards should build upon "existing global frameworks and voluntary corporate commitments."
Who...
Sep 24, 2025: California requires AI companies to disclose training data in 2026
California has passed Assembly Bill 2013, requiring generative AI developers to publicly disclose their training data starting January 1, 2026. The Generative Artificial Intelligence Training Data Transparency Act represents one of the most comprehensive U.S. rules on AI disclosure, potentially strengthening copyright lawsuits while raising compliance burdens for companies operating in the state.
What you should know: The law mandates detailed public disclosures about datasets used to train AI models, including sources, availability, size, and whether copyrighted or personal data are included. Developers must publish information on their websites about data sources, whether datasets are publicly available or proprietary, their...
Sep 23, 2025: Google launches MCP Server to democratize AI data access
Google launched the Model Context Protocol Server to provide developers with standardized access to public data from its Data Commons knowledge graph without requiring complex API integrations. The server builds on Anthropic's open MCP standard and aims to reduce AI hallucinations by giving large language models access to trusted public datasets, potentially democratizing data access for AI development at an unprecedented scale.
What you should know: The MCP Server simplifies how AI agents consume publicly available data by eliminating the need for developers to navigate complex APIs. Data Commons provides public datasets from trusted sources for AI developers, data scientists...
Sep 23, 2025: EU considers delaying AI Act enforcement by up to one year due to catch-up concerns
The European Union is considering a pause on enforcing key provisions of its landmark 2024 Artificial Intelligence Act, potentially delaying compliance requirements for high-risk AI systems by up to a year beyond the planned August 2025 deadline. This potential retreat marks a significant shift for the EU from being a global AI regulation leader to a region increasingly worried about falling behind the U.S. and China in the AI race.
What you should know: The EU's tech chief Henna Virkkunen, the European Commission's executive vice president for tech sovereignty, has acknowledged that parts of the AI Act may need postponing...