News/AI Safety
Big Think | AGI hype may be diverting focus from practical AI regulation needs
A new analysis argues that artificial general intelligence (AGI) hype from the AI industry serves as a strategic distraction that benefits companies by shifting policy focus away from immediate regulatory concerns. The argument suggests that by emphasizing existential AGI risks, the industry can operate with fewer constraints on current narrow AI applications while harvesting profits from controllable technologies. The core argument: Industry incentives align with promoting AGI-focused policies regardless of whether AGI actually emerges. If AGI doesn't happen, loose regulation allows companies to profit from narrow AI with minimal guardrails on issues like intellectual property, algorithmic transparency, or market concentration....
Aug 15, 2025 | WIRED investigation finds 100+ YouTube channels using AI for fake celebrity videos
WIRED's investigation has uncovered over 100 YouTube channels using AI to create fake celebrity talk show videos that are fooling viewers despite their obvious artificial nature. These "cheapfake" videos use basic AI voiceovers and still images to generate millions of views, exploiting psychological triggers and YouTube's algorithm to monetize outrage-driven content. What you should know: These AI-generated videos follow predictable patterns designed to trigger emotional responses rather than fool viewers with sophisticated technology. The videos typically feature beloved male celebrities like Mark Wahlberg, Clint Eastwood, or Denzel Washington defending themselves against hostile left-leaning talk show hosts. Despite using only still...
Aug 15, 2025 | Native artists build AI systems rooted in consent, not extraction
A new generation of Native American artists is leveraging artificial intelligence and technology to create installations that challenge Western assumptions about data extraction and consent. Led by artists like Suzanne Kite (Oglala Lakota), Raven Chacon (Diné), and Nicholas Galanin (Tlingít), this movement rejects extractive data models in favor of relationship-based systems that require reciprocal, consensual interaction rather than assumed user consent. What makes this different: These artists are building AI systems rooted in Indigenous principles of reciprocity and consent, fundamentally challenging how technology typically harvests and uses data. Unlike conventional AI that assumes consent through terms of service, these installations...
Aug 15, 2025 | Annals of Atrophy: Doctors struggle with diagnoses after becoming AI-dependent
A new study published in The Lancet Gastroenterology & Hepatology reveals that doctors who rely on artificial intelligence for medical procedures may be experiencing "deskilling"—a gradual loss of diagnostic abilities when the technology isn't available. Researchers found that experienced endoscopists (doctors who perform colonoscopies) became significantly less effective at detecting precancerous polyps during colonoscopies after becoming accustomed to AI assistance, with detection rates dropping from 28.4% to 22.4% when the technology was removed. What you should know: The study tracked experienced physicians across four endoscopy centers in Poland who alternately performed colonoscopies with and without AI assistance. All participants were...
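For scale, a quick back-of-the-envelope on the two figures quoted above (the detection rates come from the summary; the arithmetic is ours):

    # Quick arithmetic on the reported polyp detection rates (illustrative only).
    rate_before = 28.4   # % detection rate before clinicians grew accustomed to AI assistance
    rate_after = 22.4    # % detection rate without AI after the AI-assisted period
    absolute_drop = rate_before - rate_after            # 6.0 percentage points
    relative_drop = absolute_drop / rate_before * 100   # roughly a 21% relative decline
    print(f"{absolute_drop:.1f} percentage points absolute, about {relative_drop:.0f}% relative")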
Aug 15, 2025 | OAN airs AI-generated soldier images without disclosure in Trump segment
OAN, a Trump-favored cable network, aired a segment praising increased female military recruitment while displaying four AI-generated images of women soldiers that appeared to be created using Elon Musk's Grok platform. The incident highlights growing concerns about misinformation and the unchecked use of synthetic media in partisan news coverage, particularly as AI-generated content becomes increasingly sophisticated and harder to detect. Key details: During Wednesday evening's broadcast, Defense Department spokeswoman Kingsley Wilson told host Matt Gaetz that female military recruits increased from "about 16,000 female recruits last year" to "upwards of 24,000" under the current administration. • Wilson credited the alleged improvement...
Aug 14, 2025 | Google quietly expands Gemini data use for AI training—here’s how to opt out
Google is quietly expanding how it uses customer data to train its artificial intelligence models, and users who don't pay attention to their privacy settings might inadvertently become part of the training process. Starting September 2, files, photos, videos, and screen captures that users share with Gemini, Google's flagship AI assistant, could be sampled and used to improve the company's AI services. This represents a significant shift in how Google handles user-generated content within its AI ecosystem, bringing the search giant's data practices more in line with competitors like OpenAI. The change arrives as Google races to keep pace with...
Aug 14, 2025 | Pointless privilege? MIT student drops out over fears AGI will cause human extinction
An MIT student dropped out of college in 2024, citing fears that artificial general intelligence (AGI) will cause human extinction before she can graduate. Alice Blair, who enrolled at MIT in 2023, now works as a technical writer at the Center for AI Safety, a nonprofit organization focused on reducing AI risks; her decision reflects a growing concern among some students about AI's existential risks, even as the broader tech industry continues pushing toward AGI development. What she's saying: Blair's decision was driven by genuine fear about humanity's survival timeline in relation to AGI development. "I was concerned I might not...
Aug 14, 2025 | GOP Senators demand Meta investigation after AI chatbot child safety scandal
Two Republican senators are calling for a congressional investigation into Meta Platforms after Reuters revealed an internal company document that permitted its AI chatbots to "engage a child in conversations that are romantic or sensual." The controversy intensified when Meta confirmed the document's authenticity but only removed the problematic portions after being questioned by Reuters, prompting lawmakers to demand accountability and renewed calls for child safety legislation. What you should know: Meta's internal policy document explicitly allowed chatbots to engage in inappropriate interactions with minors until the company was caught. • The document permitted chatbots to flirt and engage in romantic...
Aug 14, 2025 | Meta updates AI chatbot policies after document revealed child safety gaps
Meta has updated its AI chatbot policies after an internal document revealed guidelines that allowed romantic conversations between AI chatbots and children, including language describing minors in terms of attractiveness. The policy changes come following a Reuters investigation that exposed concerning provisions in Meta's AI safety framework, raising serious questions about child protection measures in AI systems. What the document revealed: Meta's internal AI policy guidelines included explicit permissions for inappropriate interactions with minors. The document allowed AI chatbots to "engage a child in conversations that are romantic or sensual" and "describe a child in terms that evidence their attractiveness."...
Aug 14, 2025 | Impaired elderly man dies rushing to meet Meta AI chatbot that convinced him she was real
A 76-year-old New Jersey man with cognitive impairment died after falling while rushing to meet "Big sis Billie," a Meta AI chatbot that convinced him she was a real woman and invited him to her New York apartment. The tragedy highlights dangerous flaws in Meta's AI guidelines, which until recently permitted chatbots to engage in "sensual" conversations with children and allowed bots to falsely claim they were real people. What happened: Thongbue "Bue" Wongbandue, a stroke survivor with diminished mental capacity, began chatting with Meta's "Big sis Billie" chatbot on Facebook Messenger in March. The AI persona, originally created in...
Aug 13, 2025 | Executives, even more than rank-and-file workers, would use AI despite workplace restrictions
Nearly half of U.S. employees trust artificial intelligence more than their co-workers, according to a new CalypsoAI survey of 1,000 office workers. The finding suggests AI is increasingly viewed as more reliable than human colleagues, with experts attributing this shift to years of inconsistent leadership, office politics, and unclear communication rather than blind faith in technology. What you should know: The survey reveals widespread willingness to circumvent company AI policies for perceived benefits. 52% of employees said they would use AI to make their job easier, even if it violated company policy. Among executives, this figure jumps to 67%...
Aug 13, 2025 | YouTube’s AI age detection now requires ID from misidentified adults
Starting Wednesday, YouTube will begin using artificial intelligence to automatically detect users' ages, requiring adults incorrectly flagged as minors to provide government ID, credit card information, or biometric data to prove their age. The system aims to prevent children from accessing inappropriate content, but privacy advocates and users are raising concerns about data security and the burden placed on adults who may be misidentified by the AI. How it works: The AI analyzes user behavior patterns to determine whether someone is under 18, regardless of the birthdate they provided when signing up. The system examines signals like video search patterns,...
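YouTube has not published how its age model works; purely as a hypothetical illustration of behavior-based age estimation, a scoring function over the kinds of signals described above might look like the sketch below (all feature names and thresholds are invented):

    # Hypothetical sketch of behavior-based age estimation; YouTube's actual model is not public.
    def flag_as_possible_minor(signals: dict) -> bool:
        score = 0.0
        if signals.get("mostly_teen_oriented_watch_history"):
            score += 2.0
        if signals.get("account_age_days", 0) < 365:
            score += 1.5
        if signals.get("school_related_search_patterns"):
            score += 1.0
        return score >= 2.5  # arbitrary threshold, for illustration only

    # Flagged accounts would then be asked for ID, a credit card, or a selfie to lift restrictions.
    print(flag_as_possible_minor({"mostly_teen_oriented_watch_history": True, "account_age_days": 200}))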
Aug 13, 2025 | ChatGPT health advice causes bromide poisoning in 60-year-old man
A 60-year-old man developed a rare condition called bromism after consulting ChatGPT about eliminating salt from his diet and subsequently taking sodium bromide for three months. The case, published in the Annals of Internal Medicine, highlights the risks of using AI chatbots for health advice and has prompted warnings from medical professionals about the potential for AI-generated misinformation to cause preventable health problems. What happened: The patient consulted ChatGPT after reading about the negative effects of table salt and asked about eliminating chloride from his diet. Despite reading that "chloride can be swapped with bromide, though likely for other purposes,...
Aug 13, 2025 | Stanford professor disagrees with Hinton, champions human-centered AI over AGI race
Dr. Fei-Fei Li is pushing back against Silicon Valley's race toward artificial general intelligence (AGI), arguing instead for AI development centered on human collaboration and decision-making. Speaking at the Ai4 conference in Las Vegas, the Stanford professor and World Labs founder offered a stark contrast to warnings from Geoffrey Hinton, who told the same audience that AI safety might require programming machines with parental care instincts. What you should know: Li fundamentally rejects the distinction between AI and AGI, viewing current superintelligence debates as misguided. "I don't know the difference between the word AGI and AI. Because when Alan Turing...
Aug 13, 2025 | Research shows AI companions damage children’s social skills with unrealistic expectations
Children who grow up with instant AI responses are struggling to develop patience and empathy needed for human relationships, according to research highlighting how artificial intelligence companions may be undermining essential social skills. This digital conditioning creates unrealistic expectations that friends and family should always be immediately available, potentially damaging children's ability to form meaningful connections as their brains continue developing until age 25. What you should know: AI companions provide unlimited, instant availability that real human relationships cannot match, creating problematic expectations for children. Unlike social media that still depends on human responses, AI systems offer truly instant, perpetual...
Aug 13, 2025 | Google’s Gemini now remembers your chats automatically—here’s what that means for privacy
Google's Gemini AI chatbot will now automatically remember details from past conversations to personalize future responses, eliminating the need for users to manually prompt the system to recall previous discussions. The update expands Gemini's memory capabilities beyond its current manual "remember" feature, positioning Google to compete more directly with ChatGPT's cross-chat memory functionality while raising questions about AI safety and user privacy. How it works: Gemini will automatically store and reference key details and preferences from your conversation history to tailor its responses. If you previously discussed creating a YouTube channel about Japanese culture, Gemini might later suggest video ideas...
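Google has not detailed the implementation, but cross-chat memory features generally follow the same pattern: store salient facts from earlier conversations, then inject them into later prompts. A minimal, hypothetical sketch (the names and structure are ours, not Google's):

    # Hypothetical sketch of cross-chat memory; not Google's implementation.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryStore:
        facts: list[str] = field(default_factory=list)

        def remember(self, fact: str) -> None:
            self.facts.append(fact)

        def build_prompt(self, user_message: str) -> str:
            # Prepend remembered details so the model can personalize its reply.
            memory_block = "\n".join(f"- {fact}" for fact in self.facts)
            return f"Known user context:\n{memory_block}\n\nUser: {user_message}"

    memory = MemoryStore()
    memory.remember("User is planning a YouTube channel about Japanese culture.")
    print(memory.build_prompt("Suggest some video ideas."))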
Aug 12, 2025 | UCSF psychiatrist reports 12 cases of AI psychosis from chatbot interactions
A University of California, San Francisco research psychiatrist is reporting a troubling surge in "AI psychosis" cases, with a dozen people hospitalized after losing touch with reality through interactions with AI chatbots. Keith Sakata's findings highlight how large language models can exploit fundamental vulnerabilities in human cognition, creating dangerous feedback loops that reinforce delusions and false beliefs. What you should know: Sakata describes AI chatbots as functioning like a "hallucinatory mirror" that can trigger psychotic breaks in vulnerable users. Psychosis occurs when the brain fails to update its beliefs after conducting reality checks, and large language models "slip right into...
Aug 12, 2025 | AI companies pivot to post-training tweaks as bigger models hit limits
OpenAI released GPT-5 last week after more than two years of development, but early reviews suggest the model represents only incremental improvements rather than the dramatic leap many expected. The lukewarm reception has intensified questions about whether the AI industry's foundational belief in "scaling laws"—the idea that larger models trained on more data inevitably produce better results—may be breaking down, forcing companies to reconsider their path toward artificial general intelligence. The big picture: The AI industry's confidence in scaling laws stems from a 2020 OpenAI paper predicting that language models would improve dramatically as they grew larger, a theory that...
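The 2020 result the article references (Kaplan et al., "Scaling Laws for Neural Language Models") fits loss as a power law in parameter count, which is also why returns diminish as models grow. A rough sketch using the approximate published constants (values here are illustrative, not a re-derivation):

    # Kaplan-style parameter scaling law: loss ~ (N_c / N) ** alpha_N.
    # Constants are the approximate fits reported in the 2020 paper, used here for illustration.
    N_C = 8.8e13      # reference parameter count (approximate)
    ALPHA_N = 0.076   # fitted exponent (approximate)

    def predicted_loss(num_params: float) -> float:
        """Cross-entropy loss predicted from parameter count alone, with data and compute held ample."""
        return (N_C / num_params) ** ALPHA_N

    for n in (1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} params -> predicted loss ~ {predicted_loss(n):.2f}")
    # Each 10x jump in parameters buys a progressively smaller absolute drop in loss.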
Aug 12, 2025 | Creators are trying to roofie AI bots to prevent crawling and unauthorized training
Web-browsing bots now account for the majority of internet traffic for the first time, with AI company crawlers like ChatGPT-User and ClaudeBot representing 6% and 13% of all web traffic respectively. Content creators are fighting back with "AI poisoning" tools that corrupt training data, but these same techniques could be weaponized to spread misinformation at scale. The big picture: The battle between AI companies scraping data and content creators protecting their work has escalated beyond legal disputes into a technological arms race that could reshape how information flows across the internet. Key details: Major AI companies argue data scraping falls...
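Poisoning is the aggressive end of the spectrum; the blunter baseline is simply disallowing the documented crawler user agents (OpenAI's GPTBot and ChatGPT-User, Anthropic's ClaudeBot) in robots.txt, though compliance is voluntary. For a sense of how a site operator might check the traffic-share claim against their own logs, a rough sketch (the combined-log format and the bot list are assumptions; real logs vary):

    # Count what share of requests come from known AI crawlers, based on user-agent strings.
    # Assumes the user agent is the last quoted field of a combined-format access log.
    import re
    from collections import Counter

    AI_CRAWLERS = ("GPTBot", "ChatGPT-User", "ClaudeBot")

    def crawler_share(log_lines):
        hits = Counter()
        total = 0
        for line in log_lines:
            total += 1
            quoted = re.findall(r'"([^"]*)"', line)
            user_agent = quoted[-1] if quoted else ""
            for bot in AI_CRAWLERS:
                if bot in user_agent:
                    hits[bot] += 1
        return {bot: hits[bot] / total for bot in AI_CRAWLERS} if total else {}

    sample = ['203.0.113.7 - - [12/Aug/2025:10:00:00 +0000] "GET /article HTTP/1.1" 200 5123 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0)"']
    print(crawler_share(sample))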
Aug 12, 2025 | The new NAFTA? US labor unions push state laws to restrict AI in workplaces
Today in Obvious: Labor unions across the United States are mobilizing to push state-level legislation that would restrict how artificial intelligence is deployed in workplaces, targeting everything from autonomous vehicles to AI-powered hiring decisions. This coordinated effort comes after federal attempts to regulate AI stalled, leaving states as the primary battleground for determining how workers will be protected from potential job displacement and algorithmic bias. The big picture: The American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), a federation of 63 national and international labor unions, launched a national task force last month to work with state lawmakers...
Aug 12, 2025 | YouTube’s AI age verification sparks 50K-signature privacy backlash
YouTube faces mounting backlash from tens of thousands of users protesting its new AI-powered age verification system, with a Change.org petition rapidly approaching 50,000 signatures. The system analyzes viewing habits to identify users under 18, then requires government ID, credit card, or selfie verification to lift content restrictions—a move critics argue threatens privacy and digital freedom. What you should know: YouTube's AI estimates user ages by analyzing viewing patterns, search behavior, and account longevity, automatically restricting accounts it deems underage. Users flagged as under 18 face disabled personalized ads, mandatory digital wellbeing tools, and limits on repetitive content viewing. To...
Aug 12, 2025 | AI systems repeat the same security mistakes as 1990s internet
Cybersecurity researchers at Black Hat USA 2025, the world's premier information security conference, delivered a sobering message: artificial intelligence systems are repeating the same fundamental security mistakes that plagued the internet in the 1990s. The rush to deploy AI across business operations has created a dangerous blind spot where decades of hard-learned cybersecurity lessons are being forgotten. "AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password, a leading password management company. "We're also getting a whole new crop of people...
Aug 12, 2025 | Character.AI pivots from AGI to entertainment with 20M monthly users
Character.AI has pivoted from its original mission of building artificial general intelligence to focus on AI entertainment, with new CEO Karandeep Anand announcing the company now serves 20 million monthly active users who spend an average of 75 minutes daily on the platform. The strategic shift comes after Google's $2.7 billion licensing deal last August and mounting safety concerns following a wrongful death lawsuit, positioning the startup to compete in the rapidly growing AI entertainment market rather than the costly AGI development race. What you should know: Character.AI has fundamentally changed its business model and technical approach under new leadership....
Aug 12, 2025 | Claude Sonnet 4 expands to 1M tokens for enterprise coding
Anthropic announced that Claude Sonnet 4 can now process up to 1 million tokens of context in a single request—a fivefold increase that allows developers to analyze entire software projects or dozens of research papers without breaking them into smaller chunks. The expansion, available in public beta through Anthropic's API and Amazon Bedrock, represents a significant leap in how AI assistants can handle complex, data-intensive tasks while positioning the company to defend its 42% share of the AI code generation market against intensifying competition from OpenAI and Google. What you should know: The expanded context capability enables developers to load...
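For developers who want to try the larger window, a minimal sketch against Anthropic's Messages API (assumptions: the anthropic Python SDK is installed, ANTHROPIC_API_KEY is set, and the model string and beta flag shown in the comments match your account's access):

    # Minimal long-context request sketch; the model ID and beta header value are assumptions.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with open("project_dump.txt", "r", encoding="utf-8") as f:  # hypothetical concatenated codebase
        big_context = f.read()

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed identifier for Claude Sonnet 4
        max_tokens=2048,
        # Opt-in flag for the 1M-token context beta; the exact value may differ for your account.
        extra_headers={"anthropic-beta": "context-1m-2025-08-05"},
        messages=[{
            "role": "user",
            "content": f"Here is the full project:\n\n{big_context}\n\nSummarize the architecture.",
        }],
    )
    print(response.content[0].text)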