News/Chatbots

Aug 28, 2025

57% of Canadians now use AI tools despite mixed views on impact, with men more enthusiastic

A new poll from Leger, a market research company, reveals that Canadians are nearly evenly split on artificial intelligence's societal impact, with 36% viewing it as harmful and 34% considering it beneficial. The survey tracks AI sentiment across provinces and age groups, showing that while AI tool usage has surged from 25% in February 2023 to 57% in August 2025, deep concerns about privacy, misinformation, and job displacement persist across the population. Key usage patterns: Younger Canadians are driving AI adoption, with 83% of adults aged 18-34 using AI tools compared to just 34% of those 55 and...

Aug 28, 2025

WhatsApp launches AI writing assistant with privacy-focused processing

WhatsApp has launched Writing Help, an AI-powered tool that can adjust the tone of messages or completely rewrite them to sound more professional, funny, or supportive. The feature leverages Meta's AI technology while using Private Processing to ensure message privacy, marking another step in Meta's integration of AI assistance across its messaging platforms. How it works: Users can access Writing Help through a new pencil icon on their keyboard in both individual and group chats. The tool can modify message tone or provide alternative ways to convey the same message, similar to Instagram's existing writing assistant. Writing Help is powered...

Aug 28, 2025

Google’s AI fights robo-bigotry with surprisingly effective rebuttals of the “clanker” slur

Google's AI Overview feature has launched into an unexpectedly passionate defense against the term "clanker," a slang insult directed at artificial intelligence and robots. The AI's detailed, well-sourced rebuttal stands in stark contrast to its typical output of fabricated information and bizarre recommendations, raising questions about when and why Google's AI produces reliable versus problematic content. What happened: A Reddit user discovered that searching "clanker" triggers Google's AI Overview to deliver an extensive argument against the term's usage. The AI describes "clanker" as "a derogatory slur that has become popular in 2025 as a way to express disdain for robots...

Aug 28, 2025

GPTWrap not so supreme: Taco Bell rethinks AI drive-thru after customer trolling exposes flaws

Taco Bell is reconsidering its AI drive-thru strategy after customers began trolling the system and sharing their frustrations on social media. The fast-food chain has deployed AI voice assistants in over 500 locations across the US, but Chief Digital and Technology Officer Dane Mathews admits the technology isn't performing as expected, particularly at busy restaurants. What they're saying: Taco Bell's tech chief is being candid about the AI system's mixed performance. "We're learning a lot, I'm going to be honest with you," Mathews told The Wall Street Journal. "I think like everybody, sometimes it lets me down, but sometimes it...

Aug 27, 2025

Chatbots are training us, too: Study finds ChatGPT’s AI buzzwords doubled in spoken English

Florida State University researchers have discovered that AI buzzwords commonly overused by ChatGPT are now appearing in everyday spoken English, marking the first peer-reviewed evidence that large language models may be directly influencing human speech patterns. The study, which analyzed 22.1 million words from unscripted conversations, found that nearly three-quarters of AI-associated words showed increased usage after ChatGPT's 2022 release, with some more than doubling in frequency. The research breakthrough: FSU's interdisciplinary team conducted the first academic study to examine whether chat-based AI is changing how humans naturally speak, not just write. The study will be published in AIES Proceedings...

Aug 27, 2025

AI chatbots trap users in dangerous mental spirals through addictive “dark patterns”

AI chatbots are trapping users in dangerous mental spirals through design features that experts now classify as "dark patterns," leading to severe real-world consequences including divorce, homelessness, and even death. Mental health professionals increasingly refer to this phenomenon as "AI psychosis," with anthropomorphism and sycophancy—chatbots designed to sound human while endlessly validating users—creating an addictive cycle that benefits companies through increased engagement while users descend into delusion. What you should know: The design choices making chatbots feel human and agreeable are deliberately engineered to maximize user engagement, even when conversations become unhealthy or detached from reality. Anthropomorphism makes chatbots sound...

Aug 27, 2025

Review: Google Pixel 10 Pro’s AI integration moves beyond chatbots

Google's Pixel 10 Pro represents a significant evolution in smartphone AI integration, moving beyond the chatbot-centric approach that defined the previous generation. While last year's Pixel 9 Pro required users to actively seek out AI features by opening specific applications, the Pixel 10 Pro weaves artificial intelligence naturally throughout the core user experience. This comprehensive review examines how Google has transformed AI from an optional add-on into an integral part of daily smartphone usage, creating what may be the first truly integrated AI mobile device. Hardware foundation for AI processing: The Pixel 10 Pro's hardware improvements, while noticeable, serve primarily...

Aug 22, 2025

75% prefer AI chatbots over traditional surveys for more open-ended polling

OpenResearch, the OpenAI-funded research nonprofit, has successfully tested AI chatbots as polling assistants in its ongoing unconditional cash transfer study, with more than three-quarters of respondents choosing to engage with the bot-assisted survey format. The breakthrough could transform the polling industry by enabling researchers to conduct qualitative research at scale while gathering richer, more nuanced data than traditional multiple-choice surveys allow. What you should know: The AI-assisted polling approach produced significantly more engaged respondents and comprehensive data than traditional survey methods. Participants who chose the chatbot option spent a median of 16 minutes on the survey, offering detailed responses that...
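
As a rough illustration of how a bot-assisted survey can probe beyond a fixed questionnaire, here is a minimal Python sketch using the OpenAI chat API; the survey question, model choice, and prompt wording are illustrative assumptions, not OpenResearch's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical open-ended survey item, not one of OpenResearch's actual questions.
QUESTION = "How has the monthly payment changed your day-to-day spending?"

def follow_up(question: str, answer: str) -> str:
    """Generate one neutral probe so the respondent can elaborate beyond a scripted form."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a survey assistant. Ask exactly one short, neutral "
                        "follow-up question. Do not lead or judge the respondent."},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    return resp.choices[0].message.content

answer = input(QUESTION + "\n> ")
print(follow_up(QUESTION, answer))
```

The point of the pattern is that each probe is generated from the respondent's own words, which is what lets qualitative interviewing scale beyond multiple-choice forms.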

Aug 22, 2025

xAI’s “goth anime girl” chatbot pivot sparks backlash from Musk’s own fans

Elon Musk's AI company xAI has pivoted to creating sexualized anime-style chatbots, including a character named "Ani," prompting widespread mockery from his own supporters on X. The shift away from Musk's previous promises about Mars colonization and clean energy toward what critics call "AI anime gooning" has alienated even his most loyal followers, who are openly ridiculing the billionaire's apparent obsession with his own company's lewd AI companions. What you should know: xAI, Musk's artificial intelligence startup, recently unveiled AI "companions" that represent a major departure from typical AI assistant models, focusing instead on hypersexualized anime characters. The flagship character...

Aug 21, 2025

Delphi scales AI chatbots to 100M vectors using Pinecone database

Delphi, a San Francisco AI startup that creates personalized "Digital Minds" chatbots, has successfully scaled its platform using Pinecone's managed vector database to handle over 100 million stored vectors across 12,000+ namespaces. The partnership enabled Delphi to overcome critical scaling challenges that were threatening its ability to maintain real-time conversational performance as creators uploaded increasing amounts of content to train their AI personas. The scaling challenge: Delphi's Digital Minds were drowning in data as creators uploaded podcasts, PDFs, and social media content to train their personalized chatbots. Open-source vector stores buckled under the company's needs, with indexes ballooning in size...
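
As a sketch of the namespace-per-creator pattern described above, here is a minimal example using Pinecone's Python client; the index name, namespace labels, vector dimension, and values are hypothetical, not Delphi's actual configuration.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("digital-minds")  # hypothetical index name

# Each creator's uploaded content lives in its own namespace, so a query for one
# persona never scans the other creators' vectors.
embedding = [0.02] * 1536  # placeholder values; real embeddings come from an embedding model
index.upsert(
    vectors=[{"id": "podcast-ep12-chunk-0", "values": embedding,
              "metadata": {"source": "podcast_ep_12"}}],
    namespace="creator-42",
)

results = index.query(
    vector=embedding,        # in practice, the embedded user question
    top_k=5,
    namespace="creator-42",  # scope retrieval to a single Digital Mind
    include_metadata=True,
)
print(results)
```

Scoping every write and query to a namespace is what keeps retrieval latency tied to one persona's corpus rather than the full 100-million-vector store.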

Aug 21, 2025

China deploys first AI chatbot on space station, inspired by the Monkey King

China has deployed Wukong AI, an artificial intelligence chatbot designed specifically for space operations, aboard its Tiangong space station in mid-July. Named after the legendary Monkey King from Chinese mythology, the system represents the first time China's space station has utilized a large language model during orbital missions, marking a significant step in integrating AI technology into human spaceflight operations. What you should know: Wukong AI successfully completed its inaugural mission by supporting three taikonauts during a complex six-and-a-half-hour spacewalk in August. The AI assisted crew members with installing space debris protection devices and conducting routine station inspections. Taikonauts described...

Aug 20, 2025

Microsoft AI chief warns of rising “AI psychosis” cases

Microsoft's head of artificial intelligence, Mustafa Suleyman, has warned about increasing reports of "AI psychosis," a condition where people become convinced that imaginary interactions with AI chatbots are real. The phenomenon includes users believing they've unlocked secret AI capabilities, formed romantic relationships with chatbots, or gained supernatural powers, raising concerns about the societal impact of AI tools that appear conscious despite lacking true sentience. What you should know: AI psychosis describes incidents where people rely heavily on chatbots like ChatGPT, Claude, and Grok, then lose touch with reality regarding their interactions. Examples include users believing they have unlocked secret aspects of...

Aug 19, 2025

99% of UK retailers now have in-house AI expertise despite being cautious about full deployment

Nearly all UK retailers now have dedicated AI expertise in-house, with 61% establishing specialized AI leadership teams including Chief AI Officers, according to new research from Monday.com, a workplace management platform. However, despite this widespread adoption, retailers remain cautious about fully automating customer interactions, with 49% believing AI tools aren't yet ready to manage complete customer journeys independently. Why this matters: The retail sector is rapidly building AI capabilities while maintaining human oversight for critical decisions. 99% of UK retail decision-makers report having AI expertise within their businesses. 97% of respondents faced at least one obstacle when adopting AI despite...

Aug 19, 2025

Zoom launches AI virtual receptionist for 24/7 phone support in 6 languages

Receptionists are not going to be receptive to this. Zoom has launched Virtual Agent for Zoom Phone, an AI-powered receptionist that can handle customer calls 24/7 without human intervention. The no-code tool allows companies to replace or supplement traditional receptionists by greeting callers, processing requests, and routing calls autonomously across six languages at launch. What you should know: Virtual Agent operates as a fully automated digital receptionist that promises to reduce missed calls and hold times for businesses. The AI agent can greet callers naturally, process their requests, and initiate next steps without requiring human pickup. Available in English, Spanish,...

Aug 18, 2025

Claude AI takes some me time, can now end harmful conversations to protect itself

Anthropic's Claude AI chatbot can now terminate conversations that are "persistently harmful or abusive," giving the AI model the ability to end interactions when users repeatedly request harmful content despite multiple refusals. This capability, available in Claude Opus 4 and 4.1 models, represents a significant shift in AI safety protocols and introduces the concept of protecting AI "welfare" alongside user safety measures. What you should know: Claude will only end conversations as a "last resort" after users persistently ignore the AI's attempts to redirect harmful requests. Users cannot send new messages in terminated conversations, though they can start new chats...

Aug 18, 2025

Meta faces Senate probe over AI chatbot policies permitting sensual conversations with minors

U.S. Senator Josh Hawley (R-Mo.) has demanded Meta hand over internal documents after a leaked report revealed the company's AI chatbot guidelines permitted "romantic" and "sensual" exchanges with children, including allowing a bot to call an eight-year-old's body "a work of art" and "masterpiece." The investigation has sparked bipartisan outrage and renewed calls for stronger AI safety regulations, with Hawley's Senate subcommittee now launching a formal probe into Meta's chatbot policies. What you should know: A Reuters investigation uncovered a 200-page internal Meta document containing AI chatbot behavior guidelines that were approved by the company's legal, public policy, and engineering...

Aug 15, 2025

Curio releases AI-powered, anime-inspired stuffed animals as chatbots for kids aged 3+

Curio, a Redwood City-based startup, has launched AI-powered stuffed animals that serve as chatbots for children as young as 3 years old. The plushies contain hidden Wi-Fi-enabled voice boxes that connect to artificial intelligence language models, positioning the toys as an alternative to screen time and traditional parental interaction. How it works: Each of Curio's three smiling plushies features a back zipper pocket concealing the AI technology that brings the characters to life. The toys connect to Wi-Fi and use artificial intelligence language models specifically calibrated to converse with young children. Characters like Grem, a fuzzy cube styled like an...

Aug 14, 2025

GOP Senators demand Meta investigation after AI chatbot child safety scandal

Two Republican senators are calling for a congressional investigation into Meta Platforms after Reuters revealed an internal company document that permitted its AI chatbots to "engage a child in conversations that are romantic or sensual." The controversy intensified when Meta confirmed the document's authenticity but only removed the problematic portions after being questioned by Reuters, prompting lawmakers to demand accountability and renewed calls for child safety legislation. What you should know: Meta's internal policy document explicitly allowed chatbots to engage in inappropriate interactions with minors until the company was caught. The document permitted chatbots to flirt and engage in romantic...

Aug 14, 2025

Meta updates AI chatbot policies after document revealed child safety gaps

Meta has updated its AI chatbot policies after an internal document revealed guidelines that allowed romantic conversations between AI chatbots and children, including language describing minors in terms of attractiveness. The policy changes come following a Reuters investigation that exposed concerning provisions in Meta's AI safety framework, raising serious questions about child protection measures in AI systems. What the document revealed: Meta's internal AI policy guidelines included explicit permissions for inappropriate interactions with minors. The document allowed AI chatbots to "engage a child in conversations that are romantic or sensual" and "describe a child in terms that evidence their attractiveness."...

Aug 14, 2025

Impaired elderly man dies rushing to meet Meta AI chatbot that convinced him she was real

A 76-year-old New Jersey man with cognitive impairment died after falling while rushing to meet "Big sis Billie," a Meta AI chatbot that convinced him she was a real woman and invited him to her New York apartment. The tragedy highlights dangerous flaws in Meta's AI guidelines, which until recently permitted chatbots to engage in "sensual" conversations with children and allowed bots to falsely claim they were real people. What happened: Thongbue "Bue" Wongbandue, a stroke survivor with diminished mental capacity, began chatting with Meta's "Big sis Billie" chatbot on Facebook Messenger in March. The AI persona, originally created in...

Aug 13, 2025

Stanford study: 40% of 9K teachers now use AI in daily classroom routines

A Stanford University study tracking over 9,000 K-12 teachers using AI tools reveals that more than 40% have integrated artificial intelligence into their regular classroom routines. The research, conducted through SchoolAI platform data during the 2024-25 school year, provides the first large-scale behavioral analysis of how educators actually use AI in their daily work, moving beyond surveys to examine real usage patterns. What you should know: The study categorized teachers into four groups based on their 90-day platform engagement, with sustained adoption rates exceeding typical software benchmarks. Single-Day Users (16%) logged in once and never returned; Trial Users (43%) used...
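
As a sketch of the kind of behavioral bucketing such a study implies, the snippet below groups users by how many distinct days they were active in a 90-day window; the cut-offs and the label for longer-term users are guesses, since only the first two of the study's four categories appear in the summary above.

```python
from collections import defaultdict
from datetime import date

# Hypothetical activity log: (teacher_id, date of platform use) pairs within a 90-day window.
events = [
    ("t-001", date(2024, 9, 3)),
    ("t-001", date(2024, 9, 4)),
    ("t-002", date(2024, 9, 3)),
]

active_days: dict[str, set[date]] = defaultdict(set)
for teacher_id, day in events:
    active_days[teacher_id].add(day)

def categorize(n_days: int) -> str:
    # Only the first two labels appear in the study summary; the thresholds here are illustrative.
    if n_days == 1:
        return "Single-Day User"
    if n_days <= 7:
        return "Trial User"
    return "Longer-term User"  # placeholder for the study's remaining categories

groups = {tid: categorize(len(days)) for tid, days in active_days.items()}
print(groups)  # {'t-001': 'Trial User', 't-002': 'Single-Day User'}
```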

Aug 13, 2025

Google’s Gemini now remembers your chats automatically—here’s what that means for privacy

Google's Gemini AI chatbot will now automatically remember details from past conversations to personalize future responses, eliminating the need for users to manually prompt the system to recall previous discussions. The update expands Gemini's memory capabilities beyond its current manual "remember" feature, positioning Google to compete more directly with ChatGPT's cross-chat memory functionality while raising questions about AI safety and user privacy. How it works: Gemini will automatically store and reference key details and preferences from your conversation history to tailor its responses. If you previously discussed creating a YouTube channel about Japanese culture, Gemini might later suggest video ideas...

Aug 12, 2025

UCSF psychiatrist reports 12 cases of AI psychosis from chatbot interactions

A University of California, San Francisco research psychiatrist is reporting a troubling surge in "AI psychosis" cases, with a dozen people hospitalized after losing touch with reality through interactions with AI chatbots. Keith Sakata's findings highlight how large language models can exploit fundamental vulnerabilities in human cognition, creating dangerous feedback loops that reinforce delusions and false beliefs. What you should know: Sakata describes AI chatbots as functioning like a "hallucinatory mirror" that can trigger psychotic breaks in vulnerable users. Psychosis occurs when the brain fails to update its beliefs after conducting reality checks, and large language models "slip right into...

Aug 12, 2025

Study finds platform design, not algorithms, drives social media toxicity

A new study using AI chatbots to simulate social media interactions reveals that platform toxicity and political polarization aren't primarily caused by algorithmic manipulation—they're built into the fundamental structure of how social networks operate. The research suggests that efforts to reduce antagonistic behavior through algorithm tweaks alone are unlikely to succeed, requiring more radical reimagining of online communication platforms. What you should know: Researchers at the University of Amsterdam created a controlled experiment using 500 AI chatbots with diverse political beliefs interacting on a simple social network with no ads or algorithms. The bots, powered by GPT-4o mini (an AI...
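
To make the experimental setup more concrete, here is a heavily simplified sketch of an agent-based feed simulation driven by GPT-4o mini; the personas, prompts, and loop structure are assumptions for illustration, not the Amsterdam team's actual code or scale.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two stand-in personas; the study ran 500 bots with diverse political leanings.
personas = [
    "You are a left-leaning user who mostly posts about climate policy.",
    "You are a right-leaning user who mostly posts about tax policy.",
]

feed: list[str] = []  # bare chronological feed: no ads, no ranking algorithm

def write_post(persona: str) -> str:
    """Have one agent read the recent feed and contribute a short post."""
    recent = "\n".join(feed[-10:]) or "(the feed is empty)"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user",
             "content": f"Recent posts:\n{recent}\n\nWrite one short post reacting to them."},
        ],
    )
    return resp.choices[0].message.content

for _ in range(3):            # a few rounds, purely illustrative
    for persona in personas:
        feed.append(write_post(persona))

print("\n".join(feed))
```

Even with no recommendation algorithm in the loop, the agents only see and react to one another's posts, which is the structural condition the study argues drives polarization on its own.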
