News/Deepfakes

May 23, 2025

Our Brand is Crisis: AI-driven misinformation surge is a boon for elite PR professionals

The rise of AI-generated misinformation threatens to fundamentally alter our ability to discern truth from fiction, creating unprecedented challenges for reputation management and public discourse. As artificial intelligence advances more rapidly than our capacity to understand its implications, public relations professionals, particularly those specializing in crisis management, are positioned to become critical defenders against an impending wave of synthetic media that can destroy reputations in minutes. This emerging reality signals a transformative shift where reputation protection will evolve from optional to essential in navigating an increasingly complex information landscape. The big picture: Artificial intelligence is evolving faster than society's ability...

May 21, 2025

Google’s new Veo 3 AI video generation is scary good and shockingly impressive

Google's Veo 3 video generation engine represents a significant leap in AI-generated content, blurring the line between synthetic and authentic media. By adding synchronized audio capabilities and enhanced visual fidelity to AI-generated videos, Google has created a tool that produces content nearly indistinguishable from human-created footage. This technological advancement signals a concerning new phase in the evolution of digital misinformation, as these hyperrealistic AI videos could potentially deceive viewers and further complicate an internet landscape already struggling with truth and authenticity. The big picture: Google's newly announced Veo 3 engine combines synchronized audio with enhanced video generation to create remarkably...

May 20, 2025

AI voice scams target US officials at federal, state level to steal data

The FBI is warning about sophisticated smishing campaigns targeting current and former government officials that use AI-generated voices and social engineering techniques to steal sensitive information. This escalation represents a concerning evolution in government-targeted scams, as cybercriminals impersonate senior officials to establish trust before directing victims to malicious links that compromise personal accounts. The big picture: Since April, cybercriminals have been targeting U.S. federal and state employees with texts and AI-generated voice messages that impersonate senior officials to establish rapport and ultimately gain access to sensitive information. Once scammers compromise one account, they use the stolen information to target additional...

May 19, 2025

The new federal law that makes AI-generated deepfakes illegal

The Take It Down Act marks a pivotal federal response to the proliferation of AI-generated explicit imagery, creating the first nationwide protections against non-consensual deepfakes. After high-profile victims from celebrities to high school students suffered from having their faces superimposed onto nude bodies, this bipartisan legislation establishes clear criminal penalties and platform responsibilities. This rare moment of congressional unity illustrates how certain AI harms can transcend political divisions, particularly when targeting vulnerable individuals. The big picture: President Trump is set to sign the Take It Down Act on Monday, establishing federal protections against non-consensual explicit images regardless of whether they're...

May 16, 2025

Pakistan deputy PM faces backlash over fake PAF image in Parliament

Pakistan's Deputy Prime Minister Ishaq Dar has triggered widespread criticism after citing a fake AI-generated newspaper image during a parliamentary speech, falsely attributing praise of Pakistan's air force to the UK's Daily Telegraph. The incident highlights the growing challenge of misinformation during periods of international tension, particularly as Pakistan and India exchange conflicting claims about recent military engagements without providing conclusive evidence to support their assertions. The big picture: A senior Pakistani government official publicly cited a fake AI-generated news clipping as legitimate evidence of international recognition for Pakistan's military capabilities. During a parliamentary session, Deputy Prime Minister and Foreign...

May 14, 2025

Is AI, like radio and social media before it, a “threat” to democracy?

Artificial intelligence's rapid advancement presents a looming threat to democratic institutions, far beyond concerns about labor or dignity. Pope Leo XIV's recent warning about AI's challenges to humanity only scratches the surface of a deeper danger: the potential weaponization of these powerful technologies by authoritarian interests to systematically undermine democratic processes. As AI systems scale in capability and reach, they represent unprecedented tools for manipulation, surveillance, and disinformation that could fundamentally destabilize democratic societies unless rigorous regulatory frameworks are established. The big picture: AI represents the latest evolution in the authoritarian playbook, following radio in the 1930s and social media...

May 12, 2025

Meta removes AI-generated Jamie Lee Curtis ads after star’s appeal

Meta's swift response to Jamie Lee Curtis's direct appeal demonstrates how celebrities are increasingly battling unauthorized AI-generated content that misrepresents them on social media platforms. The incident highlights both the growing challenge of deepfakes for public figures and the accountability tech companies face in policing AI misuse on their platforms, particularly when it involves unauthorized commercial appropriation of recognizable personalities. The big picture: Meta removed fake AI-generated ads featuring Jamie Lee Curtis after she directly appealed to CEO Mark Zuckerberg on Instagram to take them down. Curtis described the unauthorized content as "some bullshit that I didn't authorize, agree to...

May 9, 2025

YouTube users, including Premium subscribers, demand “block channel” feature

YouTube's limited content filtering options leave users exposed to AI-generated content and unwanted channels, highlighting a growing tension between Google's AI ambitions and user experience. The platform's current "Don't recommend channel" feature fails to provide comprehensive blocking capabilities, frustrating even paying Premium subscribers who seek greater control over their viewing environment. The big picture: YouTube's recommendation algorithm effectively suggests new content but offers no true "block channel" option, forcing users to repeatedly encounter unwanted content in search results and other areas of the platform. The current "Don't recommend channel" feature only prevents suggestions on the Home page while allowing the...

May 7, 2025

Afterlife AI? Arizona court presents synthetic video of murder victim forgiving killer

An AI-generated victim impact statement has made judicial history in Arizona, marking a watershed moment for artificial intelligence in the legal system. Using video footage and a script written by his sister, Christopher Pelkey's AI-generated persona addressed and forgave his killer from beyond the grave. This unprecedented use of AI in court proceedings has sparked discussions about the broader implications of synthetic media in the justice system, as courts scramble to establish guidelines for this rapidly evolving technology. The breakthrough case: An Arizona judge heard what officials believe is the nation's first AI-generated victim impact statement in a murder sentencing,...

May 7, 2025

Cybercrime-as-a-Service? AI tool Xanthorox enables illicit activity for novices

A sophisticated AI platform designed specifically for criminal activities has emerged from the shadows of the dark web into surprisingly public channels. Xanthorox represents a troubling evolution in cybercrime-as-a-service, offering on-demand access to deepfake generation, phishing tools, and malware creation through mainstream platforms like Discord and Telegram. This development signals how criminal AI tools are becoming increasingly accessible and commercialized, blurring the lines between underground hacking communities and everyday technology spaces. The big picture: Despite its ominous purpose, Xanthorox operates with surprising transparency, maintaining public profiles on GitHub, YouTube, and communication platforms where subscribers can pay for access using cryptocurrency. The...

May 3, 2025

Trump shares AI-generated image of himself as pope amid Vatican transition

President Trump's sharing of an AI-generated image depicting himself as pope comes at a particularly sensitive time for the Catholic Church, as it prepares for the papal conclave following Pope Francis's death. This provocative social media post follows Trump's recent joke about wanting to be pope himself and represents another instance of the president using AI-generated imagery on his Truth Social platform to blur the lines between reality and fiction. The big picture: Trump shared an AI-generated image portraying himself as the pope on Truth Social without providing any explanation or context for the post. The image appeared Friday evening...

May 1, 2025

Novel idea: BBC Studios offers AI Agatha Christie writing course

BBC Studios is resurrecting Agatha Christie through AI technology, creating a digital likeness of the renowned mystery author for an educational series on crime novel writing. This unprecedented use of deepfake technology for creative education raises intriguing questions about digital resurrection for educational purposes, blending human expertise with AI to preserve the wisdom of literary masters who are no longer with us. The big picture: BBC Studios has launched an AI-recreated version of mystery writer Agatha Christie to teach aspiring authors how to craft crime novels through its online education platform. The digital recreation combines the performance of actor Vivien...

May 1, 2025

AI-powered romance scams target Boomers, but younger generations are defrauded more often

Real-time AI deepfakes are creating a dangerous new frontier in internet scams, particularly targeting vulnerable populations like the elderly. Fraudsters are now using generative AI technology to alter their appearance and voices during live video conversations, allowing them to convincingly impersonate trusted individuals or create attractive fake personas. This evolution of scam technology is making even video verification—once considered relatively secure—increasingly unreliable as a means of establishing someone's true identity. The big picture: Scammers are deploying sophisticated AI filters during live video calls to completely transform their appearance and voice, creating nearly undetectable fake identities. A recent investigation by 404...

Apr 30, 2025

Former athletic director jailed for racist AI-generated recording

The use of AI to create deepfake content has reached a disturbing legal landmark with the sentencing of a school official who weaponized the technology for personal retaliation. This case highlights the real-world consequences of AI misuse in educational settings and establishes precedent for criminal penalties when synthetic media is deployed to harm reputations and disrupt institutions. The verdict: A former Baltimore-area high school athletic director received a four-month jail sentence after pleading guilty to creating a racist and antisemitic deepfake audio impersonating the school's principal. Dazhon Darien, 32, entered an Alford plea to the misdemeanor charge of disturbing school...

Apr 29, 2025

The rise of deepfake job candidates

The job market faces a new threat as AI-generated applicants compete with human job seekers, creating significant security risks and additional hurdles in an already challenging employment landscape. Cybersecurity experts have identified sophisticated scammers using AI to create fake identities complete with generated headshots, résumés, and websites tailored to specific job openings—sometimes successfully securing positions where they can steal trade secrets or install malware. The big picture: AI-powered job application scams represent a growing cybersecurity threat targeting companies through their hiring processes. Scammers are using artificial intelligence to create convincing fake applicants with custom-tailored résumés and identities designed to match...

Apr 28, 2025

AI-generated child nudity prompts call for app ban in UK

The UK children's commissioner is calling for a government ban on AI applications capable of creating explicit fake images of children, highlighting the growing threat of deepfake technology to young people's safety and privacy. This push comes amid increasing concerns about AI tools that can digitally remove clothing from photos or generate sexually explicit deepfakes, disproportionately targeting girls and young women who are now modifying their online behavior to avoid victimization. The big picture: Dame Rachel de Souza, England's children's commissioner, is demanding immediate government action against AI "nudification" apps that generate sexually explicit images of children. These applications can...

Apr 26, 2025

Celine Dion’s team slams unauthorized AI-generated music

Celine Dion's team has taken a public stand against AI-generated music falsely attributed to the legendary singer, highlighting the growing challenge performers face in protecting their artistry in the age of artificial intelligence. Despite her ongoing battle with Stiff Person Syndrome, which has limited her recording and touring activity since 2022, Dion has made several high-profile appearances showcasing her enduring vocal talent, making the unauthorized AI replications particularly concerning for her team and fans. The warning: Dion's representatives issued a formal statement on Instagram addressing unauthorized AI-generated songs circulating online that falsely claim to feature the singer's voice. "These recordings...

Apr 25, 2025

Job search AI deepfake detection: 5 tips for hiring managers

The expanding threat of AI deepfakes has now infiltrated the hiring process, with sophisticated technology enabling bad actors to impersonate job candidates in video interviews. These fraudulent applicants seek to gain employment at companies—particularly tech firms with valuable intellectual property and remote positions—to access sensitive systems, steal data, or install malware. With cybersecurity researchers warning that deepfakes can be created in just over an hour and predictions that one in four job candidates will be fake by 2028, organizations must develop comprehensive strategies to identify these increasingly convincing impostors. 1. Request actions that challenge AI limitations When interviewing remote candidates,...

Apr 23, 2025

Oregon lawmakers crack down on AI-generated fake nudes

Oregon is taking decisive action against AI-generated deepfake pornography with a new bill that would criminalize the creation and distribution of digitally altered explicit images without consent. The unanimous House vote signals growing recognition of how artificial intelligence can weaponize innocent photos, particularly affecting young people who may have their social media images manipulated and distributed as fake nudes. This legislation reflects a nationwide trend as states race to update revenge porn laws for the AI era. The big picture: Oregon lawmakers voted 56-0 to expand the state's "revenge porn" law to include digitally created or altered explicit images, positioning...

Apr 18, 2025

AI art can’t go on: Celine Dion alerts fans to AI-generated song scams online

Celine Dion's warning about unauthorized AI songs impersonating her voice highlights growing tensions in the music industry around artificial intelligence. Her public statement comes amid broader industry pushback, with hundreds of prominent artists recently signing an open letter against AI threats to artistic integrity and compensation. This development reflects the music world's struggle to address emerging tensions between technological innovation and artists' rights as AI voice cloning becomes increasingly sophisticated. The warning: Celine Dion took to Instagram to alert fans about fake AI-generated songs falsely attributed to her circulating online. "These recordings are fake and not approved, and are not songs...

Apr 17, 2025

Google AI blocked 3X more advertising fraud in 2024

Google's increased use of AI models to combat fraudulent advertising has achieved unprecedented results, with suspended accounts tripling and deepfake scam ads plummeting by 90% in 2024. This application of large language models (LLMs) represents one of the most broadly beneficial implementations of AI technology to date, showing how advanced models can be deployed to protect users from digital threats while maintaining advertising ecosystems. The big picture: Google deployed over 50 enhanced LLMs to enforce its advertising policies in 2024, with AI now handling 97% of ad enforcement actions. These models can make determinations with less data than previous systems,...

Apr 12, 2025

Leak exposes 95,000 AI-generated explicit images, including child abuse material

An unsecured database has exposed tens of thousands of AI-generated explicit images, including content depicting minors, highlighting the destructive potential of unregulated image generation technology. The leak from South Korean company GenNomis reveals how these tools can be weaponized to create harmful, non-consensual content targeting real individuals, adding to growing concerns about AI safety and the proliferation of deepfake technology that victimizes women and children. The big picture: An open database belonging to South Korean AI firm GenNomis leaked over 95,000 records containing explicit AI-generated images, including child sexual abuse material and de-aged celebrities. Security researcher Jeremiah Fowler discovered the...

Apr 9, 2025

Google reports 344 complaints of AI-generated harmful content via Gemini

Only 344? Google has disclosed receiving hundreds of reports regarding alleged misuse of its AI technology to create harmful content, revealing a troubling trend in how generative AI can be exploited for illegal purposes. This first-of-its-kind data disclosure provides valuable insight into the real-world risks posed by generative AI tools and underscores the critical importance of implementing effective safeguards to prevent creation of harmful content. The big picture: Google reported receiving 258 complaints that its Gemini AI was used to generate deepfake terrorism or violent extremist content, along with 86 reports of alleged AI-generated child exploitation material. Key details: The...

Apr 7, 2025

South Korean AI startup shuts down, disappears after database exposed deepfake porn images

That breeze coming from the south of the peninsula is an AI startup in the wind... The explosive growth of AI-generated explicit content has reached a disturbing milestone with South Korean company GenNomis shutting down after researchers discovered an unsecured database containing thousands of non-consensual pornographic deepfakes. This incident highlights the dangerous intersection of accessible generative AI technology and inadequate regulation, creating serious harm particularly for women who constitute most victims of these digital violations. The big picture: A South Korean AI startup called GenNomis abruptly deleted its entire online presence after a researcher discovered tens of thousands of AI-generated...
