A viral TikTok trend called the “AI homeless man prank” involves users generating AI images of homeless individuals appearing to have broken into their homes, then sending the images to family members to stage a fake home invasion. The trend has spread across multiple social media platforms and prompted warnings from police departments in the U.S., UK, and Ireland about wasting emergency resources and potentially creating dangerous situations when officers respond to fake burglary calls.

The scale of the problem: The trend has gained massive traction across social media platforms, with videos drawing millions of likes and law enforcement agencies responding to false reports.

  • Rae Spencer’s original TikTok video from St. Augustine, Florida, has received over 5 million likes.
  • The hashtag #homelessmanprank appears on more than 1,200 videos on TikTok.
  • The trend has spread to Snapchat and Instagram, with users also posting tutorials on creating the fake images.
  • Police departments in Massachusetts, Washington, Texas, Ohio, the UK, and Ireland have all issued public warnings.

Law enforcement response: Authorities are treating the pranks as serious crimes rather than harmless jokes, with some jurisdictions bringing criminal charges against participants.

  • Salem, Massachusetts police warned that the prank “dehumanizes the homeless, causes the distressed recipient to panic and wastes police resources.”
  • Brown County, Ohio sheriff’s department stated: “We want to be clear: this behavior is not a ‘prank’ — it is a crime,” and criminally charged two juveniles involved in separate incidents.
  • Round Rock, Texas police responded to two home invasion calls in a single weekend, both stemming from prank texts, with one call coming from a mother who “believed it was real.”

The broader AI deception concern: The trend highlights growing challenges with AI-generated content’s ability to deceive people, particularly as the technology becomes more sophisticated.

  • The proliferation of photorealistic AI generators has created an internet full of fake media that often fools viewers, especially older internet users.
  • OpenAI’s recent Sora 2 release demonstrated the ability to create realistic fake security footage of CEO Sam Altman appearing to steal from a Target, raising concerns about mass manipulation campaigns.
  • AI generators typically include watermarks, but users can easily crop them out.

Platform responses vary: Different AI platforms have inconsistent policies regarding generating potentially harmful content related to homelessness.

  • When asked to generate an image of a homeless person in someone’s home, OpenAI’s ChatGPT refused, saying it “would involve depicting a real or implied person in a situation of homelessness, which could be considered exploitative or disrespectful.”
  • Google’s Gemini responded: “Absolutely. Here is the image you requested.”
  • TikTok added labels to flagged videos identifying them as AI-generated and pointed to its community guidelines, which require creators to disclose AI-generated content.

What law enforcement is saying: Police officials emphasize the serious consequences and resource drain caused by these pranks.

  • “Police officers who are called upon to respond do not know this is a prank and treat the call as an actual burglary in progress thus creating a potentially dangerous situation,” Salem police officials wrote.
  • Andy McKinney from Round Rock Police warned: “You know, pranks, even though they can be innocent, can have unintended consequences… a real-life incident could be happening with one of their neighbors, and they’re draining resources.”

Legal implications: While no specific laws directly address this type of AI misuse, existing statutes may apply to prosecute offenders.

  • Massachusetts law penalizes “Willful and malicious communication of false information to public safety answering points.”
  • Some jurisdictions are treating incidents as educational opportunities, encouraging parents to discuss online trend dangers with their children.
