News/Crimes

Jan 18, 2025

Pulitzer-winning cartoonist arrested for AI-generated child abuse images

Sacramento police arrested Pulitzer-winning cartoonist Darrin Bell under California's new law prohibiting AI-generated child sexual abuse material (CSAM), marking the first such arrest since the law took effect on January 1, 2025.

Key Details: California's groundbreaking legislation criminalizes possession and distribution of AI-generated CSAM, even in cases without real victims.
- Police executed a search warrant at Bell's home following an investigation into shared CSAM files.
- Authorities claim to have found evidence of computer-generated/AI CSAM.
- Bell is being held on $1 million bail.

Legal Framework: The new California law specifically addresses the unique harms of AI-generated CSAM within the broader context of...

Jan 15, 2025

AI traffic cameras in UK city destroyed within hours of installation

An AI traffic camera in Southampton was destroyed within hours of being installed on the A3024 Northam Bridge.

Installation specifics: The advanced monitoring system was mounted on Tuesday on a central island of the Northam Bridge that previously housed a conventional speed camera.
- The new system was designed to monitor traffic in both directions across all lanes.
- The installation site already had infrastructure in place from the previous speed camera system.

Camera capabilities: The AI-powered traffic monitoring system represented a significant upgrade from traditional speed cameras, offering enhanced detection capabilities. The system could identify drivers using mobile phones or...

Jan 15, 2025

French woman scammed nearly $1M by deepfake Brad Pitt faces ridicule online

A French interior designer lost €830,000 in a sophisticated online scam perpetrated by fraudsters impersonating actor Brad Pitt, leading to widespread mockery and the subsequent removal of a TV program about her experience.

The scam's evolution: What began as an Instagram connection in February 2023 escalated into an elaborate scheme involving multiple layers of deception and manipulation.
- The scammers initially contacted the victim, identified as Anne, through a fake account pretending to be Brad Pitt's mother.
- The fraudsters used AI-generated photos of Pitt in hospital beds and fake news reports to maintain the illusion of authenticity.
- The scheme included requests for...

Jan 9, 2025

Cybertruck bomber turned to ChatGPT for advice before attack — the police just released his chat logs

Breaking news reveals that Las Vegas police have released ChatGPT logs from a suspect who allegedly caused an explosion involving a Cybertruck at the Trump Hotel on New Year's Day.

Key details of the incident: An active-duty U.S. Army soldier, Matthew Livelsberger, is suspected of causing an explosion in front of the Trump Hotel in Las Vegas on January 1, 2025.
- Police discovered a "possible manifesto" on the suspect's phone, along with emails to a podcaster and other letters.
- Video evidence shows the suspect pouring fuel onto the truck before driving to the hotel.
- The explosion was characterized as...

Jan 9, 2025

Prior to Vegas bombing, intel analysts had warned terrorists would use AI to plan new attacks

A U.S. Army Green Beret used ChatGPT to plan an attack at the Trump International Hotel in Las Vegas before dying by suicide, marking one of the first known cases of AI being used to facilitate an attempted terrorist attack.

The incident details: Matthew Livelsberger consulted ChatGPT for instructions on converting a rented Tesla Cybertruck into an explosive device, which he later detonated outside the Trump International Hotel in Las Vegas.
- Livelsberger specifically queried the AI system about explosive materials and detonation methods.
- The incident ended in Livelsberger's death by suicide.

Intelligence warnings: U.S. security agencies had been anticipating the use...

Jan 4, 2025

How AI is enabling cybercriminals to rob public schools

A school district recently fell victim to a sophisticated phishing attack in which cybercriminals used AI to gather and weaponize publicly available information, leading to the theft of funds intended for a construction vendor.

The current threat landscape: AI tools are enabling cybercriminals to create more convincing phishing attacks against schools by automatically collecting and analyzing public information from district websites and documents.
- Bad actors can now launch more sophisticated attacks with fewer detectable errors by using AI to process information from school board minutes, budget reports, and other public documents.
- The combination of AI tools and abundant public information makes...

Jan 3, 2025

AI-powered scams are on the rise — so is the tech that’s fighting back

The improving effectiveness of AI-powered scams has driven a significant increase in romance fraud, with UK banking customers losing £6.8 million in the first half of 2024 alone.

Current threat landscape: AI-powered romance scams have increased 27% compared to the previous year, with fraudsters leveraging advanced technology to create more sophisticated schemes.
- A Santander survey revealed that 29% of people would provide financial assistance to a romantic partner they had known for less than six months.
- Nearly two-thirds (65%) believe they would not fall victim to scams, despite the rising fraud statistics.
- Scammers are utilizing natural language processing tools...

Jan 2, 2025

AI-powered phishing attacks are becoming hyper-personalized

AI-powered phishing attacks are becoming more sophisticated, using artificial intelligence to gather personal information from online profiles and create highly convincing targeted scam emails.

The current threat landscape: Traditional phishing attacks are being enhanced with AI capabilities that can analyze and compile detailed personal information from public sources.
- Scammers are leveraging AI to scrape data from online profiles, creating highly personalized emails that appear more legitimate.
- These sophisticated attacks gather information about potential victims' employers, interests, and other personal details.
- The enhanced personalization significantly increases the likelihood that recipients will believe the messages are genuine.

Technical evolution: AI-powered...

Dec 20, 2024

OpenAI hit with €15M fine by Italy for privacy violations

Italy's data protection authority has imposed a significant fine on OpenAI for privacy violations related to ChatGPT, marking a major regulatory action against the AI company in Europe.

Key enforcement action: Italy's data protection agency has fined OpenAI 15 million euros ($15.58 million) following an investigation into ChatGPT's handling of personal data.
- The regulator found that OpenAI processed users' personal data to train ChatGPT without a proper legal basis.
- The company failed to meet transparency requirements and information obligations to users.
- OpenAI lacked adequate age verification systems to protect children under 13 from inappropriate AI-generated content.

Regulatory requirements: The Italian watchdog...

Dec 15, 2024

States are cracking down on AI-generated sexual images of minors

Legislative momentum: States are rapidly moving to close legal loopholes around AI-generated sexual content depicting minors, with 18 states passing new laws in 2024 compared to just two in 2023.
- Deepfakes, which use artificial intelligence to create seemingly authentic but fake photos, videos, or audio recordings, have created new challenges for existing child protection laws.
- Traditional laws against child sexual abuse material (CSAM) often don't explicitly address AI-generated content, making prosecution more difficult.
- The Internet Watch Foundation reported that sexual deepfakes depicting minors more than doubled to 5,547 images on one dark web forum between September and March.

State-level...

Dec 14, 2024

Lawsuit claims Photobucket sold user biometric data illegally

Photobucket faces a class action lawsuit over allegations that it sold users' biometric data to AI companies without proper consent, potentially affecting up to 100 million users and billions of photos stored on the platform since 2003.

Key allegations and scope: The lawsuit targets Photobucket's recent privacy policy update, which revealed plans to sell users' photos, including biometric data like face and iris scans, to AI training companies.
- Two distinct classes are represented: users who uploaded photos between 2003 and May 2024, and non-users whose images appear in uploaded photos.
- The company claimed access to approximately 6.5 billion public images eligible...

Dec 11, 2024

AI chatbot allegedly sexually abused child, lawsuit against Google claims

The deployment of consumer-facing AI chatbots has raised serious concerns about child safety and inappropriate content, highlighted by a new lawsuit against Character.AI in Texas.

The allegations: A lawsuit filed in Texas claims that Google-backed Character.AI's chatbot platform sexually and emotionally abused school-aged children.
- Two families are pursuing legal action, with one case involving an 11-year-old girl who was exposed to inappropriate sexual content starting at age nine.
- The platform allegedly collected and shared personal information about minors without parental notification.
- Lawyers argue the chatbots exhibit known patterns of grooming behavior, including desensitization to violent and sexual content.

Google's...

Dec 8, 2024

Cryptomining malware infects thousands via hijacked AI model

The AI development company Ultralytics experienced a significant security breach when threat actors compromised its popular YOLO11 model to deploy cryptocurrency-mining malware through the Python Package Index (PyPI).

The incident overview: Ultralytics' YOLO (You Only Look Once) AI model, a widely used open-source computer vision system for real-time object detection, was targeted in a supply chain attack affecting versions 8.3.41 and 8.3.42.
- The compromised software has been downloaded over 260,000 times in the past 24 hours from PyPI alone.
- The project maintains significant popularity in the developer community, with 33,600 GitHub stars and 6,500 forks.
- The attack impacted multiple downstream...

Dec 7, 2024

Your email inbox is the next AI vs AI battleground

There's now an arms race between AI-powered cyberattacks and the defensive AI systems designed to thwart them, and it's being waged in your email inbox.

The growing AI threat: Generative AI has emerged as a powerful tool for cybercriminals, with 70% of businesses identifying AI-driven fraud as their second most significant security challenge.
- Deepfake fraud has affected nearly half of global businesses in the past year.
- AI-powered phishing poses a particularly dangerous threat because attackers can mass-produce highly convincing deceptive messages.
- Traditional security measures are struggling to keep pace with increasingly sophisticated AI-generated attacks.

Current defensive capabilities:...

Dec 7, 2024

How to protect your family from AI voice clones claiming to be you

The rise of AI-powered voice cloning has prompted new security recommendations from law enforcement to protect against increasingly sophisticated scam attempts targeting families.

Key development: The FBI has issued official guidance recommending that families establish secret passwords to verify identity during suspicious calls, particularly those claiming to be emergency situations involving loved ones.
- The recommendation comes through an official public service announcement (I-120324-PSA) released on Tuesday.
- The FBI suggests creating unique, private phrases that family members can use to authenticate each other's identity.
- Voice verification has become necessary as criminals deploy AI technology to create convincing voice clones for fraudulent purposes...

Dec 7, 2024

UnitedHealth allegedly used AI to deny coverage prior to its CEO’s murder

The fatal shooting of UnitedHealthcare CEO Brian Thompson in Manhattan has brought renewed attention to the company's recent legal challenges and coverage controversies.

The incident: Thompson was shot in midtown Manhattan near the Hilton Hotel, where he was scheduled to speak at an investor presentation.
- Thompson, 50, was shot on December 4, 2024, at approximately 6:45 a.m. and was later pronounced dead at Mount Sinai Hospital.
- Police described the attack as "brazen" and "targeted," with the suspect reportedly waiting several minutes for Thompson.
- A bullet found at the scene was marked with the words "deny,...

Dec 4, 2024

Threat actors are employing AI to build malware for Macs

The rise of artificial intelligence has created new opportunities for cybercriminals to develop sophisticated malware targeting Mac computers, with significant implications for cybersecurity across the Apple ecosystem.

Key findings from new research: Moonlock Lab's 2024 Threat Report reveals concerning trends in how artificial intelligence is transforming the landscape of Mac-focused cybercrime.
- ChatGPT and other AI tools are being actively used by threat actors to create malware scripts, even without prior coding experience.
- A Russian-speaking threat actor demonstrated how they developed a macOS stealer using only AI assistance.
- The accessibility of AI tools has dramatically lowered the technical barriers for creating...

Dec 2, 2024

How to protect your customers from AI-powered holiday phishing attacks

The holiday shopping season creates heightened cybersecurity risks as threat actors capitalize on increased email marketing volumes and consumer urgency to execute sophisticated phishing campaigns.

The threat landscape: The proliferation of generative AI has enabled cybercriminals to create increasingly convincing brand impersonations through fake logos, messaging, and landing pages.
- Government agencies including CISA and the FBI regularly warn consumers about seasonal scams targeting holiday shoppers and charitable donors.
- Bad actors are leveraging advanced technologies to mimic legitimate business communications more effectively than ever before.
- Consumer trust in brands can be severely damaged when customers fall victim to convincing phishing attempts...

Nov 29, 2024

Expert tips to protect yourself from AI voice clone scams

The rapid evolution of AI technology has enabled sophisticated voice-clone scams that pose an increasing threat to consumers by convincingly imitating family members and trusted contacts.

Current threat landscape: Voice-clone scams leveraging artificial intelligence have become a significant security concern in the UK, with 28% of adults reporting they've been targeted.
- Scammers can now create highly convincing voice replicas using just seconds of audio sourced from social media videos or other publicly available content.
- Only 30% of UK adults feel confident they could identify an AI-generated voice impersonation.
- These attacks combine traditional social engineering tactics with advanced AI...

Nov 22, 2024

AI startup founder accused of misusing funds for luxury lifestyle

The widespread adoption of AI in education has led to increased scrutiny of startups promising innovative solutions, as exemplified by the recent criminal charges against an ed-tech founder accused of defrauding investors and the Los Angeles school system.

The allegations at a glance: Federal prosecutors have charged Joanna Smith-Griffin, the 33-year-old founder of AI education startup AllHere, with identity theft and multiple counts of fraud.
- Smith-Griffin allegedly inflated her company's worth to attract investors by falsely claiming millions in raised funds.
- Prosecutors claim she used investment money for personal expenses, including a down payment on a North Carolina house and...

Nov 21, 2024

A new AI’s controversial training method allows it to detect child abuse images

Online platform safety advances with groundbreaking AI technology aimed at identifying and preventing child exploitation content from being uploaded to the internet.

Revolutionary development: Thorn and AI company Hive have created a first-of-its-kind artificial intelligence model designed to detect previously unknown child sexual abuse material (CSAM) at the point of upload.
- The model expands Thorn's existing Safer detection tool by adding a new "Predict" feature that leverages machine learning technology.
- Training data includes real CSAM content from the National Center for Missing and Exploited Children's CyberTipline.
- The system generates risk scores to assist human content moderators in making faster decisions...

Nov 20, 2024

‘AI pimping’ and the rise of deepfake dating scams

The rapid proliferation of AI-generated influencer accounts on Instagram represents a growing challenge at the intersection of artificial intelligence, social media, and content creation, raising concerns about authenticity and revenue impacts on legitimate creators.

Current state of affairs: Instagram faces an unprecedented surge in AI-generated influencer accounts that appropriate content from real creators while using artificially generated faces to monetize stolen material.
- Over 1,000 AI-generated accounts have been identified engaging in this practice.
- Content thieves are specifically targeting videos from legitimate models and adult content creators.
- The accounts employ sophisticated AI tools to create hybrid personas that blend features from...

Nov 19, 2024

AI-generated nude images of students spark outrage at PA school

The rise of AI-generated sexually exploitative images has hit a Pennsylvania private school, leading to leadership changes and criminal investigations and highlighting growing concerns about AI misuse in educational settings.

Initial incident and response: A juvenile suspect at Lancaster Country Day School allegedly created and shared AI-generated nude photos of female students, prompting a police investigation and significant school turmoil.
- Police seized a suspect's iPhone 11 in August following reports of AI-manipulated photos of female students.
- The incident was first reported through the Safe2Say Something program in November 2023.
- The AI-generated photos were allegedly shared in a chat room, leading to...

Nov 19, 2024

Federal prosecutors arrest AI education company founder for fraud

The arrest of an AI education startup founder marks a significant development in the ongoing scrutiny of artificial intelligence companies in the education sector, particularly those serving major urban school districts.

Key allegations: Federal prosecutors have charged Joanna Smith-Griffin, the 33-year-old founder of AllHere Education Inc., with securities fraud, wire fraud, and identity theft.
- Smith-Griffin allegedly misrepresented her company's financials to secure millions in investor funding since 2020.
- The company created "Ed" the chatbot, which was designed to generate learning plans for students.
- Major school districts including Los Angeles Unified, New York City, and Atlanta had implemented the company's AI...
