
Jul 30, 2024

Ferrari Exec Targeted in Deepfake Scam

Deepfake technology was used in an attempt to scam a Ferrari executive, highlighting a growing trend of AI-powered impersonation attacks targeting businesses. While the attempt was ultimately unsuccessful, the incident underscores the increasing sophistication of these scams and the need for heightened vigilance. The scam attempt: A Ferrari executive received seemingly legitimate WhatsApp messages from someone impersonating CEO Benedetto Vigna, discussing a confidential acquisition and requesting the signing of an NDA. The impersonator used a convincing imitation of Vigna's southern Italian accent and a profile picture of the CEO, adding to the scam's credibility. The fake Vigna claimed to be calling from a...

Jul 30, 2024

Deepfakes and Algorithms: How Bad Actors Weaponize AI to Manipulate Minds

The rise of artificial intelligence (AI) has brought about unprecedented opportunities, but also significant dangers as bad actors exploit the technology to manipulate people and undermine trust in the digital ecosystem. The dark side of AI: Bad actors, from cybercriminals to unethical corporations and rogue states, are weaponizing AI to craft sophisticated strategies that influence individuals and groups, often without their knowledge. Deepfakes, hyper-realistic video or audio recordings that make it appear as if someone is saying or doing something they never did, pose a significant threat to personal reputations and the integrity of information. AI-powered social media bots and...

Jul 30, 2024

Microsoft Urges Congress to Regulate AI Deepfakes

The race to regulate AI-generated deepfakes is heating up as Microsoft urges Congress to take action against the potential threats posed by this rapidly advancing technology, which could have far-reaching implications for politics, privacy, and public trust. Microsoft's call to action: In a recent blog post, Microsoft vice chair and president Brad Smith stressed the urgent need for policymakers to address the risks associated with AI-generated deepfakes. He emphasized that existing laws must evolve to combat deepfake fraud, as the technology can be used by cybercriminals to steal from everyday Americans. Microsoft is advocating for a comprehensive "deepfake fraud statute" that...

Jul 29, 2024

Musk Defends Deepfake of VP Harris as Parody

Elon Musk defended sharing a deepfake video of VP Kamala Harris, arguing it's a protected parody despite Gov. Newsom's vow to crack down on misleading political content. Musk shares deepfake video: Tesla CEO Elon Musk shared an AI-generated video of presidential candidate Kamala Harris, which features a synthetic voice mocking her campaign with statements she never actually made. The video was created by a YouTube user known as Mr. Reagan and was labeled as a parody when originally shared on Twitter/X. Musk, a prominent Trump supporter, re-shared the video without any disclaimer about its fake nature. Gov. Newsom condemns video,...

Jul 29, 2024

AI-Manipulated Kamala Harris Video Raises Concerns for 2024 Election

A manipulated video mimicking Vice President Kamala Harris' voice has raised concerns about the potential misuse of AI in politics as the 2024 presidential election approaches. The video, which convincingly impersonates Harris using many visuals from her real campaign ad, gained significant attention after Elon Musk shared it on his social media platform X without explicitly noting it was originally released as a parody. Key details about the manipulated video: The AI-generated voice-over makes false claims about Harris, referring to her as a "diversity hire" and suggesting she doesn't know "the first thing about running the country." The video retains "Harris...

Jul 26, 2024

TV Doctors Deepfaked, Exploited to Peddle Health Scams on Social Media

TV doctors "deepfaked" to promote health scams on social media: An investigation by the British Medical Journal has revealed that several well-known TV doctors, including Michael Mosley and Hilary Jones, have been "deepfaked" on social media platforms to promote fraudulent health products and scams. "Deepfaking" involves using artificial intelligence to create convincing videos of individuals by mapping their digital likeness onto someone else's body, essentially making it appear as if they are promoting or endorsing products they have no actual connection to. The investigation found videos on platforms like Facebook and YouTube, with one example showing a deepfaked Dr. Hilary...

Jul 25, 2024

Meta’s Oversight Board Exposes Flaws in Instagram’s Deepfake Moderation

Meta's Oversight Board found that Instagram failed to promptly take down an explicit AI-generated deepfake of an Indian public figure, revealing flaws in the company's moderation practices. Key findings and implications: The Oversight Board's investigation reveals that Meta's approach to moderating non-consensual deepfakes is overly reliant on media reports, potentially leaving victims who are not public figures more vulnerable. Meta only removed the deepfake of the Indian woman and added it to its internal database after the board began investigating, while a similar deepfake of an American woman was quickly deleted. The board expressed concern that "many victims of deepfake...

Jul 25, 2024

Senate Passes Landmark Bill to Combat Nonconsensual Deepfake Porn

The DEFIANCE Act, a bipartisan bill to provide legal recourse to victims of non-consensual deepfake pornography, has unanimously passed the Senate and now heads to the House. Key legislative details: The DEFIANCE Act amends the Violence Against Women Act to allow victims to sue producers, distributors, or recipients of deepfake porn if they knew or recklessly disregarded the victim's lack of consent. The bill provides a civil cause of action for both adults and minors; if passed by the House, it would be the first federal law to do so. Recent amendments clarify the definition of "digital forgery," update available damages, and add...

Jul 25, 2024

AI Disrupts Reality TV: Casting Challenges, Deepfakes, and Digital Clones

The rise of AI is transforming the reality TV industry, impacting both the casting process and the creation of unauthorized content featuring reality stars. AI's impact on casting: Reality TV casting directors are facing challenges due to applicants using AI filters and editing tools on their social media photos, making it harder to assess their real appearance. Valerie Penso-Cuculich, a casting director for shows like Love Island USA and The Real Housewives of Dubai, says potential contestants are increasingly using AI to alter their appearance, resulting in over-filtered images that don't reflect reality. When applicants show up for Zoom auditions,...

Jul 24, 2024

Senate Passes DEFIANCE Act, Enabling Victims to Sue Deepfake Creators for Damages

The U.S. Senate passed the DEFIANCE Act, a bill that allows victims of nonconsensual intimate AI-generated images, or "deepfakes," to sue the creators for damages, marking a significant step in addressing the growing problem of AI-enabled sexual exploitation. Key provisions of the DEFIANCE Act: The bill enables victims of sexually explicit deepfakes to seek civil remedies against those who created or processed the images with the intent to distribute them. Identifiable victims can receive up to $150,000 in damages, which can be increased to $250,000 if the incident is connected to sexual assault, stalking, harassment, or if it directly caused...

Jul 24, 2024

ACLU Argues New Laws Regulating Deepfakes Infringe on Free Speech

The ACLU is fighting to protect free speech rights related to AI-generated content, arguing that some of the new laws regulating deepfakes and other AI outputs conflict with the First Amendment. This stance is leading to an uncomfortable reckoning for the movement to control AI. Key takeaways: AI itself has no rights, but people using AI to communicate have First Amendment protections. The ACLU contends that citizens have a constitutional right to use AI to spread untruths, just as they do with other forms of speech. Restricting who can listen to AI-generated speech would also infringe on the "right to...

Jul 19, 2024

Astronomers’ Galaxy-Studying Techniques Are Helping To Identify AI-Generated Deepfakes

The discovery that AI-generated deepfakes can be identified by analyzing the reflections in people's eyes, similar to how astronomers study galaxies, has significant implications for combating the spread of misinformation. Key findings: Researchers at the University of Hull have developed a method to detect AI-generated deepfakes by examining the consistency of light reflections in a person's eyeballs. In real images, the reflections in both eyeballs are generally consistent, while in deepfakes, the reflections often lack consistency between the left and right eyes. By employing techniques used in astronomy to quantify the reflections and check for consistency, the team found that...
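
The underlying check is simple to sketch. The snippet below is a minimal, hedged illustration of the general idea rather than the Hull team's actual code: it computes the Gini coefficient (a statistic astronomers use to describe how concentrated a galaxy's light is) over the pixel intensities of each cropped eyeball and flags a large mismatch between the two eyes. The grayscale eye crops and the 0.1 threshold are illustrative assumptions, not values from the study.

```python
import numpy as np


def gini(values: np.ndarray) -> float:
    """Gini coefficient of pixel intensities, a statistic astronomers
    use to describe how concentrated a galaxy's light is."""
    v = np.sort(values.astype(float).ravel())
    n = v.size
    total = v.sum()
    if n == 0 or total == 0:
        return 0.0
    index = np.arange(1, n + 1)
    # Standard closed form for sorted, non-negative values.
    return float(((2 * index - n - 1) * v).sum() / (n * total))


def reflection_mismatch(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Absolute difference between the Gini values of the two eye crops.
    Real photos tend to give similar values; deepfakes often do not."""
    return abs(gini(left_eye) - gini(right_eye))


def looks_suspicious(left_eye: np.ndarray, right_eye: np.ndarray,
                     threshold: float = 0.1) -> bool:
    """Flag an image as a possible deepfake if the two reflections disagree.
    The 0.1 threshold is a placeholder, not a value from the paper."""
    return reflection_mismatch(left_eye, right_eye) > threshold
```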

Jul 18, 2024

AI Startup Tackles Deepfake Threat Ahead of US Elections

The AI startup ElevenLabs is partnering with a deepfake detection company to address concerns about the potential misuse of its voice cloning technology, particularly in the context of the upcoming US elections. Key details of the partnership: ElevenLabs is collaborating with Reality Defender, a US-based company specializing in deepfake detection for governments, officials, and enterprises. This partnership is part of ElevenLabs' efforts to enhance safety measures on its platform and prevent the misuse of its AI-powered voice cloning technology. The move comes after researchers raised concerns earlier this year about ElevenLabs' technology being used to create deepfake audio of US...

Jul 18, 2024

Scammers Steal Identities with Deepfakes: How to Spot AI-Generated Deception

AI-generated deepfakes pose a growing threat as scammers leverage advanced AI tools to deceive people, but there are ways to spot the telltale signs of manipulation. Key takeaways: The increasing realism of AI-generated voice cloning and video manipulation makes it harder to distinguish deepfakes from authentic content, enabling scammers to misuse the likenesses of trusted figures to promote fraudulent products. Scammers targeting doctors: British TV doctors have had their identities stolen to sell dubious health products they do not actually endorse, with the deepfake videos quickly reappearing even after being reported and removed from social media platforms. Industry response and...

Jul 5, 2024

YouTube Cracks Down on Deepfakes, Allows Takedowns of Unauthorized AI-Generated Content

YouTube steps up efforts to combat AI deepfakes with a new removal policy, allowing individuals to request takedowns of unauthorized AI-generated content depicting them. Key details of the updated policy: YouTube has implemented a new policy to address the rise of AI-generated content that mimics individuals without their consent. Affected individuals can now request the removal of AI-generated content that realistically depicts them through YouTube's privacy request process. To qualify for removal, the content must depict a realistic altered or synthetic version of the individual's likeness. Content creators have two days to remove the likeness or the entire video after a...

Jul 3, 2024

AI Trains on Kids’ Photos Without Consent, Enabling Realistic Deepfakes and Tracking

A Human Rights Watch investigation has revealed that photos of real children posted online are being used to train AI image generators without consent, posing significant privacy and safety risks. Key findings from Australia: HRW researcher Hye Jung Han discovered 190 photos of Australian children, including Indigenous children, linked in the LAION-5B AI dataset. The photos span entire childhoods, enabling AI to generate realistic deepfakes of these children. Dataset URLs sometimes reveal identifying information like names and locations, making it easy to track down the children. Even photos posted with strict privacy settings, such as unlisted YouTube videos, were scraped...

Jun 29, 2024

AI vs AI: The High-Stakes Battle to Detect Deepfakes and Defend Reality

Deepfakes are becoming more sophisticated and accessible, posing risks for businesses and democracy. A new company founded by image manipulation expert Hany Farid aims to combat the problem with AI and traditional forensic techniques. Key takeaways: Get Real Labs has developed software to detect AI-generated and manipulated images, audio, and video; Fortune 500 companies are already testing it to spot deepfake job seekers. Some companies have lost money to scammers using deepfakes to impersonate real people in video interviews, taking signing bonuses and disappearing. The FBI and others have warned about the growing threat of deepfakes being used in...

Jun 26, 2024

Deepfakes and Disinformation: AI Misuse Threatens Democracy, Study Reveals

A new study from Google's DeepMind division sheds light on the most common malicious uses of AI, revealing that political deepfakes and disinformation campaigns are the top concerns. Key findings: Deepfakes and disinformation dominate AI misuse. The study, conducted in collaboration with Google's Jigsaw unit, analyzed around 200 incidents of AI misuse and found that the creation of realistic but fake images, videos, and audio of people, known as deepfakes, was the most prevalent form of AI misuse, nearly twice as common as the next highest category. The second most common misuse was the falsification of information using text-based tools...

Jun 26, 2024

Honor’s AI Innovations: Protecting Eyes, Detecting Deepfakes, and Redefining Human-AI Synergy

Honor unveils AI-powered eye protection technology: Honor has introduced its AI Defocus Eye Protection technology at MWC Shanghai, leveraging on-device AI models to alleviate common eye issues associated with prolonged screen use, such as myopia (nearsightedness). The technology uses AI to simulate defocus glasses on the phone screen; such glasses were originally designed to correct nearsightedness in children but have also been shown to enhance focus, prevent eye fatigue, and improve comfort for adults. According to Honor, the technique has already demonstrated a reduction in users' transient myopia by an average of 13 degrees after 25 minutes of reading, with some users experiencing...

Jun 25, 2024

Deepfakes Threaten Democracy: Google Study Reveals AI’s Role in Swaying Public Opinion

A new study from Google's DeepMind reveals that the most common misuse of AI is creating political deepfakes to sway public opinion, raising concerns about the impact on elections and the spread of misinformation. Key findings: The research, conducted in collaboration with Google's Jigsaw unit, analyzed around 200 incidents of AI misuse and found that creating realistic fake images, videos, and audio of politicians and celebrities was the most prevalent misuse, nearly twice as common as the next highest category. Shaping public opinion was the primary goal, accounting for 27% of misuse cases, followed by financial gain through services like...
