Deepfakes Are Evolving. This Company Wants to Catch Them All

Deepfakes are becoming more sophisticated and accessible, posing risks to businesses and democracy. A new company founded by image-forensics expert Hany Farid aims to combat the problem with a mix of AI and traditional forensic techniques.

Key takeaways: Get Real Labs has developed software to detect AI-generated and manipulated images, audio, and video, and Fortune 500 companies are already testing it to spot deepfake job seekers:

  • Some companies have lost money to scammers who used deepfakes to impersonate real people in video job interviews, collected signing bonuses, and then disappeared.
  • The FBI and others have warned about the growing threat of deepfakes being used in job scams, romance scams, and other fraud.

Detecting deepfakes requires a multi-pronged approach: While AI is useful for flagging potential fakes, Farid emphasizes that manual forensic analysis is also critical:

  • Get Real Labs’ software analyzes metadata, runs AI models trained to spot fakes, and provides tools to manually examine images for discrepancies in shadows, perspective, and other physical properties (a simplified illustration of the metadata prong follows this list).
  • Farid cautions that relying solely on AI to detect increasingly sophisticated deepfakes is insufficient – a combination of AI and traditional forensics is needed.
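
The metadata prong mentioned above can be pictured with a short, hedged sketch. The Python example below is not Get Real Labs’ method; it simply reads an image’s EXIF data with Pillow and surfaces weak signals, such as a missing camera make/model or a Software tag naming a known generator. The generator list and file name are hypothetical.

    # Illustrative sketch only -- not Get Real Labs' actual pipeline. It shows the
    # kind of weak signal a metadata check can surface before deeper analysis.
    from PIL import Image
    from PIL.ExifTags import TAGS

    # Hypothetical list of generator names that some AI tools write into metadata.
    GENERATOR_HINTS = {"stable diffusion", "midjourney", "dall-e", "firefly"}

    def inspect_metadata(path: str) -> list[str]:
        """Return weak, non-conclusive warning signs found in an image's EXIF data."""
        warnings: list[str] = []
        with Image.open(path) as img:
            exif = img.getexif()
            if len(exif) == 0:
                # Many legitimate images (screenshots, web re-saves) also lack EXIF,
                # so this is a hint to look closer, never proof of manipulation.
                warnings.append("no EXIF metadata present")
                return warnings
            tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
            software = str(tags.get("Software", "")).lower()
            if any(hint in software for hint in GENERATOR_HINTS):
                warnings.append(f"Software tag names a known generator: {software!r}")
            if "Make" not in tags and "Model" not in tags:
                warnings.append("no camera make/model recorded")
        return warnings

    # Example usage with a hypothetical file name:
    # print(inspect_metadata("applicant_headshot.jpg"))

Because metadata can be trivially stripped or forged, checks like this only flag candidates for the deeper AI- and physics-based analysis the article describes.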

Deepfakes pose serious risks to politics and society: Beyond corporate fraud, manipulated media is already being deployed to deceive voters and undermine democracy:

  • Examples include fake robocalls discouraging voting, misleadingly edited political videos going viral, and Russian disinformation networks promoting AI-manipulated clips disparaging candidates.
  • Experts warn that as deepfake technology advances and spreads, its power to poison political discourse could become an existential threat, one more harmful than conventional cyberattacks.

Looking ahead: With AI-powered fakery becoming more pervasive and pernicious, demand for countermeasures like those from Get Real Labs is likely to grow:

  • It will be a constant battle to keep up as deepfake creators use the latest detection methods to train even better fakes that can evade them.
  • Widespread deepfakes have the potential to undermine trust and threaten democracy in more fundamental ways than conventional hacking and malware.
  • Combating the “poisoning of the human mind” from believable AI-generated disinformation may ultimately prove an even harder challenge than securing computer systems.