Deepfakes are becoming more sophisticated and more accessible, posing risks to businesses and democracy alike. A new company founded by image manipulation expert Hany Farid aims to combat the problem with a mix of AI and traditional forensic techniques.
Key takeaways: Get Real Labs has developed software that detects AI-generated and manipulated images, audio, and video; Fortune 500 companies are already testing it to spot deepfake job seekers:
- Some companies have lost money to scammers who used deepfakes to impersonate real people in video interviews, collecting signing bonuses and then disappearing.
- The FBI and others have warned about the growing threat of deepfakes being used in job scams, romance scams, and other fraud.
Detecting deepfakes requires a multi-pronged approach: AI is useful for flagging potential fakes, but Farid emphasizes that manual forensic analysis remains critical:
- Get Real Labs’ software analyzes metadata, uses AI models trained to spot fakes, and provides tools to manually examine images for discrepancies in shadows, perspective, and other physical properties.
- Farid cautions that relying solely on AI to detect increasingly sophisticated deepfakes is insufficient; a combination of AI and traditional forensics is needed.
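To make the metadata prong concrete, here is a toy illustration (not Get Real Labs' actual software; the function name and heuristic are invented for this sketch): a JPEG whose EXIF segment is missing may have passed through a re-encoding or generation pipeline that stripped it, which is one weak signal a forensic tool can combine with many others.

```python
# Toy metadata check, illustrative only: scan a JPEG byte stream for an
# EXIF APP1 segment. Its absence is a weak hint the image was re-encoded
# (e.g., by an editing or generation pipeline), never proof of a fake.
def has_exif_segment(data: bytes) -> bool:
    """Return True if the JPEG stream contains an EXIF APP1 segment."""
    if not data.startswith(b"\xff\xd8"):  # SOI marker opens every JPEG
        raise ValueError("not a JPEG stream")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:               # lost marker sync; stop scanning
            break
        marker = data[i + 1]
        if marker == 0x01 or 0xD0 <= marker <= 0xD9:
            i += 2                         # standalone markers carry no length
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                    # APP1 segment with EXIF header
        i += 2 + length                    # skip marker plus its payload
    return False
```

Real forensic pipelines go far beyond this, cross-checking camera tags, compression signatures, and physical cues like shadows and perspective; the point of the sketch is only that metadata offers cheap first-pass signals.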
Deepfakes pose serious risks to politics and society: Beyond corporate fraud, manipulated media is already being deployed to deceive voters and undermine democracy:
- Examples include fake robocalls discouraging voting, misleadingly edited political videos going viral, and Russian disinformation networks promoting AI-manipulated clips disparaging candidates.
- Experts warn that as deepfake technology advances and spreads, its capacity to poison political discourse could prove more harmful than conventional cyberattacks.
Looking ahead: With AI-powered fakery becoming more pervasive and pernicious, demand for countermeasures like those from Get Real Labs is likely to grow:
- It will be a constant battle to keep pace, as deepfake creators use the latest detection methods to train fakes that evade them.
- Widespread deepfakes have the potential to undermine trust and threaten democracy in more fundamental ways than conventional hacking and malware.
- Combating the “poisoning of the human mind” from believable AI-generated disinformation may ultimately prove an even harder challenge than securing computer systems.
Deepfakes Are Evolving. This Company Wants to Catch Them All