Defense attorneys challenge AI-powered crime-fighting tool

The rise of a controversial AI-powered crime-fighting tool: Cybercheck, an artificial intelligence system developed by Canadian company Global Intelligence, claims to geolocate individuals, both in real time and retroactively, using only open-source data and algorithms, raising significant concerns about its accuracy and ethical implications.

Widespread adoption and bold claims: Cybercheck has gained traction among law enforcement agencies, with over 345 departments in the United States utilizing the tool for approximately 24,000 searches since 2017.

  • Adam Mosher, the founder of Global Intelligence, asserts that Cybercheck operates as a fully automated system requiring no human intervention.
  • The company markets Cybercheck as a powerful investigative tool capable of providing precise location data based solely on publicly available information.

Accuracy concerns and unverifiable evidence: A WIRED investigation has uncovered numerous instances where Cybercheck’s evidence was demonstrably incorrect or impossible to verify, casting doubt on the system’s reliability.

  • In multiple Ohio murder cases, prosecutors ultimately decided against using Cybercheck reports as evidence after defense attorneys scrutinized the data.
  • Open-source intelligence experts have expressed skepticism about Cybercheck’s claims, stating that much of the information it purportedly provides would be impossible to obtain using only public data sources.

Lack of transparency and accountability: Cybercheck’s operational methods and data sources remain shrouded in secrecy, raising significant concerns about the tool’s reliability and potential for misuse.

  • The system does not retain supporting evidence for its findings, making it difficult to verify the accuracy of its reports or challenge its conclusions.
  • This lack of transparency has led to questions about the tool’s compliance with legal and ethical standards for evidence gathering and presentation in criminal cases.

Mixed results and growing skepticism: Law enforcement agencies report varying experiences with Cybercheck, highlighting the inconsistent nature of the tool’s performance.

  • Some departments have found Cybercheck to be a helpful investigative aid, providing leads or corroborating existing information.
  • Other agencies report instances where Cybercheck provided false or misleading information, potentially jeopardizing investigations or leading to wrongful accusations.

Legal and ethical implications: The use of Cybercheck in criminal investigations raises important questions about due process, privacy rights, and the admissibility of AI-generated evidence in court.

  • Defense attorneys have successfully challenged Cybercheck reports in several cases, leading to the exclusion of this evidence from trial proceedings.
  • The tool’s lack of transparency makes it difficult for defendants to effectively cross-examine or challenge the evidence presented against them.

Broader context of AI in law enforcement: Cybercheck’s controversies highlight the growing debate surrounding the use of artificial intelligence and algorithmic decision-making in the criminal justice system.

  • As more law enforcement agencies adopt AI-powered tools, concerns about accuracy, bias, and accountability continue to mount.
  • The Cybercheck case underscores the need for robust oversight, transparency, and validation processes for AI systems used in high-stakes contexts like criminal investigations.

Analyzing deeper: The future of AI in policing: The Cybercheck controversy serves as a cautionary tale about the potential pitfalls of relying too heavily on opaque AI systems in law enforcement.

  • As AI technology continues to advance, policymakers, law enforcement agencies, and technology companies will need to work together to establish clear guidelines and standards for developing and deploying AI-powered investigative tools.
  • Ensuring transparency, accountability, and the protection of individual rights must be paramount as the criminal justice system navigates the integration of artificial intelligence into its practices.
Source: "It Seemed Like an AI Crime-Fighting Super Tool. Then Defense Attorneys Started Asking Questions" (WIRED)
