The Rise of a Controversial AI-Powered Crime-Fighting Tool: Cybercheck, an artificial intelligence system developed by the Canadian company Global Intelligence, claims to geolocate individuals, in real time or in the past, using only open-source data and algorithms, a claim that raises significant concerns about its accuracy and ethical implications.

Widespread adoption and bold claims: Cybercheck has gained traction among law enforcement agencies, with more than 345 departments in the United States using the tool for approximately 24,000 searches since 2017.

  • Adam Mosher, the founder of Global Intelligence, asserts that Cybercheck operates as a fully automated system requiring no human intervention.
  • The company markets Cybercheck as a powerful investigative tool capable of providing precise location data based solely on publicly available information.

Accuracy concerns and unverifiable evidence: A WIRED investigation has uncovered numerous instances where Cybercheck’s evidence was demonstrably incorrect or impossible to verify, casting doubt on the system’s reliability.

  • In multiple Ohio murder cases, prosecutors ultimately decided against using Cybercheck reports as evidence after defense attorneys scrutinized the data.
  • Open-source intelligence experts have expressed skepticism about Cybercheck’s claims, stating that much of the information it purportedly provides would be impossible to obtain using only public data sources.

Lack of transparency and accountability: Cybercheck’s operational methods and data sources remain shrouded in secrecy, raising significant concerns about the tool’s reliability and potential for misuse.

  • The system does not retain supporting evidence for its findings, making it difficult to verify the accuracy of its reports or challenge its conclusions.
  • This lack of transparency has led to questions about the tool’s compliance with legal and ethical standards for evidence gathering and presentation in criminal cases.

Mixed results and growing skepticism: Law enforcement agencies report varying experiences with Cybercheck, highlighting the inconsistent nature of the tool’s performance.

  • Some departments have found Cybercheck to be a helpful investigative aid, providing leads or corroborating existing information.
  • Other agencies report instances where Cybercheck provided false or misleading information, potentially jeopardizing investigations or leading to wrongful accusations.

Legal and ethical implications: The use of Cybercheck in criminal investigations raises important questions about due process, privacy rights, and the admissibility of AI-generated evidence in court.

  • Defense attorneys have successfully challenged Cybercheck reports in several cases, leading to the exclusion of this evidence from trial proceedings.
  • The tool’s lack of transparency makes it difficult for defendants to effectively cross-examine or challenge the evidence presented against them.

Broader context of AI in law enforcement: Cybercheck’s controversies highlight the growing debate surrounding the use of artificial intelligence and algorithmic decision-making in the criminal justice system.

  • As more law enforcement agencies adopt AI-powered tools, concerns about accuracy, bias, and accountability continue to mount.
  • The Cybercheck case underscores the need for robust oversight, transparency, and validation processes for AI systems used in high-stakes contexts like criminal investigations.

Analyzing deeper: The future of AI in policing: The Cybercheck controversy serves as a cautionary tale about the potential pitfalls of relying too heavily on opaque AI systems in law enforcement.

  • As AI technology continues to advance, it is crucial for policymakers, law enforcement agencies, and technology companies to work together to establish clear guidelines and standards for the development and deployment of AI-powered investigative tools.
  • Ensuring transparency, accountability, and the protection of individual rights must be paramount as the criminal justice system navigates the integration of artificial intelligence into its practices.
