AI-powered subway scanners fall short in New York City pilot: A recent trial of artificial intelligence-driven weapons detection technology in New York City’s subway system yielded disappointing results, raising questions about the efficacy and feasibility of such security measures in mass transit.
Key findings of the pilot program: The 30-day test of AI-powered scanners across 20 subway stations revealed significant limitations in the technology’s ability to accurately detect firearms.
- The scanners performed 2,749 scans but failed to detect any firearms during the trial period.
- The system recorded 118 false positives, a 4.29% false alarm rate.
- The system did identify 12 knives, though it remains unclear whether these were illegal weapons or permitted tools like pocket knives.
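The reported 4.29% false alarm rate follows directly from the released figures, as a quick sanity check shows (the variable names below are illustrative; only the scan and false-positive counts come from the NYPD's released data):

```python
# Figures released from the 30-day pilot across 20 stations
total_scans = 2749
false_positives = 118
firearms_detected = 0
knives_flagged = 12  # legality of the flagged knives was not specified

# False alarm rate: share of all scans that triggered a false positive
false_alarm_rate = false_positives / total_scans
print(f"False alarm rate: {false_alarm_rate:.2%}")  # prints "False alarm rate: 4.29%"
```

Put another way, roughly 1 in every 23 scans produced a false alarm while no firearms were found, which is the arithmetic behind critics' objections.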
Context and motivation: Mayor Eric Adams initiated the pilot program as part of broader efforts to enhance safety and deter violence in the city’s subway system.
- The portable scanners, manufactured by Evolv, were introduced with the aim of providing a non-intrusive security measure for the bustling transit network.
- This initiative came in response to growing concerns about subway safety and a series of high-profile violent incidents in recent years.
Civil liberties concerns: The implementation of AI-powered scanning technology in public transit has sparked debate among civil rights advocates and privacy experts.
- Critics argue that scanning millions of subway riders is neither practical nor constitutionally sound.
- The program raises questions about the balance between public safety measures and individual privacy rights in urban environments.
Limited transparency: The New York Police Department’s release of data from the pilot program left several crucial questions unanswered.
- The NYPD did not disclose information on screening times, staffing requirements, or the number of riders who refused to be searched.
- This lack of comprehensive data has made it challenging for independent experts to fully assess the program’s efficiency and impact on passenger flow.
Manufacturer’s credibility issues: Evolv, the company behind the scanning technology, faces legal challenges that cast doubt on the reliability of its products.
- The firm is currently under federal investigation regarding its marketing practices.
- A class-action lawsuit accuses Evolv of exaggerating the capabilities of its devices, further complicating the evaluation of the technology’s potential.
Expert reactions: The pilot program’s results have elicited strong responses from legal and civil rights organizations.
- The Legal Aid Society condemned the program as “objectively a failure” based on the high false positive rate and lack of firearm detections.
- Advocates are calling for the program to be discontinued, arguing that the results do not justify its continuation or expansion.
Implications for urban security: The underwhelming performance of the AI-powered scanners in New York City’s subway system highlights the challenges of implementing advanced security technologies in complex urban environments.
- The trial’s outcome may influence decisions on similar security measures in other major cities and transit systems.
- It underscores the need for thorough testing and evaluation of AI-driven security solutions before wide-scale deployment in public spaces.
Balancing innovation and practicality: The pilot program’s shortcomings emphasize the importance of critically assessing new technologies in real-world conditions.
- While AI and machine learning offer promising advancements in security, their application in high-traffic public areas presents unique challenges.
- The experience in New York City serves as a cautionary tale for other municipalities considering similar technology-driven security measures.
Looking ahead: The future of AI-powered security in New York’s subway system remains uncertain following the pilot’s disappointing results.
- City officials and the NYPD will need to reevaluate their approach to enhancing subway safety in light of this experience.
- The outcome may spur investment in alternative security measures or improvements to existing AI technologies to address the identified shortcomings.