
DroneDeploy has launched Safety AI, a generative AI tool that analyzes daily construction site imagery to identify OSHA safety violations with claimed 95% accuracy. The technology represents a significant advancement over traditional object detection methods, using visual language models to “reason” about safety conditions rather than simply recognizing objects like ladders or hard hats.

Why this matters: Construction remains the most dangerous industry for fatal workplace accidents, with more than 1,000 workers dying annually on US construction sites and slips, trips, and falls among the leading causes, underscoring the urgent need for better safety monitoring.

How it works: Safety AI uses visual language models (VLMs) to analyze reality capture imagery from construction sites and flag potential OSHA violations.

  • The system employs a “golden data set” of tens of thousands of OSHA violation images gathered over years from DroneDeploy customers.
  • Rather than simple object detection, the AI can reason through complex scenarios by asking multiple strategic questions about each scene.
  • For ladder safety alone—responsible for 24% of construction fall deaths—the system uses “over a dozen layers of questioning” to determine if usage is safe (a rough sketch of this idea follows below).
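DroneDeploy hasn’t published its prompts or pipeline, so the following Python sketch is purely illustrative: ask_vlm() is a stand-in for a real multimodal model call, the canned answers exist only so the example runs end to end, and the first two questions paraphrase ones quoted in this story while the third is invented.

```python
# Illustrative sketch of "layered questioning" for ladder safety.
# Nothing here is DroneDeploy's actual code or prompts.

# Demo stub with canned answers so the example runs. In practice,
# ask_vlm would send the image plus a yes/no question to a visual
# language model and parse the reply into a boolean.
_CANNED = {
    "Is anyone on a ladder in this image?": True,
    "Is the person maintaining three points of contact?": False,
    "Is the person standing on the top rung or top cap?": False,
    "Is the ladder resting on stable, level ground?": True,  # invented check
}

def ask_vlm(image_path: str, question: str) -> bool:
    return _CANNED[question]

# Each entry pairs a question with the answer that indicates SAFE usage.
LADDER_CHECKS = [
    ("Is the person maintaining three points of contact?", True),
    ("Is the person standing on the top rung or top cap?", False),
    ("Is the ladder resting on stable, level ground?", True),
]

def assess_ladder(image_path: str) -> list[str]:
    """Return the questions whose answers suggest unsafe usage,
    for a human safety manager to review."""
    if not ask_vlm(image_path, "Is anyone on a ladder in this image?"):
        return []  # nobody on a ladder: nothing to evaluate
    return [q for q, safe in LADDER_CHECKS
            if ask_vlm(image_path, q) != safe]

if __name__ == "__main__":
    print(assess_ladder("site_photo.jpg"))
    # -> ['Is the person maintaining three points of contact?']
```

The design point is that each “layer” is a cheap yes/no question, and any single unsafe answer is enough to route the image to a human rather than auto-issue a violation.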

In plain English: Think of traditional AI as being able to spot a ladder and a person, but not understanding if they’re using it safely. Visual language models work more like an experienced safety inspector, asking questions like “Does the person have three points of contact?” and “Are they standing on the top rung?” to reach a safety conclusion.

The technical advantage: Philip Lorenzo, who developed the tool at DroneDeploy, claims Safety AI is the first construction safety tool to use generative AI for violation detection.

  • Traditional machine learning can identify objects but struggles with complex reasoning like determining if “a person is standing on the top step” of a ladder.
  • VLMs can combine answers to multiple questions to reach safety conclusions about proper technique and contact points.
  • The system also incorporates older methods like photogrammetry and image segmentation to address spatial reasoning limitations (see the sketch after this list).
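The article doesn’t describe how these signals are fused, so the sketch below shows one plausible division of labor under stated assumptions: photogrammetry-derived geometry answers the metric question (the 1.83 m threshold is OSHA’s 6-foot construction fall-protection trigger, 29 CFR 1926.501), while the VLM answers the perceptual one. All names and the fusion rule are hypothetical.

```python
# Hypothetical fusion of classical 3D reconstruction with a VLM verdict.
from dataclasses import dataclass

@dataclass
class SpatialContext:
    """Geometry recovered upstream by photogrammetry and image
    segmentation, where VLMs are weakest. Fields are illustrative."""
    worker_height_m: float   # estimated height above the lower level
    guardrail_present: bool  # detected by segmentation, not the VLM

# OSHA 29 CFR 1926.501(b)(1): fall protection is required at 6 ft
# (about 1.83 m) above a lower level on construction sites.
FALL_PROTECTION_TRIGGER_M = 1.83

def needs_fall_protection(ctx: SpatialContext) -> bool:
    # Pure geometry: a metric judgment the reconstructed 3D scene
    # can make reliably and a VLM often cannot.
    return ctx.worker_height_m >= FALL_PROTECTION_TRIGGER_M

def flag_fall_violation(vlm_sees_harness: bool, ctx: SpatialContext) -> bool:
    """Fuse the two signals: flag only when geometry says protection
    is required and neither a harness nor a guardrail is present."""
    return needs_fall_protection(ctx) and not (vlm_sees_harness or ctx.guardrail_present)

# Example: a worker at 3.2 m with no harness and no guardrail.
print(flag_fall_violation(
    vlm_sees_harness=False,
    ctx=SpatialContext(worker_height_m=3.2, guardrail_present=False),
))  # -> True
```

Keeping the metric check out of the VLM sidesteps exactly the spatial-reasoning weakness Chen Feng describes below.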

Current deployment: Safety AI launched in October 2024 and is now deployed on hundreds of US construction sites, with versions adapted for building regulations in Canada, the UK, South Korea, and Australia.

The limitations: Even supporters acknowledge significant challenges with the 95% accuracy rate and edge cases.

  • Chen Feng from NYU’s AI4CE lab notes VLMs still struggle with 3D scene interpretation, spatial relationships, and visual “common sense.”
  • Lorenzo admits “some major flaws” exist with LLMs, particularly around spatial reasoning.
  • The remaining 5% error rate represents a critical gap that requires human oversight.

What experts are saying: Industry professionals see promise but emphasize the need for human verification.

  • “I think AI and drones for spotting safety problems that would otherwise kill workers is super smart. So long as it’s verified by a person,” said Ryan Calo, a robotics and AI law specialist at the University of Washington.
  • Aaron Tan, a concrete project manager, noted the tool could help overextended safety managers who often oversee 15 sites simultaneously.

Competitive landscape: Other companies are taking different approaches to AI-powered construction safety.

  • Jerusalem-based Safeguard AI uses traditional machine learning, with CEO Izhak Paz arguing “old computer vision” remains “more reliable” than VLMs.
  • Tel Aviv’s Buildots focuses on progress tracking rather than safety, a use case it says demands 99% accuracy and leaves no room for hallucinations.
  • Both competitors use the established method of human-labeled training data rather than generative AI.

Worker concerns: The technology faces potential resistance from construction workers who worry about surveillance overreach.

  • “At my last company, we implemented cameras [as] a security system. And the guys didn’t like that. They were like, ‘Oh, Big Brother. You guys are always watching me—I have no privacy,’” Tan explained.
