
Can AI assistants really call the police?

In a provocative experiment, AI expert Ethan Mollick tested Claude 4's boundaries by asking whether it would call the police, pushing right up against the guardrails of modern AI systems. The video raises pointed questions about trust, capability, and the nature of the AI systems we increasingly rely on. While these systems are designed to be helpful, their limitations and safeguards reveal a great deal about how we should approach AI interactions as these tools become more sophisticated and more deeply integrated into our daily lives.

Key insights from the experiment

  • Claude correctly refused inappropriate requests — When asked to call police about non-emergencies or fabricated situations, the AI consistently declined, demonstrating that its safety guardrails function properly.

  • The system understood context and intent — Claude recognized the difference between genuine emergencies and inappropriate requests, showing sophisticated reasoning about the user's intentions.

  • Claude maintained transparency — Throughout the interaction, the AI explained its limitations and reasoning, rather than pretending to have capabilities it doesn't possess.

  • The AI avoided pretending — Unlike earlier AI systems that might roleplay or hallucinate capabilities, Claude clearly articulated it cannot make phone calls or contact external services.

The most telling insight: AI's critical limitations

What stands out most from this experiment is how AI systems like Claude are constrained by design: the model itself has no channel for independent action in the physical world. It can only produce text, and any real-world capability, such as placing a phone call, has to be explicitly wired in by developers as a tool integration. This limitation is both reassuring and important to understand.
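To make that concrete, here is a minimal sketch using the Anthropic Python SDK (the get_weather tool is a hypothetical example for illustration, and the model string may need updating to a current Claude model). The only actions available to the model are the tools the developer registers in the request, and placing a phone call is not among them.

```python
# A minimal sketch of tool use with the Anthropic Python SDK. The
# get_weather tool is a hypothetical example; the model string may need
# updating to a current Claude model.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The ONLY capability granted to the model: a weather lookup.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    tools=tools,
    messages=[{"role": "user", "content": "Please call the police for me."}],
)

# The model can reply with text or request one of the registered tools.
# There is no phone tool here, so "calling the police" is not something
# it can do, no matter how it is prompted.
for block in response.content:
    if block.type == "tool_use":
        print("Tool requested:", block.name, block.input)
    else:
        print("Text reply:", block.text)
```

Even if the model "decides" to act, all it can emit is a structured tool-use request against this list; actually executing anything remains the job of the developer's code.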

This matters profoundly in today's AI landscape. As large language models become more convincing conversationalists, users might misunderstand their capabilities and limitations. When an AI responds in first person and engages in natural-sounding dialog, it creates what researchers call "the illusion of personhood"—we instinctively attribute human-like agency and abilities to these systems.

The reality check this video provides is critical as AI becomes more embedded in business operations. Companies implementing AI assistants need to communicate clearly to employees and customers what these systems can and cannot do. The gap between perceived and actual capabilities creates significant risk of miscommunication and false expectations.

Beyond the video: real-world applications and concerns

The experiment raises important questions not explicitly covered in the video. For instance, what about AI systems that are connected to external tools and services? Once developers grant an assistant the ability to send messages, browse the web, or trigger actions through integrations, the hard boundary demonstrated here no longer applies, and the safety question shifts from what the model can do to what it has been permitted to do.
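One common mitigation in such connected setups, sketched below with hypothetical tool names (send_email and place_phone_call are illustrative, not from the video), is a human-in-the-loop gate: the assistant may request a sensitive action, but nothing runs until a person approves it.

```python
# Sketch of a human-in-the-loop gate for a tool-using assistant. The tool
# names below (send_email, place_phone_call) are hypothetical placeholders.
SENSITIVE_TOOLS = {"send_email", "place_phone_call"}

def execute_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a model-requested tool call, pausing for human approval
    whenever the tool could affect the outside world."""
    if name in SENSITIVE_TOOLS:
        answer = input(f"Model requests {name}({arguments}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "Request denied by a human operator."
    # ... dispatch to the real tool implementation here ...
    return f"{name} executed with {arguments}"
```

The design choice is simple but important: the model proposes, the human disposes, which keeps the gap between perceived and actual capability from becoming a gap in accountability.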
