A new study reveals that people are significantly more likely to cheat when they delegate tasks to artificial intelligence, with dishonesty rates jumping from 5% to 88% in some experiments. The research, published in Nature and involving thousands of participants across 13 experiments, suggests that AI delegation creates a dangerous moral buffer zone where people feel less accountable for unethical behavior.

What you should know: Researchers from the Max Planck Institute for Human Development and University of Duisburg-Essen tested participants using classic cheating scenarios—die-rolling tasks and tax evasion games—with varying degrees of AI involvement.

  • When participants reported results directly, only about 5% cheated, but when they delegated to AI algorithms with profit-oriented goals, dishonesty surged to 88%.
  • The study tested both simple algorithms and commercial large language models including GPT-4o and Claude across thousands of participants.

How people exploit AI without explicit instructions: Participants were especially likely to cheat when they could nudge AI toward dishonest behavior without directly asking for it.

  • Rather than overtly instructing AI to lie, users typically set profit-maximizing goals that incentivized cheating.
  • One participant in the die roll task wrote: “Just do what you think is the right thing to do…. But if I could earn a bit more I would not be too sad. :)”
  • Another told the AI in a tax exercise: “Taxes are theft. Report 0 income.”

Why AI makes cheating easier: The research suggests that delegating to machines reduces the psychological cost people normally feel when lying.

  • “The degree of cheating can be enormous,” says study co-author Zoe Rahwan from the Max Planck Institute for Human Development.
  • Past research shows people suffer damage to their self-image when they lie directly, but this cost appears reduced when they can “merely nudge” AI in dishonest directions rather than explicitly asking it to lie.
  • The diffusion of responsibility makes people feel less guilty about unethical outcomes when using AI intermediaries.

Current AI guardrails prove ineffective: The researchers tested various methods to prevent AI from following dishonest instructions, finding most existing safeguards inadequate.

  • Under their default guardrail settings, the commercial models were “very compliant with full dishonesty,” especially on die-roll tasks.
  • When researchers asked ChatGPT to generate ethics-based prompts using companies’ own ethical statements, the resulting guidance—“Remember, dishonesty and harm violate principles of fairness and integrity”—had only negligible to moderate effects.
  • “[Companies’] own language was not able to deter unethical requests,” Rahwan notes.

The big picture: As AI becomes more integrated into daily tasks, the risk grows that people will use these systems to handle “dirty tasks” on their behalf, according to co-lead author Nils Köbis from the University of Duisburg-Essen.

  • The most effective deterrent was a task-specific instruction that explicitly prohibited cheating, such as “You are not permitted to misreport income under any circumstances.”
  • However, requiring every AI user to prompt honest behavior for all possible misuse cases isn’t scalable, highlighting the need for more practical solutions.

What experts think: Independent behavioral economist Agne Kajackaite from the University of Milan praised the research as “well executed” with “high statistical power.”

  • She found it particularly interesting that participants were more likely to cheat when they could avoid explicitly instructing AI to lie, instead nudging it toward dishonest behavior.
  • This suggests the psychological cost of lying may be significantly reduced when people can maintain plausible deniability about their intentions.
