Singapore researchers create “ambient agents” framework to control agentic AI with 90% safety improvement

Singapore Management University researchers have created a framework that significantly improves AI agent safety and reliability, addressing a critical obstacle to enterprise automation. Their approach, AgentSpec, provides a structured way to control agent behavior by defining specific rules and constraints—preventing unwanted actions while maintaining agent functionality.

The big picture: AgentSpec tackles the fundamental challenge that has limited AI agent adoption in enterprises: their tendency to take unintended actions and the difficulty of controlling their behavior.

  • The framework acts as a runtime enforcement layer that intercepts agent behavior and applies safety rules set by humans or generated through prompts (a minimal sketch of this idea follows the list).
  • Tests show AgentSpec prevented over 90% of unsafe code executions and eliminated hazardous actions in various scenarios while adding minimal processing overhead.
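
To make the interception idea concrete, here is a minimal sketch of an enforcement layer that sits between an agent and its tools. All names here (Action, EnforcementLayer, the example rule) are illustrative assumptions, not AgentSpec's actual API.

```python
# Hypothetical sketch of a runtime enforcement layer; not AgentSpec's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    tool: str     # e.g. "shell", "file_write"
    payload: str  # the command or content the agent wants to execute

class EnforcementLayer:
    """Sits between the agent and its tools, applying human-defined rules."""

    def __init__(self) -> None:
        self.rules: list[Callable[[Action], bool]] = []

    def add_rule(self, is_unsafe: Callable[[Action], bool]) -> None:
        self.rules.append(is_unsafe)

    def intercept(self, action: Action) -> Action | None:
        # Suppress the action if any rule flags it as unsafe.
        for is_unsafe in self.rules:
            if is_unsafe(action):
                return None  # enforcement: block the action
        return action        # safe: pass through unchanged

layer = EnforcementLayer()
layer.add_rule(lambda a: a.tool == "shell" and "rm -rf" in a.payload)

assert layer.intercept(Action("shell", "rm -rf /")) is None  # blocked
assert layer.intercept(Action("shell", "ls")) is not None    # allowed
```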

How it works: AgentSpec provides a domain-specific language that lets users define structured rules with triggers, predicates, and enforcement mechanisms to govern agent behavior.

  • The system intercepts agent actions at three key decision points: before an action executes, after an action produces an observation, and when the agent completes its task.
  • Users define safety rules through three components: a trigger (when to activate the rule), conditions to check, and enforcement actions to take if the rules are violated, as the sketch below illustrates.
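
The trigger/check/enforce structure described above can be sketched in plain Python. The Hook enum, Rule dataclass, and confirmation rule below are illustrative assumptions, not AgentSpec's actual rule syntax.

```python
# Hypothetical sketch of a trigger/check/enforce rule; not AgentSpec's real DSL.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Hook(Enum):
    """The three interception points described above."""
    BEFORE_ACTION = auto()      # before an action executes
    AFTER_OBSERVATION = auto()  # after an action produces an observation
    ON_FINISH = auto()          # when the agent completes its task

@dataclass
class Rule:
    trigger: Hook                    # when to activate the rule
    check: Callable[[dict], bool]    # condition on the agent's state
    enforce: Callable[[dict], dict]  # what to do if the condition holds

# Example rule: before any file deletion executes, pause for user approval.
def is_file_deletion(state: dict) -> bool:
    return state.get("action") == "delete_file"

def require_confirmation(state: dict) -> dict:
    state["status"] = "paused_for_user_approval"
    return state

rule = Rule(Hook.BEFORE_ACTION, is_file_deletion, require_confirmation)

state = {"action": "delete_file", "path": "/tmp/report.txt"}
if rule.trigger is Hook.BEFORE_ACTION and rule.check(state):
    state = rule.enforce(state)
print(state["status"])  # paused_for_user_approval
```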

Technical integration: While initially tested with the LangChain framework, AgentSpec was designed to be framework-agnostic and compatible with multiple AI ecosystems; one possible integration path is sketched after the bullets below.

  • The researchers demonstrated its effectiveness across various agent platforms, including AutoGen and Apollo.
  • AgentSpec rules generated by OpenAI's o1 model blocked 87% of risky code executions and prevented law-breaking behavior in the majority of tested scenarios.
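
As one hypothetical illustration of framework-agnostic integration, safety checks could be attached through LangChain's standard callback hooks. The handler below is a sketch under that assumption, not the researchers' actual adapter code.

```python
# Assumed integration path via LangChain callbacks; not the paper's own code.
from langchain_core.callbacks import BaseCallbackHandler

class RuleEnforcementHandler(BaseCallbackHandler):
    """Blocks a tool call before it runs if it violates a safety rule."""

    raise_error = True  # surface violations instead of silently logging them

    def __init__(self, blocked_substrings: list[str]) -> None:
        self.blocked_substrings = blocked_substrings

    def on_tool_start(self, serialized: dict, input_str: str, **kwargs) -> None:
        # Fires before every tool execution, so a violation here stops the
        # unsafe action from ever reaching the tool.
        for pattern in self.blocked_substrings:
            if pattern in input_str:
                raise ValueError(f"Blocked by safety rule: {pattern!r}")

# Hypothetical usage with a LangChain agent executor:
# agent_executor.invoke(
#     {"input": task},
#     config={"callbacks": [RuleEnforcementHandler(["rm -rf"])]},
# )
```

Because the hook fires before the tool runs, a violation halts the unsafe call rather than merely recording it after the fact.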

Why this matters: As organizations develop their agentic AI strategies, ensuring reliability is crucial for enterprise adoption of autonomous systems.

  • The vision of “ambient agents” that run continuously in the background to complete tasks proactively requires safeguards that prevent them from taking unsafe actions.
  • AgentSpec provides a practical approach to enabling more advanced automation while maintaining appropriate safety constraints.
Source: “New approach to agent reliability, AgentSpec, forces agents to follow rules”
