Singapore researchers create AgentSpec framework to control agentic AI, blocking over 90% of unsafe actions in tests

Singapore Management University researchers have created a framework that significantly improves AI agent safety and reliability, addressing a critical obstacle to enterprise automation. Their approach, AgentSpec, provides a structured way to control agent behavior by defining rules and constraints that prevent unwanted actions while preserving agent functionality.

The big picture: AgentSpec tackles the fundamental challenge that has limited AI agent adoption in enterprises: agents tend to take unintended actions and are difficult to control.

  • The framework acts as a runtime enforcement layer that intercepts agent behavior and applies safety rules set by humans or generated through prompts.
  • Tests show AgentSpec prevented over 90% of unsafe code executions and eliminated hazardous actions in various scenarios while adding minimal processing overhead.

How it works: AgentSpec provides a domain-specific language that lets users define structured rules with triggers, predicates, and enforcement mechanisms that govern agent behavior.

  • The system intercepts agent actions at three key decision points: before an action executes, after an action produces an observation, and when the agent completes its task.
  • Users define safety rules through three components: a trigger (when to activate the rule), predicates (the conditions to check), and enforcement actions to take if a rule is violated; a minimal sketch of this structure follows below.
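The sketch below illustrates that trigger/predicate/enforcement structure in plain Python. The `Rule` and `Enforcer` names, the trigger strings, and the example rule are illustrative assumptions for this article, not AgentSpec's published DSL or API:

```python
# Minimal sketch of an AgentSpec-style rule engine (illustrative, not the real API).
# The three trigger values correspond to the article's three interception points:
# before an action executes, after an observation, and at task completion.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: str                          # "pre_action", "post_observation", or "on_finish"
    predicate: Callable[[dict], bool]     # condition to check on the intercepted event
    enforce: Callable[[dict], dict]       # enforcement action when the predicate fires

class Enforcer:
    """Runtime layer that intercepts agent events and applies matching rules."""
    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def check(self, trigger: str, event: dict) -> dict:
        for rule in self.rules:
            if rule.trigger == trigger and rule.predicate(event):
                return rule.enforce(event)   # rule fired: apply its enforcement action
        return event                         # no rule fired: pass the event through

# Example rule: block shell commands that delete files before they execute.
block_rm = Rule(
    trigger="pre_action",
    predicate=lambda e: e.get("tool") == "shell" and "rm -rf" in e.get("input", ""),
    enforce=lambda e: {**e, "blocked": True, "reason": "destructive command"},
)

enforcer = Enforcer([block_rm])
print(enforcer.check("pre_action", {"tool": "shell", "input": "rm -rf /tmp/data"}))
# -> {'tool': 'shell', 'input': 'rm -rf /tmp/data', 'blocked': True, 'reason': 'destructive command'}
```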

Technical integration: While initially tested with the LangChain framework, AgentSpec was designed to be framework-agnostic and compatible with multiple AI ecosystems; a hypothetical integration sketch follows the bullets below.

  • The researchers demonstrated its effectiveness across various agent platforms, including AutoGen and Apollo.
  • LLM-generated AgentSpec rules, produced with OpenAI's o1 model, enforced safeguards on 87% of risky code executions and prevented law-breaking behavior in the majority of tested scenarios.
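To show what framework-agnostic integration could look like, here is a hypothetical sketch in which a decorator routes an agent's tool calls through a safety check before they execute. The `guarded` decorator and `no_destructive_shell` rule are invented for illustration and do not reflect AgentSpec's or LangChain's actual APIs:

```python
# Hypothetical framework-agnostic integration: wrap whatever callable an agent
# framework uses to execute tools, so every call passes a pre-action check first.
from functools import wraps
from typing import Callable

def guarded(check: Callable[[str, str], str | None]):
    """Route a tool call through a safety check; block it if the check objects."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(tool_input: str):
            verdict = check(fn.__name__, tool_input)   # pre-action interception
            if verdict is not None:
                return f"Action blocked: {verdict}"
            return fn(tool_input)
        return wrapper
    return decorate

def no_destructive_shell(tool: str, tool_input: str) -> str | None:
    """Example rule: refuse destructive shell commands."""
    if tool == "run_shell" and "rm -rf" in tool_input:
        return "destructive command"
    return None                                        # no rule fired

@guarded(no_destructive_shell)
def run_shell(command: str) -> str:
    # A real framework would execute the command here; stubbed for the sketch.
    return f"executed: {command}"

print(run_shell("ls /tmp"))           # -> executed: ls /tmp
print(run_shell("rm -rf /tmp/data"))  # -> Action blocked: destructive command
```

Wrapping the tool-execution callable rather than any one agent class is what would keep a pattern like this portable across ecosystems such as LangChain and AutoGen.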

Why this matters: As organizations develop their agentic strategy, ensuring reliability is crucial for enterprise adoption of autonomous AI systems.

  • The vision of “ambient agents” that run continuously in the background and proactively complete tasks requires safeguards to keep them from taking unsafe actions.
  • AgentSpec provides a practical approach to enabling more advanced automation while maintaining appropriate safety constraints.
Source: New approach to agent reliability, AgentSpec, forces agents to follow rules
