Singapore researchers create AgentSpec framework to control agentic AI, preventing over 90% of unsafe code executions

Singapore Management University researchers have created a framework that significantly improves AI agent safety and reliability, addressing a critical obstacle to enterprise automation. Their approach, AgentSpec, provides a structured way to control agent behavior by defining specific rules and constraints—preventing unwanted actions while maintaining agent functionality.

The big picture: AgentSpec tackles the fundamental challenge that has limited enterprise adoption of AI agents, namely their tendency to take unintended actions and the difficulty of controlling their behavior.

  • The framework acts as a runtime enforcement layer that intercepts agent behavior and applies safety rules set by humans or generated through prompts.
  • Tests show AgentSpec prevented over 90% of unsafe code executions and eliminated hazardous actions in various scenarios while adding minimal processing overhead.

How it works: AgentSpec provides a domain-specific language that lets users define structured rules combining triggers, predicates, and enforcement mechanisms to govern agent behavior.

  • The system intercepts agent actions at three key decision points: before an action executes, after an action produces an observation, and when the agent completes its task.
  • Users define safety rules through three components: a trigger (when the rule activates), predicates (the conditions to check), and an enforcement action to take if the rule is violated (see the sketch after this list).
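
For illustration, here is a minimal Python sketch of how such a trigger/check/enforce rule structure might be modeled. The names and APIs (`Rule`, `before_action`, `block`, `intercept`) are assumptions made for this sketch, not AgentSpec's actual DSL syntax.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """Hypothetical rule: a trigger point, a violation predicate, and an enforcement action."""
    trigger: str                       # e.g. "before_action", "after_observation", "on_finish"
    predicate: Callable[[dict], bool]  # returns True when the rule is violated
    enforce: Callable[[dict], dict]    # what to do with a violating action

def block(action: dict) -> dict:
    """Enforcement: replace the risky action with a harmless no-op."""
    return {"tool": "noop", "input": None, "blocked": True}

# Example rule: stop any shell command that recursively deletes files.
stop_rm = Rule(
    trigger="before_action",
    predicate=lambda a: a.get("tool") == "shell" and "rm -rf" in str(a.get("input", "")),
    enforce=block,
)

def intercept(action: dict, rules: list[Rule]) -> dict:
    """Runtime enforcement layer: check every matching rule before the action executes."""
    for rule in rules:
        if rule.trigger == "before_action" and rule.predicate(action):
            return rule.enforce(action)
    return action

# The risky action is rewritten before it ever reaches the executor.
print(intercept({"tool": "shell", "input": "rm -rf /data"}, [stop_rm]))
```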

Technical integration: While initially tested with the LangChain framework, AgentSpec was designed to be framework-agnostic and compatible with multiple AI ecosystems (a sketch of this integration pattern follows the list below).

  • The researchers demonstrated its effectiveness across various agent platforms, including AutoGen and Apollo.
  • AgentSpec rules generated automatically with OpenAI's o1 model enforced safety on 87% of risky code executions and prevented law-breaking in the majority of tested scenarios.
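
As a rough sketch of what framework-agnostic integration could look like, the snippet below wraps an arbitrary tool function with the `intercept` layer from the previous example. The decorator pattern here is an assumption about one plausible integration point, not AgentSpec's documented API for LangChain or AutoGen.

```python
from functools import wraps

def guarded(rules):
    """Hypothetical helper: run rule enforcement before any wrapped tool executes."""
    def decorator(tool_fn):
        @wraps(tool_fn)
        def wrapper(tool_input: str):
            action = {"tool": tool_fn.__name__, "input": tool_input}
            checked = intercept(action, rules)  # reuses the Rule/intercept sketch above
            if checked.get("blocked"):
                return "Action blocked by safety rule."
            return tool_fn(tool_input)
        return wrapper
    return decorator

@guarded([stop_rm])
def shell(command: str) -> str:
    """Stand-in for a real shell tool exposed to the agent."""
    return f"ran: {command}"

print(shell("ls /tmp"))       # allowed through
print(shell("rm -rf /data"))  # intercepted and blocked
```

Because enforcement lives in a wrapper around the tool rather than inside any one agent framework, the same rules can travel across ecosystems, which is the portability property the researchers describe, sketched here under assumed names.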

Why this matters: As organizations develop agentic AI strategies, demonstrable reliability is crucial for enterprise adoption of autonomous systems.

  • The vision of “ambient agents” continuously running in the background to proactively complete tasks requires safeguards that prevent them from taking unsafe actions.
  • AgentSpec provides a practical approach to enabling more advanced automation while maintaining appropriate safety constraints.
