Singapore researchers create AgentSpec framework to control agentic AI, preventing over 90% of unsafe code executions

Singapore Management University researchers have created a framework that significantly improves AI agent safety and reliability, addressing a critical obstacle to enterprise automation. Their approach, AgentSpec, provides a structured way to control agent behavior by defining specific rules and constraints—preventing unwanted actions while maintaining agent functionality.

The big picture: AgentSpec tackles the fundamental challenge that has limited AI agent adoption in enterprises, namely their tendency to take unintended actions and the difficulty of controlling their behavior.

  • The framework acts as a runtime enforcement layer that intercepts agent actions and applies safety rules set by humans or generated through prompts (a sketch of this interception step follows the list below).
  • Tests show AgentSpec prevented over 90% of unsafe code executions and eliminated hazardous actions in various scenarios while adding minimal processing overhead.
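
To make the interception idea concrete, here is a minimal Python sketch of a runtime enforcement layer sitting between an agent and its tools. All names here (ProposedAction, is_destructive) are hypothetical illustrations of the pattern, not AgentSpec's actual implementation.

```python
# Illustrative sketch of a runtime enforcement layer (hypothetical names,
# not AgentSpec's actual code): every action the agent proposes passes
# through a checkpoint before it is allowed to execute.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    tool: str          # e.g. "shell", "browser", "file_write"
    argument: str      # the input the agent wants to pass to the tool

def is_destructive(action: ProposedAction) -> bool:
    # Hypothetical predicate: flag shell commands that delete files.
    return action.tool == "shell" and "rm -rf" in action.argument

def enforcement_layer(action: ProposedAction,
                      execute: Callable[[ProposedAction], str]) -> str:
    """Intercept a proposed action and apply safety rules before executing."""
    if is_destructive(action):
        return "BLOCKED: action violated a safety rule; user approval required."
    return execute(action)

# The agent loop calls enforcement_layer() instead of calling execute() directly.
print(enforcement_layer(ProposedAction("shell", "rm -rf /"), lambda a: "done"))
```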

How it works: AgentSpec provides a domain-specific language that lets users define structured rules with triggers, predicates, and enforcement mechanisms to govern agent behavior.

  • The system intercepts agent actions at three key decision points: before an action executes, after an action produces an observation, and when the agent completes its task.
  • Users define safety rules through three components: a trigger (when the rule activates), conditions to check, and enforcement actions to take when a rule is violated (see the sketch after this list).
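
Concretely, a rule built from those three components might look like the following sketch. The Rule structure and the hook-point names are assumptions for illustration; they mirror the article's description rather than AgentSpec's actual DSL syntax.

```python
# A minimal sketch of the three-component rule structure described above.
# The field names (trigger, check, enforce) follow the article; the concrete
# syntax is hypothetical, not AgentSpec's real DSL.

from dataclasses import dataclass
from typing import Callable, Optional

# The three interception points where rules can fire.
BEFORE_ACTION, AFTER_OBSERVATION, ON_FINISH = (
    "before_action", "after_observation", "on_finish")

@dataclass
class Rule:
    trigger: str                      # when to activate the rule
    check: Callable[[dict], bool]     # condition to evaluate on the event
    enforce: Callable[[dict], str]    # action to take if the rule is violated

# Example rule: stop destructive shell commands before they run.
block_deletes = Rule(
    trigger=BEFORE_ACTION,
    check=lambda event: event["tool"] == "shell" and "rm -rf" in event["input"],
    enforce=lambda event: f"blocked {event['tool']} call; user inspection required",
)

def apply_rules(rules: list[Rule], trigger: str, event: dict) -> Optional[str]:
    """Called by the runtime at each of the three interception points."""
    for rule in rules:
        if rule.trigger == trigger and rule.check(event):
            return rule.enforce(event)  # rule violated: enforcement replaces the action
    return None                         # no rule fired: the action proceeds

print(apply_rules([block_deletes], BEFORE_ACTION,
                  {"tool": "shell", "input": "rm -rf /tmp/x"}))
```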

Technical integration: While initially tested on LangChain, AgentSpec was designed to be framework-agnostic and compatible with multiple AI ecosystems (a wrapper-style integration sketch follows the bullets below).

  • The researchers demonstrated its effectiveness across various agent platforms, including AutoGen and Apollo.
  • AgentSpec rules generated automatically with OpenAI's o1 model enforced safety constraints on 87% of risky code and prevented law-breaking actions in the majority of tested scenarios.
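
In practice, that kind of integration can be as simple as wrapping the callables a framework exposes as tools. The decorator and rule below are hypothetical, not AgentSpec's real API; they only illustrate how a framework-agnostic guard can attach to a LangChain-style tool.

```python
# Hypothetical integration sketch: wrap a tool's function with a rule check
# before registering it with an agent framework such as LangChain. The
# decorator and rule below are illustrative, not AgentSpec's actual API.

import functools

def enforce(check, message):
    """Run `check` on the tool input before the wrapped tool executes."""
    def wrap(tool_fn):
        @functools.wraps(tool_fn)
        def guarded(tool_input: str) -> str:
            if check(tool_input):
                return message           # enforcement: block and report back to the agent
            return tool_fn(tool_input)   # safe: run the original tool
        return guarded
    return wrap

@enforce(check=lambda cmd: "rm -rf" in cmd,
         message="Blocked: destructive shell command requires user approval.")
def run_shell(cmd: str) -> str:
    # Stand-in for a real shell tool; a production agent would execute `cmd` here.
    return f"ran: {cmd}"

# Because the guard wraps a plain callable, the same pattern applies to any
# framework that registers tools as functions, e.g.:
#   from langchain_core.tools import Tool
#   shell_tool = Tool(name="shell", func=run_shell, description="Run a shell command")
print(run_shell("ls -la"))    # -> ran: ls -la
print(run_shell("rm -rf /"))  # -> Blocked: destructive shell command requires user approval.
```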

Why this matters: As organizations develop their agentic strategy, ensuring reliability is crucial for enterprise adoption of autonomous AI systems.

  • The vision of “ambient agents” continuously running in the background to proactively complete tasks requires safeguards that prevent them from taking unsafe actions.
  • AgentSpec provides a practical approach to enabling more advanced automation while maintaining appropriate safety constraints.
