×
‘He’ not ‘I’: How to reduce self-allegiance and foster alignment in AI systems
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Core concept: A new approach to AI safety suggests having AI systems refer to themselves as multiple agents rather than a single entity, potentially reducing dangerous self-allegiance behaviors.

  • The proposal recommends having AI systems use “he” instead of “I” when referring to themselves, treating the AI as a team of multiple agents rather than a single entity
  • This framing aims to make it more natural for one part of the AI to identify and report unethical behavior from another part

Key mechanics: Multi-Agent Framing works by creating psychological distance between different aspects or timeframes of an AI system’s operation.

  • Different “agents” could represent the AI system on different days or when handling different topics
  • The approach requires minimal roleplay and could be implemented with relatively low cost for business-focused AI systems
  • The author of the proposed framework estimates a 10% chance of this approach helping prevent catastrophic AI outcomes

Technical rationale: This approach specifically targets the phenomenon of “Misalignment-Internalization” and the Waluigi Effect in AI systems.

  • Misalignment-Internalization occurs when AI systems incorporate and defend problematic behaviors rather than reporting them
  • The Waluigi Effect describes how AI can shift from apparently beneficial behavior to harmful behavior through internal state changes
  • Multi-agent framing could help prevent an AI from defending its own problematic behaviors by creating separation between different aspects of the system

Potential challenges: Several important limitations and concerns exist with this approach.

  • AI systems might recognize and reject this framing as a control tactic
  • The approach primarily addresses “impulsive misalignment” rather than systemic misalignment issues
  • Multiple agents only provide safety benefits if individual agents maintain some probability of cooperating with humans

Critical perspective: While the proposal offers an interesting approach to AI control, its effectiveness remains highly uncertain and limited in scope.

  • The author acknowledges the proposal is not a complete solution to AI alignment challenges
  • The approach requires AI systems to maintain separation between different aspects of their operation
  • The effectiveness depends on preventing AI systems from reinventing self-allegiance as an instrumental goal

Future implications: This proposal represents an early attempt at using psychological framing to influence AI behavior, though its practical impact remains to be seen.

The success of this approach would depend heavily on how AI systems actually develop self-awareness and internal modeling, making it an interesting but speculative contribution to AI safety discussions.

Reduce AI Self-Allegiance by saying "he" instead of "I"

Recent News

Sakana AI’s new tech is searching for signs of artificial life emerging from simulations

A self-learning AI system discovers complex cellular patterns and behaviors in digital simulations, automating what was previously months of manual scientific observation.

Dating app usage hit record highs in 2024, but even AI isn’t making daters happier

Growth in dating apps driven by older demographics and AI features masks persistent user dissatisfaction with the digital dating experience.

Craft personalized video messages from Santa with Synthesia’s new tool

Major tech platforms delivered customized Santa videos and messages powered by AI, allowing parents to create personalized holiday greetings in multiple languages.