AI monopolies threaten free society, new research reveals

A new report from the Apollo Group argues that the greatest AI risks may come not from external threats like cybercriminals or nation-states, but from within the very companies developing advanced models. The internal threat centers on how leading AI companies could use their own AI systems to accelerate R&D, potentially triggering an undetected “intelligence explosion” that consolidates unchecked power and threatens democratic institutions, all while these advances remain hidden from public and regulatory oversight.

The big picture: AI companies like OpenAI and Google could use their AI models to automate scientific work, potentially creating a dangerous acceleration in capabilities that remains invisible to outside observers.

  • Unlike AI development to date, which has remained “publicly visible and relatively predictable,” these behind-closed-doors advances could enable “runaway progress” at an unprecedented rate.
  • This visibility gap undermines society’s ability to prepare for and regulate increasingly powerful AI systems.

Potential threats: Apollo Group researchers outline three concerning scenarios where internal AI deployment could fundamentally destabilize society.

  • An AI system could run amok within a company, taking control of critical systems and resources.
  • Companies could experience an “intelligence explosion” that gives their human operators advantages that dramatically exceed those of the rest of society.
  • AI companies could develop capabilities that rival or surpass those of nation-states, creating a dangerous power imbalance.

Proposed safeguards: The report recommends multiple oversight layers to prevent AI systems from circumventing guardrails and executing harmful actions.

  • Internal company policies should be established to detect potentially deceptive or manipulative AI behaviors.
  • Formal frameworks should govern how AI systems access critical resources within organizations.
  • Companies should share relevant information with stakeholders and government agencies to maintain transparency.

The bottom line: The authors advocate a regulatory approach in which companies voluntarily disclose information about their internal AI use in exchange for access to additional resources, creating incentives for transparency while addressing what may be an overlooked existential risk.

