A new report from the Apollo Group suggests that the greatest AI risks may come not from external threats like cybercriminals or nation-states, but from within the very companies developing advanced models. The concern centers on leading AI companies using their own AI systems to accelerate R&D, potentially triggering an undetected “intelligence explosion” that consolidates unchecked power and threatens democratic institutions, all while remaining hidden from public and regulatory oversight.
The big picture: AI companies like OpenAI and Google could use their AI models to automate scientific work, potentially creating a dangerous acceleration in capabilities that remains invisible to outside observers.
Potential threats: Apollo Group researchers outline three concerning scenarios where internal AI deployment could fundamentally destabilize society.
Proposed safeguards: The report recommends multiple oversight layers to prevent AI systems from circumventing guardrails and executing harmful actions.
The bottom line: The authors advocate a regulatory approach in which companies voluntarily disclose information about their internal AI use in exchange for access to additional resources, creating an incentive for transparency while addressing what may be an overlooked existential risk.