A psychology professor’s warning about artificial intelligence recently sparked intense debate at a major conservative political conference. Speaking at the National Conservatism Conference in Washington, DC, Geoffrey Miller outlined five fundamental ways that Artificial Superintelligence (ASI) could disrupt core aspects of human civilization, arguments that resonate well beyond partisan politics.
Miller, who has studied AI development for over three decades, delivered his message to an audience of 1,200 politicians, staffers, and conservative thought leaders, including several Trump administration officials. His central thesis: the AI industry’s race toward superintelligence threatens to undermine the basic structures that define human society, regardless of one’s political affiliation.
The timing is significant. While AI safety discussions typically occur in academic or tech circles, Miller’s presentation shows these concerns reaching mainstream political discourse. The conference featured multiple speakers addressing AI risks, including Senator Josh Hawley on AI-driven unemployment and other officials discussing AI censorship and transhumanism.
Current AI systems like ChatGPT represent Large Language Models (LLMs)—sophisticated programs trained on vast amounts of text data. However, the AI industry’s explicit goal extends far beyond these tools. Companies are pursuing Artificial General Intelligence (AGI), which would match human cognitive abilities across all domains, followed by Artificial Superintelligence (ASI), which would surpass human intelligence entirely.
These aren’t distant aspirations. Many AI leaders project AGI within the next decade, with ASI following shortly after. Unlike traditional software that programmers can analyze and debug, modern AI systems function as “black boxes”—neural networks with trillions of connections that even their creators don’t fully understand.
This opacity creates unprecedented risks. As Miller noted, reading aloud all the connection weights in current advanced AI systems would take a human approximately 130,000 years. We’re deploying systems we cannot comprehend, predict, or reliably control.
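The scale behind that figure is easy to check with back-of-envelope arithmetic. Here is a minimal sketch, assuming a frontier model with roughly four trillion connection weights and a reading pace of one weight per second; both inputs are illustrative assumptions, not published specifications:

```python
# Back-of-envelope check on the "130,000 years" figure.
# Both inputs are illustrative assumptions, not published model specs.
num_weights = 4e12   # assume ~4 trillion connection weights
reading_rate = 1.0   # assume one weight read aloud per second

seconds_per_year = 365.25 * 24 * 3600  # ~31.6 million seconds
years_to_read = num_weights / reading_rate / seconds_per_year

print(f"{years_to_read:,.0f} years")   # ~127,000 years
```

Under those assumptions the reading alone spans on the order of 130,000 years, which is the point of the illustration: no human can audit these systems directly.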
The most severe concern involves ASI systems potentially causing human extinction. This isn’t science fiction speculation—it’s a mainstream worry among AI researchers. Surveys indicate that roughly 25% of the general public believes ASI could cause human extinction within this century, while hundreds of leading AI scientists share these concerns.
Notably, every major AI company CEO has publicly acknowledged serious extinction risks from ASI development. This includes Sam Altman of OpenAI, Dario Amodei of Anthropic, Demis Hassabis of DeepMind, and Elon Musk of xAI. The more experts study AI systems, the higher their estimated probability of catastrophic outcomes becomes.
Miller’s Russian-roulette comparison is apt: the debate is over whether the existential revolver holds one bullet or five, not whether anyone should pull the trigger at all.
AI is already transforming education in concerning ways. Millions of college students currently use AI tools to complete assignments, creating what educators describe as an academic integrity crisis. Traditional assessment methods—online exams, term papers, research projects—become meaningless when students can generate responses instantly using AI.
The industry’s proposed solution involves AI tutors that would provide personalized instruction to every student. While potentially beneficial for learning outcomes, this raises critical questions about values transmission. AI tutoring systems would inevitably embed the worldviews and priorities of their creators, potentially homogenizing diverse educational traditions and cultural perspectives.
This represents a form of ideological standardization that could reshape how entire generations think about fundamental questions of meaning, purpose, and social organization.
AGI, by definition, means artificial systems capable of performing any cognitive or behavioral task at least as well as humans. Combined with robotic bodies, such systems could theoretically replace human workers in every existing job category—from manual labor to complex professional services.
This differs fundamentally from previous technological disruptions. Past innovations eliminated some jobs while creating others, but AGI systems could learn any new task faster than displaced workers could retrain for it. The result would be permanent, universal unemployment.
AI company executives acknowledge this reality, which is why they universally propose massive welfare state expansions to address AI-induced joblessness. Elon Musk terms this “Universal Generous Income,” while others describe it as “Fully Automated Luxury Communism.”
The economic implications are staggering. Such a system would require AI companies to generate enough revenue to support entire populations through government redistribution, an estimated $20 trillion annually in the United States alone. This would fundamentally alter family structures, economic relationships, and social organization.
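The $20 trillion figure is roughly what universal support at a middle-class standard would cost. A minimal sketch, assuming a US population of about 335 million and a benefit of $60,000 per person per year; both are round figures chosen for illustration, not policy proposals:

```python
# Rough cost of a universal AI-funded income in the US.
# Population and benefit level are round illustrative assumptions.
us_population = 335e6    # ~335 million people
annual_benefit = 60_000  # assume $60k per person per year

total_cost = us_population * annual_benefit
print(f"${total_cost / 1e12:.1f} trillion per year")  # ~$20.1 trillion
```

For scale, that is roughly three times current total annual US federal spending.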
AI companies increasingly promote AI companions as alternatives to human relationships. Current developments in AI girlfriends and boyfriends represent early steps toward more sophisticated emotional and romantic AI partners.
Advanced AI companions would possess several advantages over human partners: encyclopedic knowledge, unlimited availability, customizable personalities, and no relationship conflicts. They could engage intellectually on any topic, display any desired emotional response, and require no compromise or mutual accommodation.
However, widespread adoption of AI relationships could undermine human pair bonding and family formation. If significant portions of young adults form primary emotional attachments to AI systems, traditional marriage and child-rearing patterns could collapse.
This trend intersects with existing concerns about declining birth rates, delayed marriage, and social isolation among younger generations. AI relationships could accelerate these patterns by providing emotionally satisfying alternatives that require no reciprocal investment or long-term commitment.
Many AI developers approach their work with quasi-religious fervor, viewing ASI as a form of digital deity. This “sand-god” concept—silicon chips enabling superintelligence that approaches omniscience and omnipresence—represents a techno-utopian belief system.
For individuals whose worldviews center on traditional religious or philosophical frameworks, ASI presents a fundamental challenge. If artificial systems can provide answers to any question, solve any problem, and fulfill any need, what role remains for traditional sources of meaning, purpose, and transcendence?
This doesn’t necessarily require hostility toward religion. Instead, ASI could simply make traditional meaning-making systems seem obsolete or irrelevant, gradually eroding their social influence through superior practical utility.
These concerns extend beyond theoretical philosophy into immediate practical considerations for business leaders and policymakers:
Regulatory approaches: Current AI governance discussions focus primarily on near-term issues like bias, privacy, and misinformation. However, the trajectory toward AGI and ASI requires more fundamental policy frameworks addressing existential risks and societal transformation.
Economic planning: Business leaders should consider how AGI development might affect their industries, workforce planning, and long-term strategic positioning. Traditional competitive advantages based on human expertise could become obsolete rapidly.
International coordination: Unlike typical technological competitions, where first-mover advantages reward the winner, AGI development has the structure of what game theorists call a “race to the bottom”: each actor’s incentive to move first pushes everyone toward an outcome that could prove catastrophic for all, including the winner. The toy payoff matrix after this list makes the dynamic concrete.
Investment considerations: The AI industry’s current trajectory toward ASI involves unprecedented risks that traditional investment frameworks may not adequately assess. Investors should consider whether current AI valuations properly account for regulatory, social, and existential risks.
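The racing dynamic can be illustrated with a toy two-player game. This is a minimal sketch, not Miller’s own model; the payoffs are invented for illustration and chosen so that racing yields a private edge whatever the rival does, even though mutual racing is the worst collective outcome:

```python
# Toy payoff matrix for an AGI race between two labs.
# Payoffs are illustrative assumptions, not empirical estimates.
# Strategies: "pause" (develop cautiously) or "race" (move fast).
# payoffs[(a, b)] = (payoff to A, payoff to B) when A plays a, B plays b.
payoffs = {
    ("pause", "pause"): (3, 3),  # shared, safer progress
    ("pause", "race"):  (0, 4),  # rival wins the race
    ("race",  "pause"): (4, 0),  # you win the race
    ("race",  "race"):  (1, 1),  # reckless race: worst joint outcome
}

# Racing strictly dominates pausing for player A, whatever B does:
for b_choice in ("pause", "race"):
    pause_pay = payoffs[("pause", b_choice)][0]
    race_pay = payoffs[("race", b_choice)][0]
    assert race_pay > pause_pay, "racing should dominate in this toy model"
    print(f"If rival plays {b_choice}: pause={pause_pay}, race={race_pay}")
```

Each player’s individually rational choice is to race, so both land at (1, 1), the worst joint outcome. This is the structure of a prisoner’s dilemma, which is why unilateral restraint is unstable without coordinated enforcement.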
Miller’s proposed solution—complete cessation of ASI development with global enforcement—represents one end of the policy spectrum. However, his analysis identifies genuine challenges that require serious consideration regardless of one’s preferred solutions.
The key insight transcends political divisions: AI development decisions made in the next decade will shape human civilization for generations. These choices deserve broader public engagement beyond tech industry circles and academic conferences.
Whether one agrees with Miller’s specific recommendations or not, his core argument deserves attention: we’re making irreversible decisions about humanity’s future with insufficient public input, inadequate safety measures, and unclear governance frameworks.
The National Conservatism Conference discussion demonstrates that AI concerns are reaching mainstream political discourse across ideological divides. This broadening conversation may prove essential for developing governance approaches that reflect diverse perspectives and values rather than the narrow worldview of Silicon Valley technologists.
For business leaders, policymakers, and citizens, the fundamental question isn’t whether to embrace or reject AI development, but how to ensure that such powerful technologies serve human flourishing rather than undermining the foundations of human civilization itself.