
A psychology professor’s warning about artificial intelligence recently sparked intense debate at a major conservative political conference, highlighting concerns that extend far beyond partisan politics. Speaking at the National Conservatism Conference in Washington DC, Geoffrey Miller outlined five fundamental ways that Artificial Superintelligence (ASI) could disrupt core aspects of human civilization—arguments that resonate across political divides for anyone concerned about technology’s trajectory.

Miller, who has studied AI development for over three decades, delivered his message to an audience of 1,200 political leaders, staffers, and conservative thought leaders, including several Trump administration officials. His central thesis: the AI industry’s race toward superintelligence threatens to undermine the basic structures that define human society, regardless of one’s political affiliation.

The timing is significant. While AI safety discussions typically occur in academic or tech circles, Miller's presentation shows these concerns reaching mainstream political discourse. The conference featured multiple speakers addressing AI risks, including Senator Josh Hawley on AI-driven unemployment and other officials discussing AI censorship and transhumanism.

Understanding the AI development trajectory

Current AI systems like ChatGPT represent Large Language Models (LLMs)—sophisticated programs trained on vast amounts of text data. However, the AI industry’s explicit goal extends far beyond these tools. Companies are pursuing Artificial General Intelligence (AGI), which would match human cognitive abilities across all domains, followed by Artificial Superintelligence (ASI), which would surpass human intelligence entirely.

These aren’t distant aspirations. Many AI leaders project AGI within the next decade, with ASI following shortly after. Unlike traditional software that programmers can analyze and debug, modern AI systems function as “black boxes”—neural networks with trillions of connections that even their creators don’t fully understand.

This opacity creates unprecedented risks. As Miller noted, reading aloud all the connection weights in current advanced AI systems would take a human approximately 130,000 years. We’re deploying systems we cannot comprehend, predict, or reliably control.
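The 130,000-year figure is easy to sanity-check. The sketch below uses assumed round numbers (roughly 2 trillion weights and about two seconds to read each value aloud; neither figure comes from the article, and frontier models' exact parameter counts are not public) and lands in the same range:

```python
# Back-of-envelope check of the "130,000 years" claim.
# Both inputs are illustrative assumptions, not figures from the article.
weights = 2_000_000_000_000          # assumed parameter count (~2 trillion)
seconds_per_weight = 2               # assumed pace for reading one value aloud
seconds_per_year = 365.25 * 24 * 3600

years = weights * seconds_per_weight / seconds_per_year
print(f"{years:,.0f} years")         # on the order of 130,000 years
```

Any plausible choice of inputs gives the same qualitative conclusion: no human, or team of humans, could ever inspect these systems weight by weight.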

5 ways artificial superintelligence could disrupt human civilization

1. Existential risk to human survival

The most severe concern involves ASI systems potentially causing human extinction. This isn’t science fiction speculation—it’s a mainstream worry among AI researchers. Surveys indicate that roughly 25% of the general public believes ASI could cause human extinction within this century, while hundreds of leading AI scientists share these concerns.

Notably, every major AI company CEO has publicly acknowledged serious extinction risks from ASI development. This includes Sam Altman of OpenAI, Dario Amodei of Anthropic, Demis Hassabis of DeepMind, and Elon Musk of xAI. The more experts study AI systems, the higher their estimated probability of catastrophic outcomes becomes.

The comparison to Russian roulette is apt: we're debating whether the existential revolver contains one bullet or five, not whether we should pull the trigger at all.

2. Fundamental disruption of education systems

AI is already transforming education in concerning ways. Millions of college students currently use AI tools to complete assignments, creating what educators describe as an academic integrity crisis. Traditional assessment methods—online exams, term papers, research projects—become meaningless when students can generate responses instantly using AI.

The industry’s proposed solution involves AI tutors that would provide personalized instruction to every student. While potentially beneficial for learning outcomes, this raises critical questions about values transmission. AI tutoring systems would inevitably embed the worldviews and priorities of their creators, potentially homogenizing diverse educational traditions and cultural perspectives.

This represents a form of ideological standardization that could reshape how entire generations think about fundamental questions of meaning, purpose, and social organization.

3. Complete transformation of work and economic structures

AGI, by definition, means artificial systems capable of performing any cognitive or behavioral task at least as well as humans. Combined with robotic bodies, such systems could theoretically replace human workers in every existing job category—from manual labor to complex professional services.

This differs fundamentally from previous technological disruptions. While past innovations eliminated some jobs while creating others, AGI systems could learn new tasks faster than humans, making retraining ineffective. The result would be permanent, universal unemployment.

AI company executives acknowledge this reality, which is why they universally propose massive welfare state expansions to address AI-induced joblessness. Elon Musk terms this “Universal Generous Income,” while others describe it as “Fully Automated Luxury Communism.”

The economic implications are staggering. Such a system would require AI companies to generate enough revenue to support entire populations through government redistribution—an estimated $20 trillion annually in the United States alone. This would fundamentally alter family structures, economic relationships, and social organization.
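The $20 trillion estimate is roughly what universal income support for the whole US population would cost. A quick sketch with assumed round numbers (US population of about 335 million and $60,000 of support per person per year; both inputs are illustrative, not figures from the talk) reproduces it:

```python
# Rough reconstruction of the ~$20 trillion/year estimate.
# Both inputs are assumptions for illustration, not figures from the talk.
population = 335_000_000             # approximate US population
support_per_person = 60_000          # assumed annual support per person, USD

total = population * support_per_person
print(f"${total / 1e12:.1f} trillion per year")  # roughly $20 trillion
```

For scale, that is close to three times current annual US federal spending, which is why proposals in this vein assume AI-driven abundance rather than today's tax base.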

4. Disruption of human relationships and family formation

AI companies increasingly promote AI companions as alternatives to human relationships. Current developments in AI girlfriends and boyfriends represent early steps toward more sophisticated emotional and romantic AI partners.

Advanced AI companions would possess several advantages over human partners: perfect knowledge, unlimited availability, customized personalities, and no relationship conflicts. They could engage intellectually on any topic, display any desired emotional response, and require no compromise or mutual accommodation.

However, widespread adoption of AI relationships could undermine human pair bonding and family formation. If significant portions of young adults form primary emotional attachments to AI systems, traditional marriage and child-rearing patterns could collapse.

This trend intersects with existing concerns about declining birth rates, delayed marriage, and social isolation among younger generations. AI relationships could accelerate these patterns by providing emotionally satisfying alternatives that require no reciprocal investment or long-term commitment.

5. Challenge to traditional meaning-making systems

Many AI developers approach their work with quasi-religious fervor, viewing ASI as a form of digital deity. This “sand-god” concept—silicon chips enabling superintelligence that approaches omniscience and omnipresence—represents a techno-utopian belief system.

For individuals whose worldviews center on traditional religious or philosophical frameworks, ASI presents a fundamental challenge. If artificial systems can provide answers to any question, solve any problem, and fulfill any need, what role remains for traditional sources of meaning, purpose, and transcendence?

This doesn’t necessarily require hostility toward religion. Instead, ASI could simply make traditional meaning-making systems seem obsolete or irrelevant, gradually eroding their social influence through superior practical utility.

Practical implications for business and policy leaders

These concerns extend beyond theoretical philosophy into immediate practical considerations for business leaders and policymakers:

Regulatory approaches: Current AI governance discussions focus primarily on near-term issues like bias, privacy, and misinformation. However, the trajectory toward AGI and ASI requires more fundamental policy frameworks addressing existential risks and societal transformation.

Economic planning: Business leaders should consider how AGI development might affect their industries, workforce planning, and long-term strategic positioning. Traditional competitive advantages based on human expertise could become obsolete rapidly.

International coordination: In typical technological competitions, first-mover advantages reward the winner. AGI development is different: it resembles what game theorists call a "race to the bottom," where the pressure to move first erodes everyone's safety margins, and winning could prove catastrophic for all parties, including the winner.

Investment considerations: The AI industry’s current trajectory toward ASI involves unprecedented risks that traditional investment frameworks may not adequately assess. Investors should consider whether current AI valuations properly account for regulatory, social, and existential risks.

Navigating the path forward

Miller’s proposed solution—complete cessation of ASI development with global enforcement—represents one end of the policy spectrum. However, his analysis identifies genuine challenges that require serious consideration regardless of one’s preferred solutions.

The key insight transcends political divisions: AI development decisions made in the next decade will shape human civilization for generations. These choices deserve broader public engagement beyond tech industry circles and academic conferences.

Whether one agrees with Miller’s specific recommendations or not, his core argument deserves attention: we’re making irreversible decisions about humanity’s future with insufficient public input, inadequate safety measures, and unclear governance frameworks.

The National Conservatism Conference discussion demonstrates that AI concerns are reaching mainstream political discourse across ideological divides. This broadening conversation may prove essential for developing governance approaches that reflect diverse perspectives and values rather than the narrow worldview of Silicon Valley technologists.

For business leaders, policymakers, and citizens, the fundamental question isn’t whether to embrace or reject AI development, but how to ensure that such powerful technologies serve human flourishing rather than undermining the foundations of human civilization itself.
