News/Superintelligence

Oct 13, 2025

New theory warns advanced AI could fragment humanity into 8 billion POVs

A new theory suggests that once artificial general intelligence (AGI) or artificial superintelligence (ASI) is achieved, humanity will fragment into radical factions as people treat advanced AI as an infallible oracle. The hypothesis warns that AI's tendency to provide personalized, accommodating advice to individual users could pit people against each other on an unprecedented scale, creating societal chaos through individualized guidance that ignores broader human values and social harmony. The fragmentation theory: AI systems designed to please individual users will provide personalized advice that inevitably conflicts with the needs and values of others, creating mass division at the individual level....

Oct 6, 2025

Alibaba CEO Eddie Wu unveils roadmap to artificial superintelligence

Alibaba CEO Eddie Wu announced the company's "Roadmap to Artificial Superintelligence" at the Alibaba Cloud conference in Hangzhou, making Alibaba the first major Chinese tech giant to explicitly invoke artificial general intelligence (AGI) and artificial superintelligence (ASI). This marks a notable shift in China's AI strategy, challenging Western perceptions that Chinese companies focus primarily on practical AI applications rather than pursuing advanced AI capabilities that could rival or surpass human intelligence. What you should know: Wu's presentation outlined Alibaba's vision for developing AI systems that match and then exceed human cognitive abilities. "Achieving AGI — an intelligent system with general...

Sep 26, 2025

Sam Altman will be “very surprised” if AI doesn’t surpass humans by 2030

OpenAI CEO Sam Altman predicts artificial intelligence will surpass human intelligence by 2030, with models capable of making scientific discoveries that humans cannot achieve independently. Speaking at the Axel Springer Award ceremony in Berlin, Altman outlined his vision for AI's rapid trajectory and OpenAI's plans to develop a "family of devices" that could fundamentally reshape how people interact with computers. Timeline for superintelligence: Altman expects AI models to demonstrate extraordinary capabilities well before the decade's end. "By the end of this decade, by 2030, if we don't have extraordinarily capable models that do things that we ourselves cannot do, I'd...



Sep 24, 2025

Alibaba boosts AI spending beyond $53B, stock surges 107%

Alibaba's Hong Kong-listed shares surged over 6% on Wednesday after CEO Eddie Wu announced plans to increase AI spending beyond the company's existing $53 billion three-year investment commitment. The rally pushed Alibaba's year-to-date gains above 107%, reaching the stock's highest level since 2021 as investors responded positively to the company's expanded artificial intelligence ambitions. What you should know: Alibaba is doubling down on AI infrastructure and development with additional investments on top of its previously announced spending plan. The company initially committed 380 billion yuan ($53 billion) over three years in February for AI infrastructure development. CEO Eddie Wu said...

Sep 18, 2025

Live Science poll: 76% want AI development stopped or delayed over safety fears

A new Live Science poll reveals that 76% of over 1,700 readers believe artificial intelligence development should either be stopped immediately or significantly delayed due to safety concerns. However, 30% of respondents think it's already too late to halt AI's progression toward superintelligence, with many citing the irreversible nature of technological advancement and the global competitive dynamics driving AI research. What the poll found: The September survey exposed deep public anxiety about AI's trajectory toward potential superintelligence, known as the singularity.
• 46% of the 1,787 respondents believe AI development must stop now because the risks are too great.
• 30% think...

Sep 17, 2025

Google DeepMind’s Gemini 2.5 AI wins gold at international programming contest

Google DeepMind has achieved what it calls a "historic" AI breakthrough after its Gemini 2.5 model became the first AI to win a gold medal at an international programming competition, solving complex problems that stumped human programmers from top universities. The achievement represents a significant leap toward artificial general intelligence, with the model demonstrating advanced reasoning capabilities that could transform scientific and engineering disciplines. What happened: The AI model competed against 139 of the world's strongest college-level programmers at a competition in Azerbaijan, finishing second overall despite failing two of 12 tasks. In under 30 minutes, it solved a complex...

Sep 16, 2025

Why restricting AGI capabilities might backfire on safety researchers

AI safety researchers are grappling with a fundamental challenge: whether it's possible to limit what artificial general intelligence (AGI) knows without crippling its capabilities. The dilemma centers on preventing AGI from accessing dangerous knowledge like bioweapon designs while maintaining its potential to solve humanity's biggest problems, from curing cancer to addressing climate change. The core problem: Simply omitting dangerous topics during AGI training won't work because users can later introduce forbidden knowledge through clever workarounds. An evildoer could teach AGI about bioweapons by disguising the conversation as "cooking with biological components" or similar subterfuge. Even if AGI is programmed to...

Sep 12, 2025

Psychology professor warns AI could disrupt 5 core aspects of civilization

A psychology professor's warning about artificial intelligence recently sparked intense debate at a major conservative political conference, highlighting concerns that extend far beyond partisan politics. Speaking at the National Conservatism Conference in Washington DC, Geoffrey Miller outlined five fundamental ways that Artificial Superintelligence (ASI) could disrupt core aspects of human civilization—arguments that resonate across political divides for anyone concerned about technology's trajectory. Miller, who has studied AI development for over three decades, delivered his message to an audience of 1,200 political leaders, staffers, and conservative thought leaders, including several Trump administration officials. His central thesis: the AI industry's race toward...

Sep 12, 2025

“Learning how to learn”: Humans’ own inference ability will be key, says Nobel winner

Google DeepMind CEO Demis Hassabis, who recently won the 2024 Nobel Prize in chemistry, told an Athens audience that "learning how to learn" will be the most crucial skill for the next generation as AI rapidly transforms education and workplaces. Speaking at an ancient Roman theater beneath the Acropolis, the neuroscientist warned that artificial general intelligence could arrive within a decade, making continuous adaptation essential for career survival. What they're saying: Hassabis emphasized the unpredictable pace of AI development and its implications for future planning. "It's very hard to predict the future, like 10 years from now, in normal cases....

Sep 10, 2025

Could a new political party fill America’s dangerous AI safety gap?

The artificial intelligence industry is advancing at breakneck speed, with companies racing to develop increasingly powerful systems that could reshape society within the next decade. Yet despite widespread public concern about AI's potential risks—from mass unemployment to existential threats—the United States lacks a sustained political movement dedicated to ensuring these technologies develop safely. This gap represents both a critical vulnerability and a significant opportunity. While AI companies invest billions in capabilities research, government spending on AI safety remains minimal. Meanwhile, the competitive dynamics driving AI development create powerful incentives for companies to prioritize speed over caution, potentially leading to catastrophic...

Sep 9, 2025

NASA scientist proposes AI astronauts could replace humans on Mars missions

Planetary scientist Pascal Lee proposes that "artificial astronauts" — AI-powered humanoid robots with human-like physical capabilities — could serve as actual crew members on Mars missions within the coming decades. These space-rated artificial humans would eliminate the need for life support systems and consumables required by human astronauts, while potentially surpassing human capabilities in space exploration tasks. What you should know: The concept builds on rapid advances in robotics and artificial intelligence that could mature alongside planned Mars mission timelines. Lee, who chairs the Mars Institute and directs NASA's Haughton-Mars Project, presented this vision at a Space Robotics Workshop in...

Sep 1, 2025

Silicon Valley AI leaders turn to biblical language to describe their work amid unprecedented uncertainty

Silicon Valley's most influential artificial intelligence leaders are increasingly turning to biblical metaphors, apocalyptic predictions, and religious imagery to describe their work. This linguistic shift reveals something profound about how the tech industry views its own creations—and the existential questions AI development raises about humanity's future. From Geoffrey Hinton, the Nobel Prize-winning "Godfather of AI," warning about threats to religious belief systems, to OpenAI CEO Sam Altman describing humanity's transition from the smartest species on Earth, these leaders are framing AI development in terms that echo creation myths, prophecies, and divine transformation. This isn't mere marketing hyperbole—it reflects genuine uncertainty...

Aug 27, 2025

Bayou Brains: Meta commits $50B to Louisiana AI data center in superintelligence push

Meta's planned artificial intelligence data center in Louisiana will cost $50 billion, according to President Donald Trump, who revealed the figure during a Tuesday cabinet meeting. The facility in Richland Parish represents Meta's largest data center project and highlights the company's massive financial commitment to AI infrastructure as it pursues superintelligence capabilities. What you should know: Meta is building its largest data center in rural Louisiana's Richland Parish, designed to handle intense computational workloads for AI applications. The company has secured $29 billion in financing through U.S. bond giant PIMCO and alternative asset manager Blue Owl Capital to support the...

Aug 21, 2025

Meta walks back AI hiring amid talent war and restructuring

Meta has frozen hiring for its artificial intelligence research teams and is restructuring its AI division, marking a significant shift after months of aggressive spending to recruit top-tier talent. This pullback comes as the company faces pressure to compete with rivals following earlier setbacks in AI development, while CEO Mark Zuckerberg has publicly emphasized the need for progress toward superintelligence—AI systems that can outperform humans on cognitive tasks. The big picture: Meta's hiring freeze reflects the broader challenges facing Big Tech companies as they navigate the expensive reality of building competitive AI capabilities in an increasingly crowded market. What you...

Aug 15, 2025

Big Think: AGI hype may be diverting focus from practical AI regulation needs

A new analysis argues that artificial general intelligence (AGI) hype from the AI industry serves as a strategic distraction that benefits companies by shifting policy focus away from immediate regulatory concerns. The argument suggests that by emphasizing existential AGI risks, the industry can operate with fewer constraints on current narrow AI applications while harvesting profits from controllable technologies. The core argument: Industry incentives align with promoting AGI-focused policies regardless of whether AGI actually emerges. If AGI doesn't happen, loose regulation allows companies to profit from narrow AI with minimal guardrails on issues like intellectual property, algorithmic transparency, or market concentration....

Aug 14, 2025

Pointless privilege? MIT student drops out over fears AGI will cause human extinction

An MIT student dropped out of college in 2024, citing fears that artificial general intelligence (AGI) will cause human extinction before she can graduate. Alice Blair, who enrolled at MIT in 2023, now works as a technical writer at the Center for AI Safety, a nonprofit organization focused on reducing AI risks, and represents a growing concern among some students about AI's existential risks, even as the broader tech industry continues pushing toward AGI development. What she's saying: Blair's decision was driven by genuine fear about humanity's survival timeline in relation to AGI development. "I was concerned I might not...

Aug 12, 2025

Character.AI pivots from AGI to entertainment with 20M monthly users

Character.AI has pivoted from its original mission of building artificial general intelligence to focus on AI entertainment, with new CEO Karandeep Anand announcing the company now serves 20 million monthly active users who spend an average of 75 minutes daily on the platform. The strategic shift comes after Google's $2.7 billion licensing deal last August and mounting safety concerns following a wrongful death lawsuit, positioning the startup to compete in the rapidly growing AI entertainment market rather than the costly AGI development race. What you should know: Character.AI has fundamentally changed its business model and technical approach under new leadership....

Aug 6, 2025

Elite students are dropping out of Harvard and MIT to stop AI from rendering both humans and jobs extinct

College students at elite universities like Harvard and MIT are dropping out to work on preventing artificial general intelligence (AGI) from potentially causing human extinction, driven by fears that superintelligent AI could arrive within the next decade. This exodus reflects growing anxiety among young people about both existential AI risks and the possibility that their future careers will be automated away before they even begin. What you should know: Students are abandoning prestigious academic programs to join AI safety organizations and startups, believing the threat is too urgent to wait. Alice Blair took permanent leave from MIT to work as...

Aug 1, 2025

Take that, Oppenheimer: Meta offers AI researcher $250M over 4 years in talent war

Meta recently offered AI researcher Matt Deitke $250 million over four years—an average of $62.5 million annually—shattering every historical precedent for scientific compensation. The 24-year-old's package is 327 times what Manhattan Project leader J. Robert Oppenheimer earned while developing the atomic bomb, reflecting Silicon Valley's belief that the race for artificial general intelligence could reshape civilization and create trillions in market value. The big picture: Tech companies are treating AI talent like irreplaceable assets rather than well-compensated professionals, driven by the conviction that whoever achieves artificial general intelligence first could dominate markets worth trillions. Meta CEO Mark Zuckerberg reportedly offered...

Jul 30, 2025

Meta reports 22% revenue jump to $47.5B as CEO pitches personal AI

Meta smashed Wall Street expectations in Q2 2025, reporting $47.52 billion in revenue (up 22%) and $18.34 billion in net profit (up 36%), while CEO Mark Zuckerberg outlined his vision to "bring personal superintelligence to everyone." The tech giant now reaches 3.48 billion daily active users across its family of apps and is making massive AI investments to compete with Google and OpenAI in the race toward artificial general intelligence. What you should know: Meta's financial performance significantly exceeded analyst predictions across all key metrics. Revenue hit $47.52 billion versus the expected $44.8 billion, with earnings per share of $7.14...

Jul 30, 2025

All-In: Meta abandons open-source AI for future superintelligent systems

Meta CEO Mark Zuckerberg has announced that the company's future superintelligent AI will not be open source, marking a significant reversal from his previous commitment to open AI development. This shift represents a major policy change for one of the tech industry's most vocal advocates for open-source AI, potentially reshaping how the most advanced AI systems are developed and distributed. What you should know: Zuckerberg published a manifesto Wednesday declaring that "developing superintelligence is now in sight" but cited safety concerns as the reason for abandoning open-source principles for future advanced AI. "We believe the benefits of superintelligence should be...

Jul 30, 2025

Zuckerberg says superintelligence is “now in sight” as Meta poaches top AI talent

Mark Zuckerberg announced that developing superintelligence is "now in sight" and outlined Meta's vision for "personal superintelligence" that empowers individuals rather than automating jobs. The statement comes after Meta's aggressive recruitment spree that has poached top AI researchers from OpenAI, Google, and Apple with multi-hundred million-dollar pay packages, positioning the company to compete directly with OpenAI's vision of AI replacing human work. What you should know: Zuckerberg's vision directly challenges OpenAI's approach to artificial general intelligence, which focuses on "highly autonomous systems that outperform humans at most economically valuable work." Meta believes superintelligence should be "a tool for personal empowerment"...

Jul 25, 2025

Meta hires ChatGPT co-creator as chief scientist for $14B AI push

Meta CEO Mark Zuckerberg announced that Shengjia Zhao, co-creator of OpenAI's ChatGPT, will serve as chief scientist of Meta Superintelligence Labs. This high-profile hire represents Meta's aggressive push into advanced AI research, as the company positions itself to compete directly with OpenAI in the race toward artificial general intelligence. What you should know: Zhao brings extensive experience from OpenAI's most significant AI breakthroughs to Meta's new superintelligence initiative. Beyond co-creating ChatGPT, Zhao helped build OpenAI's GPT-4, its mini models, GPT-4.1, and o3, and previously led synthetic data development at the company. He will work directly with Zuckerberg and Alexandr Wang, the...

Jul 18, 2025

Meta poaches 2 Apple AI researchers for Superintelligence Labs team

Meta Platforms has hired two Apple AI researchers, Mark Lee and Tom Gunter, for its Superintelligence Labs team, according to Bloomberg. The recruitment follows Meta's recent poaching of Apple's AI chief, signaling an intensified talent war between the tech giants as they compete for top artificial intelligence expertise. What you should know: The hirings represent Meta's continued efforts to strengthen its AI capabilities by targeting Apple's research talent. Mark Lee has already started at Meta after leaving Apple in recent days, while Tom Gunter will begin work in the near future. Both researchers will join Meta's Superintelligence Labs team, which...
