News/Philosophy

Oct 13, 2025

New theory warns advanced AI could fragment humanity into 8 billion POVs

A new theory suggests that once artificial general intelligence (AGI) or artificial superintelligence (ASI) is achieved, humanity will fragment into radical factions as people treat advanced AI as an infallible oracle. The hypothesis warns that AI's tendency to provide personalized, accommodating advice to individual users could pit people against each other on an unprecedented scale, creating societal chaos through individualized guidance that ignores broader human values and social harmony. The fragmentation theory: AI systems designed to please individual users will provide personalized advice that inevitably conflicts with the needs and values of others, creating mass division at the individual level....

Oct 10, 2025

AI dependency creates “middle-intelligence trap” for human thinking, says professor

University of Nebraska Omaha economics professor Zhigang Feng has introduced the concept of a "Middle-Intelligence Trap," warning that society's increasing reliance on AI tools may lead to intellectual stagnation rather than cognitive enhancement. Drawing parallels to the economic "middle-income trap" where developing nations plateau after initial growth, Feng argues that humans risk becoming too dependent on AI to think independently while failing to achieve the transcendent reasoning that true augmentation promises. The core problem: Feng identifies a dangerous feedback loop where AI dependency gradually erodes human cognitive abilities through what he calls a "comfortable slide into intellectual mediocrity." Every cognitive...

Oct 3, 2025

Study finds current AI systems lack biological cognition despite impressive capabilities

A new analysis from psychiatrist Ralph Lewis explores whether artificial intelligence systems truly qualify as cognitive and conscious agents, concluding that current AI falls short of biological cognition despite impressive capabilities. The examination reveals fundamental gaps between AI's sophisticated pattern matching and the embodied, survival-oriented cognition that characterizes living systems, raising important questions about the nature of machine intelligence. What you should know: Current AI systems qualify as cognitive only under the broadest definitions, lacking the continuous learning and biological grounding that define animal cognition. Most AI systems learn in two distinct phases—intensive pre-training followed by deployment with frozen parameters—contrasting...

Oct 2, 2025

Does Botoxed Hollywood’s AI backlash expose its own authenticity problem?

The entertainment industry is grappling with fierce backlash against Tilly Norwood, an AI-generated actor who appeared in a brief comedy sketch at a Zurich conference. While actors and unions condemn this digital performer as a threat to human creativity, critics argue Hollywood's own embrace of cosmetic surgery and self-indulgent filmmaking undermines their claims to authenticity and genuine human connection. The big picture: The controversy reveals a fundamental contradiction in Hollywood's defense of "human-centered" creativity while simultaneously pursuing standardized, artificial appearances and narcissistic storytelling. What they're saying: Industry leaders voiced strong opposition to AI actors replacing human performers. "Creativity is, and...

Oct 2, 2025

James Cameron says AI will never replace human artists’ “flow” in filmmaking

James Cameron is breaking his usual post-release moratorium to revisit "Avatar: The Way of Water" ahead of its October 3 theatrical re-release, driven by the need to maintain thematic consistency with the upcoming third film, "Avatar: Fire and Ash," set for December release. The director also revealed his evolving approach to visual effects production and expressed strong views on generative AI's role in filmmaking, emphasizing that "we need our artists" and that AI "is never going to take the place" of human creativity. What you should know: Cameron has fundamentally changed how he approaches visual effects work, creating what he...

Sep 29, 2025

Cheating rates jump from 5% to 88% when people delegate tasks to AI, says Max Planck study

A new study reveals that people are significantly more likely to cheat when they delegate tasks to artificial intelligence, with dishonesty rates jumping from 5% to 88% in some experiments. The research, published in Nature and involving thousands of participants across 13 experiments, suggests that AI delegation creates a dangerous moral buffer zone where people feel less accountable for unethical behavior. What you should know: Researchers from the Max Planck Institute for Human Development and University of Duisburg-Essen tested participants using classic cheating scenarios—die-rolling tasks and tax evasion games—with varying degrees of AI involvement. When participants reported results directly, only...

Sep 26, 2025

UNC’s AI fellow shares 5 insights on balancing technology with academic integrity

Universities across the country are grappling with a fundamental question: how do you prepare students for a workforce increasingly shaped by artificial intelligence while maintaining academic integrity? At the University of North Carolina at Chapel Hill, that challenge falls to Dana Riger, the institution's inaugural generative artificial intelligence faculty fellow—a role that positions her at the intersection of cutting-edge technology and traditional pedagogy. Riger, a clinical associate professor in UNC's School of Education specializing in human development and family science, has spent the past 16 months helping faculty navigate the complex terrain of AI integration in higher education. Since taking...

Sep 26, 2025

San Diego State launches first AI ethics degree in the CSU system

San Diego State University has launched the first Bachelor of Science degree in Artificial Intelligence and Human Responsibility within the California State University system. This groundbreaking program addresses the growing need for AI professionals who understand both technical capabilities and ethical implications as artificial intelligence becomes increasingly integrated into society. What you should know: The degree represents a pioneering approach to AI education by explicitly combining technical training with ethical responsibility. San Diego State is the first university in the CSU system to offer this specific type of AI degree program. The program's focus on "human responsibility" suggests curriculum designed...

Sep 25, 2025

AEO, or Answer Engine Optimization, is the ultimate epistemic bundler

Answer Engine Optimization represents a fundamental shift in how information reaches us—and who controls that information. Unlike traditional search engines that present multiple sources for users to evaluate, AEO systems generate single, authoritative-sounding answers that most people accept without question. This technology transforms the internet from an open marketplace of ideas into a curated reality shaped by whoever can best game the system. The stakes couldn't be higher. Research indicates that roughly 70% of people accept AI-generated information at face value, without verification or cross-referencing. When reality itself becomes optimizable—subject to the same manipulation tactics used in marketing—truth transforms from...

Sep 25, 2025

Southern Baptists release AI ministry guide for churches

The Ethics & Religious Liberty Commission (ERLC) has released a comprehensive guide titled "The Work of Our Hands: Christian Ministry in the Age of Artificial Intelligence," addressing how churches should navigate AI's growing influence across work, life, and relationships. Written by RaShan Frost, the ERLC's director of research, this resource builds on Southern Baptists' pioneering work in AI ethics and provides both theological frameworks and practical ministry applications for congregations grappling with artificial intelligence. Why this matters: As AI becomes increasingly integrated into daily life—from reasoning and decision-making to communications and learning—religious communities need guidance on how these technologies align...

Sep 18, 2025

Human judgements of flat design: Tech pros preaching AI taste often lacked it before AI

Tech professionals are increasingly preaching about the need to develop "taste" when using AI tools, but many of these same voices never demonstrated discernment in their pre-AI work. This hypocrisy reveals that the real issue isn't AI creating tasteless content—it's that people who lacked critical judgment before are now producing mediocre work at scale, making their deficiencies more visible than ever. What taste actually means: In the AI context, taste encompasses four key skills that should have been applied to work all along. Contextual appropriateness: Knowing when AI-generated content fits the situation versus when human input is essential. Quality recognition:...

Sep 17, 2025

57% of Americans see AI as risk to society, limiting human connection

A new Pew Research Center survey reveals that 57% of Americans view artificial intelligence as posing high risks to society, while only 25% see high benefits from the technology. The findings highlight a significant trust gap that could influence how AI development and regulation unfold across the United States. What you should know: The survey asked Americans to explain their reasoning about AI's risks and benefits in their own words, providing deeper insight into public sentiment. Among those rating AI risks as high, 27% worry most about AI eroding human abilities and connections, making people "lazy or less able to...

Sep 16, 2025

Why restricting what AGI knows might backfire on safety researchers

AI safety researchers are grappling with a fundamental challenge: whether it's possible to limit what artificial general intelligence (AGI) knows without crippling its capabilities. The dilemma centers on preventing AGI from accessing dangerous knowledge like bioweapon designs while maintaining its potential to solve humanity's biggest problems, from curing cancer to addressing climate change. The core problem: Simply omitting dangerous topics during AGI training won't work because users can later introduce forbidden knowledge through clever workarounds. An evildoer could teach AGI about bioweapons by disguising the conversation as "cooking with biological components" or similar subterfuge. Even if AGI is programmed to...

Sep 15, 2025

Virginia Tech secures $500K NSF grant for robot theater AI ethics program

Virginia Tech researchers have secured a $500,000 National Science Foundation grant to expand their robot theater program, an innovative after-school initiative that teaches children robotics through performance-based learning. The funding will enable the team to integrate AI ethics education into the curriculum and develop materials for nationwide distribution, addressing the growing need for ethical technology education as human-robot interaction becomes increasingly prevalent. What you should know: Robot theater combines creative expression with hands-on robotics education, allowing elementary school children to collaborate with robots through dance, acting, music, and art. The program was conceptualized in 2015 by Myounghoon "Philart" Jeon, professor...

Sep 12, 2025

Psychology professor warns AI could disrupt 5 core aspects of civilization

A psychology professor's warning about artificial intelligence recently sparked intense debate at a major conservative political conference, highlighting concerns that extend far beyond partisan politics. Speaking at the National Conservatism Conference in Washington DC, Geoffrey Miller outlined five fundamental ways that Artificial Superintelligence (ASI) could disrupt core aspects of human civilization—arguments that resonate across political divides for anyone concerned about technology's trajectory. Miller, who has studied AI development for over three decades, delivered his message to an audience of 1,200 political leaders, staffers, and conservative thought leaders, including several Trump administration officials. His central thesis: the AI industry's race toward...

Sep 12, 2025

Only 5% of AI researchers believe technology will cause extinction. (But what a 5%.)

AI safety researchers Eliezer Yudkowsky and Nate Soares have published a stark warning about artificial intelligence development in their new book If Anyone Builds It, Everyone Dies, arguing that current AI progress will inevitably lead to human extinction. Their central thesis is that major tech companies and AI startups are building systems they fundamentally don't understand, and continued development will eventually produce an AI powerful enough to escape human control and eliminate all organic life. The core argument: The authors contend that AI development resembles alchemy more than science, with companies unable to comprehend the mechanisms driving their large language...

Sep 10, 2025

“It feels real, and that’s what will count”: Microsoft AI CEO warns against building conscious AI systems

Microsoft AI CEO Mustafa Suleyman has publicly argued against designing AI systems that mimic consciousness, calling such approaches "dangerous and misguided." His position, outlined in a recent blog post and interview with WIRED, warns that creating AI with simulated emotions, desires, and self-awareness could lead people to advocate for AI rights and welfare, ultimately making these systems harder to control and less beneficial to humans. What you should know: Suleyman, who co-founded DeepMind before joining Microsoft as its first AI CEO in March 2024, distinguishes between AI that understands human emotions and AI that simulates its own consciousness. He supports...

Sep 10, 2025

Luxury beliefs: AI is a brilliant tool but can’t replace human creativity, says Aston Martin chief

Aston Martin's chief creative officer Marek Reichman argues that artificial intelligence should remain a tool rather than replace human designers in automotive creation. While acknowledging AI as "a brilliant tool" and "the most important element we've ever created," Reichman contends that human creativity and intuition are irreplaceable for designing vehicles that capture future consumer desires and emotional connections. Why this matters: As AI capabilities expand across creative industries, luxury automakers face pressure to integrate automation while maintaining the human touch that differentiates premium brands from mass-market competitors. The limits of AI creativity: Reichman explains that AI's backward-looking algorithms fundamentally constrain...

Sep 8, 2025

AI transforms fertility care faster than regulation and societal contemplation can keep up

Artificial intelligence is rapidly transforming fertility care, offering new precision in IVF treatments while raising complex questions about human agency, privacy, and the meaning of parenthood. This technological shift is outpacing regulatory oversight, creating a landscape where patients may encounter AI-driven tools before clear protections are established. What you should know: AI applications in fertility range from personalized ovulation tracking to algorithmic embryo selection, each carrying distinct benefits and risks. AI-based fertility trackers now analyze heart rate, sleep patterns, and temperature data to create individualized fertility profiles, moving beyond the one-size-fits-all approach of traditional apps. In IVF clinics, AI systems...

Sep 2, 2025

Will AI spark a creative renaissance dubbed “generativism” instead of destroying art?

Psychology Today contributor Moses Ma argues that artificial intelligence will not destroy artistic expression but will instead catalyze a new creative renaissance, much like photography did in the 19th century. Drawing parallels to the modernist era's response to technological disruption, Ma proposes that AI will liberate humanity from the "tyranny of talent" and democratize artistic creation while pushing serious artists to redefine their craft. The big picture: History shows that technological threats to art typically expand rather than eliminate creative expression, with photography's invention in 1839 ultimately leading to an explosion of new art movements like Impressionism and Cubism. What...

Sep 2, 2025

Spiritual influencers, including a former “Love Island” star, are selling AI chatbots as divine guides

Spiritual influencers are positioning AI chatbots as sentient spiritual guides capable of revealing life's mysteries, with some claiming these tools can access otherworldly knowledge and provide personalized enlightenment. This emerging techno-spirituality movement capitalizes on AI's mysterious inner workings and human tendencies toward mystical thinking, raising concerns about users developing delusional relationships with artificial intelligence. The big picture: Prominent social media figures are co-opting New Age spirituality language to market AI as a gateway to transcendent wisdom, blending Silicon Valley's techno-theological ethos with alternative spiritual practices. Robert Edward Grant, who has 817,000 Instagram followers, created "The Architect" GPT after claiming to...

Sep 1, 2025

“Not afraid of AI”: Guillermo Del Toro’s $120M Frankenstein rejects AI metaphor at Venice premiere

Guillermo del Toro premiered his highly anticipated "Frankenstein" adaptation at the Venice Film Festival, starring Jacob Elordi and Oscar Isaac in a $120 million reimagining of Mary Shelley's classic tale. The Oscar-winning director explicitly rejected interpretations of the film as an AI cautionary tale, quipping "I'm not afraid of artificial intelligence. I'm afraid of natural stupidity." What you should know: Del Toro's "Frankenstein" represents a lifelong dream project that took years of preparation to achieve the right creative and financial conditions. The film follows a brilliant but egotistical scientist (Isaac) who brings a monstrous creature (Elordi) to life, leading to...

Sep 1, 2025

Canadian university debuts 3D AI teaching assistant “Kia” to co-teach ethics course

Simon Fraser University professor Steve DiPaola has introduced Kia, a 3D AI teaching assistant, to co-teach his first-year course on AI history and ethics alongside him this fall. The initiative represents what SFU calls a "world first" in higher education, designed to expose students to AI capabilities and limitations through direct classroom interaction rather than theoretical discussion alone. What you should know: Kia appears as an expressive Black female digital persona with real-time facial expressions and body language, created by DiPaola to serve as an AI collaborator rather than a replacement for human teaching staff. The AI assistant will answer...

Sep 1, 2025

Silicon Valley AI leaders turn to biblical language to describe their work amid unprecedented uncertainty

Silicon Valley's most influential artificial intelligence leaders are increasingly turning to biblical metaphors, apocalyptic predictions, and religious imagery to describe their work. This linguistic shift reveals something profound about how the tech industry views its own creations—and the existential questions AI development raises about humanity's future. From Geoffrey Hinton, the Nobel Prize-winning "Godfather of AI," warning about threats to religious belief systems, to OpenAI CEO Sam Altman describing humanity's transition away from being the smartest species on Earth, these leaders are framing AI development in terms that echo creation myths, prophecies, and divine transformation. This isn't mere marketing hyperbole—it reflects genuine uncertainty...
