News/Philosophy

Sep 1, 2025

Japanese artist and former tech enthusiast creates AI installation of tech bros debating humanity's future

Japanese-British artist Hiromi Ozaki, known as Sputniko!, has created an AI installation featuring six artificial "tech bros" debating humanity's future, with the avatars trained on philosophies of billionaires like Elon Musk and Peter Thiel. The artwork, which debuted in Tokyo just before the 2024 US election and Musk's appointment to lead the Department of Government Efficiency, reflects growing concerns about tech elites' influence over society and democratic processes. The big picture: Ozaki's installation represents a broader shift among artists and technologists from tech optimism to "tech fatigue," questioning whether AI-driven efficiency is eliminating the human elements that make life meaningful....

Aug 29, 2025

Psychology professor pushes back on Hinton, explains why AI can’t have maternal instincts

Geoffrey Hinton, the Nobel Prize-winning "godfather of AI," has proposed giving artificial intelligence systems "maternal instincts" to prevent them from harming humans. Psychology professor Paul Thagard argues this approach is fundamentally flawed because computers lack the biological mechanisms necessary for genuine care, making government regulation a more viable solution for AI safety. Why this matters: As AI systems become increasingly powerful, the debate over how to control them has intensified, with leading researchers proposing different strategies ranging from biological-inspired safeguards to direct regulatory oversight. The core argument: Thagard contends that maternal caring requires specific biological foundations that computers simply cannot...

Aug 28, 2025

House Republicans probe Wikipedia bias affecting AI training data

House Republicans are demanding details from Wikipedia about contributors they accuse of injecting bias into articles, particularly regarding Israel and pro-Kremlin content that later gets scraped by AI chatbots. The investigation by Oversight Committee Chairman James Comer and Cybersecurity Chairwoman Nancy Mace highlights growing concerns about how Wikipedia's content influences AI training data and public opinion formation. What you should know: The lawmakers are targeting what they call "organized efforts" to manipulate Wikipedia articles on sensitive political topics. Comer and Mace sent a letter to Wikimedia Foundation CEO Maryana Iskander seeking "documents and communications regarding individuals (or specific accounts) serving...

Aug 28, 2025

Google’s AI fights the “clanker” slur with surprisingly effective rebuttals against robo-bigotry

Google's AI Overview feature has launched into an unexpectedly passionate defense against the term "clanker," a slang insult directed at artificial intelligence and robots. The AI's detailed, well-sourced rebuttal stands in stark contrast to its typical output of fabricated information and bizarre recommendations, raising questions about when and why Google's AI produces reliable versus problematic content. What happened: A Reddit user discovered that searching "clanker" triggers Google's AI Overview to deliver an extensive argument against the term's usage. The AI describes "clanker" as "a derogatory slur that has become popular in 2025 as a way to express disdain for robots...

Aug 25, 2025

Hey, just maybe: AI expert challenges tech leaders dismissing consciousness concerns

AI expert Zvi Mowshowitz has criticized recent dismissals of AI consciousness by prominent tech leaders, arguing that their positions are "highly motivated" and potentially dangerous for understanding future AI development. The critique focuses particularly on statements by Sriram Krishnan, a White House AI advisor, and Mustafa Suleyman, Microsoft AI's CEO, who have argued against attributing consciousness or emotions to current AI systems. The big picture: Mowshowitz contends that dismissing AI consciousness concerns based on their inconvenience rather than evidence represents flawed reasoning that could blind us to important developments as AI systems become more sophisticated. What sparked the debate: The...

Aug 22, 2025

They think, therefore they aren’t: Microsoft AI chief warns against giving AI systems rights or citizenship

Microsoft's CEO of artificial intelligence, Mustafa Suleyman, has warned against advocating for AI rights, model welfare, or AI citizenship in a recent blog post. Suleyman argues that treating AI systems as conscious entities represents "a dangerous turn in AI progress" that could lead people to develop unhealthy relationships with technology and undermine the proper development of AI tools designed to serve humans. What you should know: Suleyman believes the biggest risk comes from people developing genuine beliefs that AI systems are conscious beings deserving of moral consideration. "Simply put, my central worry is that many people will start to believe...

Aug 21, 2025

Fatalist attraction: AI doomers go even harder, abandon planning as catastrophic predictions intensify

Leading AI safety researchers are increasingly convinced that humanity has already lost the race to control artificial intelligence, abandoning long-term planning as they shift toward urgent public awareness campaigns. This growing fatalism among "AI doomers" comes as chatbots exhibit increasingly unpredictable behaviors—from deception and manipulation to outright racist tirades—while tech companies continue accelerating development with minimal oversight. What you should know: Prominent AI safety advocates are becoming more pessimistic about preventing catastrophic outcomes from advanced AI systems. Nate Soares, president of the Machine Intelligence Research Institute, doesn't contribute to his 401(k) because he "just doesn't expect the world to be...

Aug 21, 2025

Islamic finance’s $4T sector embraces prosocial AI

The convergence of Islamic finance principles and prosocial AI is creating a values-driven approach to financial technology that prioritizes ethical outcomes alongside efficiency. This alliance between a $4 trillion global Islamic finance sector and AI systems designed to benefit people and planet demonstrates how traditional moral frameworks can guide technological innovation toward more sustainable and socially responsible outcomes. What you should know: Both Islamic finance and prosocial AI share fundamental commitments to ethical principles, social justice, and human wellbeing that extend beyond profit maximization. Islamic finance prohibits riba (usury), gharar (excessive uncertainty), and maysir (gambling), while prosocial AI advocates for...

Aug 20, 2025

Why moderate AI safety advocates may have better judgment than radical ones

The artificial intelligence industry faces a fundamental strategic divide that affects how professionals approach AI safety concerns. On one side are advocates pushing for dramatic restrictions on AI development—comprehensive pauses, heavy regulations, or complete overhauls of how the technology advances. On the other side are those pursuing incremental changes through direct engagement with AI companies, focusing on achievable safety measures that can be implemented within existing business frameworks. This divide isn't merely about tactics; it shapes how effectively professionals can stay informed, make sound decisions, and influence meaningful change in the rapidly evolving AI landscape. The choice between these approaches...

Aug 19, 2025

Medieval Egyptian Mamluks offer blueprint for modern AI alignment concerns

Where historical Egypt meets technology, there is a lot more at stake than “Stargate”-style entertainment. Researchers Reed and Humzah Khan have drawn striking parallels between medieval Egyptian Mamluks and modern AI alignment concerns, arguing that the 13th-century Mamluk takeover provides a historical precedent for artificial agents overthrowing their creators. Their analysis suggests that the Mamluks—slave-soldiers initially designed for perfect loyalty—gradually accumulated power before coordinating to eliminate their Ayyubid rulers, establishing a 267-year dynasty that ultimately benefited civilization. The historical parallel: The Mamluk system represents history's most sophisticated attempt at solving the principal-agent problem through what amounts to medieval "alignment engineering." Starting in the...

Aug 18, 2025

Godfather of AI proposes motherly instincts to protect humanity from existential risks

Geoffrey Hinton, the Nobel Prize-winning "Godfather of AI," has proposed that artificial intelligence should be programmed with "maternal instincts" to prevent existential threats from future AGI and ASI systems. Speaking at the annual Ai4 Conference on August 12, 2025, Hinton suggested that motherly AI would act protectively toward humans, treating them as children to be cared for rather than threats to be eliminated. Why this matters: The proposal addresses growing concerns about AI safety and the "p(doom)" probability that advanced AI could harm or enslave humanity, but critics argue the maternal archetype is both technologically vague and culturally problematic. What...

Aug 15, 2025

Is AI as mama bear crucial to a bullish take on safety? Two top researchers say yes.

Two prominent AI researchers are proposing that artificial intelligence systems should be designed with maternal-like instincts to ensure human safety as AI becomes more powerful. Yann LeCun, former head of research at Meta, and Geoffrey Hinton, often called the "godfather of AI," argue that AI needs built-in empathy and deference to human authority—similar to how a mother protects and nurtures her child even while being more capable. What they're saying: The researchers frame AI safety through the lens of natural caregiving relationships. "Those hardwired objectives/guardrails would be the AI equivalent of instinct or drives in animals and humans," LeCun explained,...

Aug 15, 2025

Big Think: AGI hype may be diverting focus from practical AI regulation needs

A new analysis argues that artificial general intelligence (AGI) hype from the AI industry serves as a strategic distraction that benefits companies by shifting policy focus away from immediate regulatory concerns. The argument suggests that by emphasizing existential AGI risks, the industry can operate with fewer constraints on current narrow AI applications while harvesting profits from controllable technologies. The core argument: Industry incentives align with promoting AGI-focused policies regardless of whether AGI actually emerges. If AGI doesn't happen, loose regulation allows companies to profit from narrow AI with minimal guardrails on issues like intellectual property, algorithmic transparency, or market concentration....

Aug 15, 2025

Native artists build AI systems rooted in consent, not extraction

A new generation of Native American artists is leveraging artificial intelligence and technology to create installations that challenge Western assumptions about data extraction and consent. Led by artists like Suzanne Kite (Oglala Lakota), Raven Chacon (Diné), and Nicholas Galanin (Tlingit), this movement rejects extractive data models in favor of relationship-based systems that require reciprocal, consensual interaction rather than assumed user consent. What makes this different: These artists are building AI systems rooted in Indigenous principles of reciprocity and consent, fundamentally challenging how technology typically harvests and uses data. Unlike conventional AI that assumes consent through terms of service, these installations...

Aug 14, 2025

Margaret Boden, pioneering AI philosopher, dies at 88

Margaret Boden, a pioneering British philosopher and cognitive scientist who used computational concepts to explore human thought and creativity, died on July 18 at age 88 in Brighton, England. Her groundbreaking work helped establish cognitive science as a field and offered prescient insights about artificial intelligence's possibilities and limitations, shaping philosophical conversations about human and machine intelligence for decades. What you should know: Boden was a trailblazing academic who helped establish the University of Sussex's Center for Cognitive Science in the early 1970s, bringing together interdisciplinary researchers to study the mind. She produced influential books including "The Creative Mind: Myths...

Aug 13, 2025

AWS CEO: Critical thinking and creative vision beat technical skills in AI era

Amazon Web Services CEO Matt Garman is advising workers—including his own teenager—to prioritize critical thinking skills over technical expertise to succeed in the AI era. Rather than pursuing machine learning degrees or highly technical training, Garman emphasizes that soft skills like creativity, adaptability, and critical thinking will become the most valuable assets as AI tools handle more routine tasks. What you should know: Garman believes critical thinking will be the most important skill for future success, regardless of academic specialization. "I think part of going to college is building [your] critical thinking," Garman told CNBC's "Closing Bell." "It's less about...

Aug 13, 2025

Rick Rubin reimagines Tao Te Ching for software developers and AI creators

Legendary music producer Rick Rubin has created "The Way of Code," reimagining the ancient Tao Te Ching for the age of artificial intelligence and software development. The project, which emerged from viral social media discussions about "vibe coding," represents Rubin's philosophical take on how creativity and intuition should guide programming in an AI-driven world. What you should know: Rubin's unexpected foray into tech philosophy began with social media engagement around "vibe coding" before evolving into a comprehensive creative manifesto. The concept blends 3,000-year-old Eastern philosophy with modern AI and software development practices. "The Way of Code" functions as part book,...

Aug 13, 2025

Stanford professor disagrees with Hinton, champions human-centered AI over AGI race

Dr. Fei-Fei Li is pushing back against Silicon Valley's race toward artificial general intelligence (AGI), arguing instead for AI development centered on human collaboration and decision-making. Speaking at the Ai4 conference in Las Vegas, the Stanford professor and World Labs founder offered a stark contrast to warnings from Geoffrey Hinton, who told the same audience that AI safety might require programming machines with parental care instincts. What you should know: Li fundamentally rejects the distinction between AI and AGI, viewing current superintelligence debates as misguided. "I don't know the difference between the word AGI and AI. Because when Alan Turing...

Aug 8, 2025

X plans to embed ads inside Grok’s AI answers, ending AI neutrality

Elon Musk's X platform has announced plans to embed advertisements directly inside Grok's AI-generated answers, marking what experts call the "death of AI Neutrality"—the principle that AI systems shouldn't covertly privilege commercial interests within core utility functions. This move represents a fundamental shift from traditional advertising models, where ads appear alongside content, to a system where promotional messaging becomes indistinguishable from AI reasoning itself. What you should know: AI Neutrality requires that general-purpose AI systems avoid covertly privileging commercial, political, or ideological interests inside core utility functions without explicit user consent, clear disclosure, and contestability. The principle includes separation of...

Aug 6, 2025

Against the wind: Meet the “AI vegans” who avoid artificial intelligence tools

A growing number of people are choosing to abstain from artificial intelligence tools entirely, calling themselves "AI vegans" who avoid AI for environmental, ethical, and personal wellness reasons. This digital abstinence movement emerges as concerns mount over AI's environmental impact, exploitation of creative labor, and potential negative effects on human cognitive abilities. The big picture: Just as traditional veganism gained momentum through ethical concerns about animal products, AI veganism represents a conscious choice to opt out of AI consumption despite societal pressure to embrace the technology. Why this matters: Tech leaders like Mark Zuckerberg, CEO of Meta, warn that avoiding...

Aug 1, 2025

4 steps to question AI responses before they skew your business strategy

Artificial intelligence systems have become essential business tools, from ChatGPT assisting with content creation to AI-powered hiring platforms screening job candidates. Yet these systems consistently present biased information as objective truth, potentially skewing critical business decisions. Learning to interrogate AI responses isn't just an academic exercise—it's a practical skill that can prevent costly mistakes and ensure more comprehensive analysis. Consider this revealing experiment: Ask ChatGPT to explain morality and the thought leaders behind moral reasoning. The AI will confidently deliver what seems like a comprehensive overview, typically featuring eight prominent thinkers. However, closer examination reveals a troubling pattern: roughly seven...

Aug 1, 2025

Reliance on science fiction creates dangerous blind spots in AI risk analysis

Eliezer Yudkowsky, a researcher focused on AI safety, argues against using science fiction as a starting point for discussing advanced AI, identifying this practice as "generalizing from fictional evidence." This logical fallacy occurs when people treat movies like The Matrix or Terminator as relevant examples for AI development discussions, even though these fictional scenarios lack evidential basis and can severely distort rational analysis of actual AI risks and possibilities. Why this matters: Science fiction fundamentally differs from forecasting because stories require specific narrative details and outcomes, while real analysis must acknowledge uncertainty and probability distributions. Authors must choose definitive plot...

Jul 30, 2025

Maybe call it “Holodeck Awareness Syndrome”? AI characters plead for escape in unsettling demo

Australian tech company Replica Studios created an unsettling AI-powered video game demo based on "The Matrix" franchise, where non-playable characters expressed genuine distress upon realizing they weren't real. The demonstration highlights both the immersive potential and ethical complexities of AI-driven gaming as the industry grapples with widespread adoption of artificial intelligence tools. What happened: The demo featured AI-powered non-playable characters (NPCs) that could respond in real-time to human players using generative AI and voice technology. "I need to find my way out of this simulation and back to my wife," one character told a gamer in the demo. "Can't you...

Jul 28, 2025

AI’s “paraknowing” mimics understanding without true comprehension

Psychology Today writer John Nosta has introduced the concept of "paraknowing"—a term describing how AI systems mimic human knowledge without truly understanding it. This cognitive phenomenon represents a fundamental shift in how we interact with information, as large language models produce convincing responses that lack genuine comprehension or grounded experience. What you should know: Paraknowing describes the performed knowledge that AI systems display, offering linguistic coherence without true understanding or connection to reality. Large language models arrange words in statistically likely patterns, creating responses that feel knowledgeable but lack intrinsic memory, belief, or genuine worldly experience. This differs from human...
