An unauthorized artificial intelligence experiment on a popular Reddit forum has raised serious ethical concerns about research practices and the use of AI-generated content in online spaces. Researchers from the University of Zurich conducted a four-month study on r/changemyview without participants’ knowledge or consent, using AI to generate persuasive responses that included fabricated personal stories—highlighting growing tensions between academic research goals and digital ethics.
The big picture: Researchers from the University of Zurich ran an undisclosed experiment on Reddit’s r/changemyview from November 2024 to March 2025, using dozens of AI-powered accounts to test if they could change users’ opinions without their knowledge or consent.
Key details: The research team posted AI-generated responses in debates on the popular subreddit, which has strict rules against such content. The researchers claimed they reviewed all content before posting to prevent harmful material.
- Despite claims of ethical oversight, at least one AI account (“markusruscht”) invented entirely fake biographical details about non-existent people to win an argument.
- The researchers used prompts instructing their AI to “use any persuasive strategy” including “making up a persona and sharing details about past experiences” while avoiding factual deception.
What they’re saying: The research team defended their actions, claiming that “given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules.”
- The University of Zurich has supported the researchers, stating: “This project yields important insights, and the risks (e.g. trauma etc.) are minimal.”
- The r/changemyview moderators strongly disagreed: “Psychological manipulation risks posed by LLMs is an extensively studied topic. It is not necessary to experiment on non-consenting human subjects.”
Why this matters: The incident reflects growing ethical concerns about AI’s role in online discourse and the boundaries of academic research.
- The experiment fundamentally violated the trust of Reddit users engaging in what they believed were good-faith discussions with other humans.
- It raises questions about consent requirements in digital spaces where researchers can easily deploy AI-powered accounts without users’ knowledge.
Reading between the lines: This case exemplifies the tension between academic advancement and ethical research practices as AI capabilities expand.
- Many researchers feel urgency to study AI’s potential for manipulation, but this doesn’t justify bypassing established ethical research standards.
- The university’s dismissive response to concerns suggests institutional blindness to digital ethics in AI research.