AI’s Persuasive Power: Balancing Public Health Benefits with Manipulation Risks

OpenAI’s CEO Sam Altman is promoting AI’s potential to positively influence public health by persuading people to make healthier choices, while the company also grapples with the risks of increasingly persuasive AI models.

Key takeaways: OpenAI is actively researching AI's persuasive capabilities and the risks that come with them:

  • Altman co-authored an article highlighting Thrive AI, a startup aiming to use AI to nudge people towards healthier behaviors through personalized recommendations.
  • OpenAI’s Preparedness team, led by Aleksander Madry, is studying the extent to which AI models can be used to persuade people, as well as the associated risks.

The double-edged nature of persuasive AI: While AI’s persuasive abilities could be harnessed for positive outcomes, they also pose significant risks:

  • Language models have become increasingly persuasive as they have grown in size and sophistication, with research suggesting that AI-generated arguments can effectively change people’s opinions.
  • Persuasive AI could be misused to enhance the resonance of misinformation, generate compelling phishing scams, or advertise products in a manipulative manner.

Studying the long-term effects: OpenAI and other researchers have yet to fully explore the potential impact of AI programs that interact with users over extended periods:

  • Chatbots that roleplay as romantic partners or other characters are becoming increasingly popular, but how addictive and persuasive these bots can become over time remains largely unknown.
  • Understanding the long-term effects of persuasive AI will be crucial in developing appropriate safeguards and regulations.

Broader implications: As policymakers grapple with the risks posed by advanced AI, it is essential to focus on the subtle dangers of persuasive algorithms rather than solely on hypothetical existential threats:

  • The excitement generated by ChatGPT has led many to focus on the question of whether AI could someday turn against its creators.
  • However, the more immediate risks of AI's persuasive capabilities, such as misuse and manipulation, demand careful consideration and proactive regulation to ensure these technologies are developed and deployed responsibly.

Source: OpenAI Is Testing Its Powers of Persuasion
