AI’s Persuasive Power: Balancing Public Health Benefits with Manipulation Risks

OpenAI’s CEO Sam Altman is promoting AI’s potential to positively influence public health by persuading people to make healthier choices, while the company also grapples with the risks of increasingly persuasive AI models.

Key takeaways: OpenAI is actively researching how persuasive its AI models can be and the risks that come with that power:

  • Altman co-authored an article highlighting Thrive AI, a startup aiming to use AI to nudge people towards healthier behaviors through personalized recommendations.
  • OpenAI’s Preparedness team, led by Aleksander Madry, is studying the extent to which AI models can be used to persuade people, as well as the associated risks.

The double-edged nature of persuasive AI: While AI’s persuasive abilities could be harnessed for positive outcomes, they also pose significant risks:

  • Language models have become increasingly persuasive as they have grown in size and sophistication, with research suggesting that AI-generated arguments can effectively change people’s opinions.
  • Persuasive AI could be misused to make misinformation more compelling, generate convincing phishing scams, or advertise products in manipulative ways.

Studying the long-term effects: OpenAI and other researchers have yet to fully explore the potential impact of AI programs that interact with users over extended periods:

  • Chatbots that roleplay as romantic partners or other characters are becoming increasingly popular, but how addictive and persuasive these bots can be remains largely unknown.
  • Understanding the long-term effects of persuasive AI will be crucial in developing appropriate safeguards and regulations.

Broader implications: As policymakers grapple with the risks posed by advanced AI, it is essential to focus on the subtle dangers of persuasive algorithms rather than solely on hypothetical existential threats:

  • The excitement generated by ChatGPT has led many to focus on the question of whether AI could someday turn against its creators.
  • However, the more immediate risks posed by AI's persuasive capabilities, such as misuse and manipulation, require careful consideration and proactive regulation to ensure these technologies are developed and deployed responsibly.
OpenAI Is Testing Its Powers of Persuasion