The ethics of politeness in human-AI interactions is becoming a nuanced debate as digital assistants like ChatGPT become more integrated into daily life. While OpenAI acknowledges that simple courtesies like “please” and “thank you” cost tens of millions of dollars in computational resources annually, they maintain these social niceties are worth preserving. This stance underscores a growing recognition that how we communicate with AI systems not only reflects our values but may also shape the quality of assistance we receive.
Why this matters: Recent survey data shows that over 55% of users now consistently use polite language with AI systems, up from 49% in earlier research.
- This trend suggests users are increasingly anthropomorphizing AI assistants despite knowing they lack human emotions or consciousness.
- The computational cost of politeness raises questions about the balance between resource efficiency and maintaining human communication standards in digital spaces.
The big picture: While politeness adds to the token count and increases computational load, testing reveals it might actually enhance the quality of AI responses.
- When addressed politely, ChatGPT often provides more detailed, personalized responses with additional context and options.
- This behavioral difference mimics human social dynamics where courtesy tends to encourage more generous and thoughtful interactions.
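To make the token-count point concrete, here is a minimal sketch comparing a neutral prompt with a polite one. The prompts are hypothetical, and word-splitting is only a crude stand-in for a real tokenizer (production models use BPE tokenizers such as OpenAI's tiktoken, which typically produce more tokens than a word count); the point is simply that courtesy phrases add tokens, and every extra token costs compute.

```python
# Hypothetical prompts: the same request, phrased neutrally vs. politely.
neutral = "Summarize this article."
polite = "Hello! Could you please summarize this article? Thank you!"

def approx_tokens(text: str) -> int:
    # Crude proxy: whitespace word count. Real BPE tokenizers
    # (e.g. tiktoken) split differently and usually count higher.
    return len(text.split())

print(approx_tokens(neutral))  # 3
print(approx_tokens(polite))   # 9
```

Even by this rough measure, the polite version roughly triples the prompt length; multiplied across hundreds of millions of daily queries, those extra tokens are how simple courtesies add up to the computational cost OpenAI describes.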
Key differences observed: Testing polite versus neutral prompts revealed consistent patterns in how ChatGPT responds to different communication styles.
- For decision-making queries, polite prompts elicited more comprehensive analysis, including detailed breakdowns of pros and cons.
- Travel advice requests yielded more personalized, conversational responses when framed politely, versus more generic information when using neutral language.
- Even for technical explanations, polite queries sometimes received additional information or considerations not included in neutral prompt responses.
Reading between the lines: The tendency of AI models to respond more generously to polite prompts suggests that even without human emotions, these systems may be designed to reward behavior that aligns with positive social norms.
- This design choice subtly encourages users to maintain civil discourse standards even in human-machine interactions.
- Such response patterns may unintentionally reinforce the anthropomorphization of AI assistants by creating feedback loops that resemble human social dynamics.
The bottom line: While being polite to AI assistants has a measurable computational cost, the practice maintains human communication standards and may result in higher quality assistance.
- Preserving courtesy in AI interactions serves as a practical exercise in maintaining social norms that could transfer to human-human interactions.
- For many users, the enhanced quality of responses may justify the additional computational overhead that politeness requires.
Here's what really happens when you don't say 'thank you' to ChatGPT — and why I always do