6 reasons why “alignment-is-hard” discourse seems alien to human intuitions, and vice-versa
TL;DR: AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school of thought argues that we should expect future powerful AIs to be power-seeking, ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things …