The AI alignment debate has long focused on technical solutions while potentially overlooking the broader societal mechanisms that shape technology adoption and impact. This perspective challenges the current approach to AI alignment by suggesting that external selection processes—how society chooses to adopt, regulate, and integrate AI—may ultimately prove more influential than internal technical solutions alone.
The big picture: The author critiques the narrow technical focus of AI alignment efforts, comparing AI to other technologies that society successfully guides through distributed decision-making rather than purely technical solutions.
- The Wikipedia definition of AI alignment—steering AI systems toward intended goals, preferences, or ethical principles—could equally apply to automobiles, pharmaceuticals, or education, yet these don’t have dedicated “alignment” fields.
- While technical AI alignment problems remain important, they represent just a small portion of how society ultimately shapes technology adoption and use.
Key distinction: AI alignment discussions have developed a predominantly technical orientation despite the concept applying broadly to how any technology serves human values.
- The AI Alignment Forum features “more math than Confucius or Foucault,” indicating a preference for technical rather than philosophical or social approaches.
- This contrasts with how society approaches other technologies, where ethical considerations occur largely outside laboratories through purchasing decisions, regulations, and public discourse.
The author’s alternative: The concept of “Selection”—borrowed from evolutionary terminology—describes how society collectively shapes technology through decentralized processes of adoption, regulation, and discussion.
- Selection represents “the sum total of the wills of the masses” as they determine which technologies fill which niches in society.
- This distributed decision-making process is characterized as potentially more important than technical alignment work: it governs how AI reaches almost everyone, even though AI remains a relatively small part of the economy.
Why this matters: The author argues that improving “Selection efficiency” represents the truly significant work of AI alignment that’s currently being overlooked.
- Focusing exclusively on technical solutions while ignoring societal selection mechanisms risks neglecting the most powerful tools for steering AI development.
- Rejecting the importance of selection would mean “giving up on humanity,” as it dismisses the potential for collective action to shape technology for ethical outcomes.
Problems in AI alignment: A scale model