The tech industry’s audacious attitude and actions regarding generative AI are a cause for concern among many critics, who argue that a small group of Silicon Valley leaders believe they are on the cusp of creating an artificial superintelligence that will radically reshape society.
Here are those critics’ views:
Key takeaways:
- AI companies are brushing aside problems such as copyright infringement, job displacement, and the spread of low-quality content, while embracing a manifest-destiny attitude toward their technologies.
- Some AI leaders are expressing paternalistic views about the future, suggesting that those who don’t embrace their technology will be left behind.
- There are material concerns, such as the environmental impact of AI development and the audacious use of third-party tools to harvest information without permission.
Rhetoric and power dynamics:
- OpenAI’s CTO suggested that some creative and repetitive jobs AI can replace shouldn’t have existed in the first place, while CEO Sam Altman has implied that the “median human” worker’s labor might be sacrificed for progress.
- A former OpenAI employee’s manifesto articulates how the architects of this technology see themselves: a small, intellectually superior group bound together to shape the future, with little input from the rest of society.
Audacity as a liability: Audacity can be a positive force for progress, but it becomes a liability when builders become untethered from reality or believe they have the right to impose their values on others:
- The apocalyptic visions and looming superintelligence narratives are hypotheticals, not facts, and the actual state of the AI industry in 2024 is one of entitled audacity.
- AI companies wish to be left alone to “scale in peace,” commandeering creative resources and eminent-domaining the entire internet, while expecting the public to trust their ability to build and deploy these tools safely and responsibly.
Broader implications: generative AI’s chief export may not be its outputs but rather the unearned, entitled audacity it has produced in the people who build these systems:
- This audacity is yet another example of AI producing hallucinations—not in the machines themselves, but in the minds of their creators.
- The concentration of power and influence among a small group of AI leaders, coupled with their grandiose visions for the future, raises concerns about the societal impact of this technology and the need for broader input and oversight.
Silicon Valley’s ‘Audacity Crisis’