The growing prevalence of artificial intelligence systems has sparked a public backlash, leading to calls for greater transparency and control over how AI technologies interact with personal data and daily life.
Current landscape: Public sentiment toward artificial intelligence has shifted significantly toward skepticism and concern, particularly regarding unauthorized use of personal data.
- The New York Times initiated legal action against OpenAI and Microsoft over copyright infringement in December 2023
- Nvidia faces a class action lawsuit from authors concerning alleged unauthorized use of copyrighted materials for AI training
- Actress Scarlett Johansson confronted OpenAI over the similarity between their ChatGPT voice model and her own voice
- Pew Research indicates that over 50% of Americans express more concern than enthusiasm about AI, with similar sentiment reflected globally
Emerging solutions: Red teaming, a security testing approach borrowed from military and cybersecurity sectors, is gaining traction as a method to evaluate AI systems.
- DLA Piper law firm employs red teaming with lawyers to verify AI compliance with legal frameworks
- Humane Intelligence conducts large-scale red teaming exercises to test AI systems for discrimination and bias
- A White House-supported initiative in 2023 involved 2,200 participants in red teaming exercises
- Future testing will focus on specific issues like Islamophobia and online harassment against women
The right to repair concept: A new framework is emerging that would give users greater control over AI systems they interact with.
- Users could potentially run diagnostics on AI systems and track resolution of reported issues
- Ethical hackers and third-party groups could develop accessible fixes for AI-related problems
- Independent accredited evaluators could customize AI systems for specific use cases
- This approach would help balance the current power dynamic between AI companies and users
Implementation challenges: The path toward establishing AI right to repair faces several obstacles.
- Current industry practice involves deploying untested AI models directly into real-world applications
- Companies often prioritize rapid deployment over thorough testing and verification
- Limited transparency exists regarding how AI systems make decisions or use personal data
- Existing regulatory frameworks may need significant updates to accommodate these new rights
Looking ahead: The movement toward greater AI accountability and user control represents a pivotal shift in the relationship between technology companies and the public, though significant work remains to establish effective oversight mechanisms and user protections.
We Need a New Right to Repair for Artificial Intelligence