Stanford HAI’s New Policy Fellow to Study AI’s Implications for Safety and Privacy

AI governance and civil liberties: Riana Pfefferkorn, a new policy fellow at the Stanford Institute for Human-Centered AI, is studying how AI governance can protect people’s rights while mitigating harmful uses of the technology.

  • Pfefferkorn’s research covers a range of topics, including government approaches to encryption and digital surveillance, generative AI and online safety, and the trustworthiness of court evidence.
  • Her background blends legal expertise with a commitment to the public interest: she has advised startups, represented major tech companies, and clerked for a federal judge.
  • At Stanford HAI, she will continue to bring law and policy analysis to social issues raised by emerging technologies, with a focus on AI’s implications for privacy and safety.

Key research areas: Pfefferkorn plans to explore several critical aspects of AI’s impact on privacy, safety, and civil liberties.

  • She will investigate the privacy implications of moving AI to on-device processing, particularly concerning communications encryption.
  • Another area of focus is understanding how AI might be leveraged to expand surveillance, and what strategies can prevent privacy-intrusive applications of the technology.
  • Pfefferkorn also aims to explore how AI can be regulated to respect civil liberties while mitigating its harmful uses, building on her previous work on abusive uses of AI in court evidence and on child sexual abuse material (CSAM).

Notable achievements: Pfefferkorn’s work has made significant contributions to understanding the legal and societal implications of emerging technologies.

  • Her 2020 law journal article on the impact of deepfakes on evidentiary proceedings in courts has been widely cited, helping judges and litigators prepare for this challenge.
  • She argues that existing frameworks for authenticating evidence can be applied to deepfakes, viewing them as a new iteration of an old problem rather than requiring entirely new rules.
  • Pfefferkorn’s work also anticipated the “liar’s dividend,” in which individuals dismiss genuine evidence as fake, a tactic that has since appeared in high-profile cases.

AI-generated CSAM research: Pfefferkorn’s recent paper on AI-generated child sexual abuse material (CSAM) has drawn significant attention from policymakers and government agencies.

  • The paper, published in February, has reached audiences at the Department of Justice, the Federal Trade Commission, and the White House Office of Science and Technology Policy.
  • Pfefferkorn accurately predicted that federal obscenity law would be used to prosecute creators of AI-generated CSAM, a prediction borne out by a recent federal indictment.

Approach to AI regulation: Pfefferkorn emphasizes the importance of considering existing legal frameworks when developing new regulations for AI technologies.

  • She advises policymakers to first examine whether existing laws can be applied to new technological challenges before creating new legislation.
  • Pfefferkorn advocates for “future-proofing” statutes and regulations by using clear yet general language that can flexibly apply to future technological developments.
  • Her approach involves analyzing constitutional constraints and existing laws to help policymakers navigate the complex landscape of AI regulation.

Bridging technical and legal expertise: Though Pfefferkorn does not have a technical background, her legal training and experience in technology and civil liberties enable her to communicate complex concepts effectively to the general public.

  • Her work at Wilson Sonsini, focusing on internet law, consumer privacy cases, and Section 230 issues, gives her insight into both the counseling and litigation sides of emerging-technology practice.
  • This blend of legal expertise and an ability to explain technical concepts in accessible terms positions Pfefferkorn as an important voice in the ongoing dialogue about AI governance and its societal impacts.

Looking ahead: As AI continues to evolve, Pfefferkorn’s work at Stanford HAI will play a crucial role in shaping policies that foster innovation while safeguarding civil liberties and individual rights.

  • Her research will likely contribute to the development of more nuanced and effective AI governance frameworks that can adapt to rapidly changing technological landscapes.
  • By focusing on the intersection of AI, privacy, and civil liberties, Pfefferkorn’s work may help policymakers and technologists alike in creating AI systems that respect and protect fundamental human rights.
