
AI governance and civil liberties: Riana Pfefferkorn, a new policy fellow at the Stanford Institute for Human-Centered AI, is studying how AI governance can protect people’s rights while mitigating harmful uses of the technology.

  • Pfefferkorn’s research covers a range of topics, including government approaches to encryption and digital surveillance, generative AI and online safety, and court evidence and trust.
  • She blends legal expertise with a commitment to the public interest: she has advised startups, represented major tech companies, and clerked for a federal judge.
  • At Stanford HAI, she will continue to bring law and policy analysis to social issues raised by emerging technologies, with a focus on AI’s implications for privacy and safety.

Key research areas: Pfefferkorn plans to explore several critical aspects of AI’s impact on privacy, safety, and civil liberties.

  • She will investigate the privacy implications of moving AI to on-device processing, particularly concerning communications encryption.
  • Another area of focus will be understanding how AI might be leveraged for increased surveillance and developing strategies to prevent privacy-intrusive applications of AI.
  • Pfefferkorn also aims to examine how AI can be regulated in ways that respect civil liberties while mitigating harmful uses, building on her previous work on abusive applications of AI in court evidence and child sexual abuse material (CSAM).

Notable achievements: Pfefferkorn’s work has made significant contributions to understanding the legal and societal implications of emerging technologies.

  • Her 2020 law journal article on the impact of deepfakes on evidentiary proceedings in courts has been widely cited, helping judges and litigators prepare for this challenge.
  • She argues that existing frameworks for authenticating evidence can be applied to deepfakes, viewing them as a new iteration of an old problem rather than requiring entirely new rules.
  • Her work also anticipated the “liar’s dividend,” in which individuals claim that real evidence is fake, a tactic that has since appeared in high-profile cases.

AI-generated CSAM research: Pfefferkorn’s recent paper on AI-generated child sexual abuse material has drawn significant attention from policymakers and government agencies.

  • The paper, published in February, has reached audiences at the Department of Justice, Federal Trade Commission, and the White House Office of Science and Technology Policy.
  • Pfefferkorn accurately predicted that federal obscenity law would be used to prosecute creators of AI-generated CSAM, which was demonstrated in a recent federal indictment.

Approach to AI regulation: Pfefferkorn emphasizes the importance of considering existing legal frameworks when developing new regulations for AI technologies.

  • She advises policymakers to first examine whether existing laws can be applied to new technological challenges before creating new legislation.
  • Pfefferkorn advocates for “future-proofing” statutes and regulations by using clear yet general language that can flexibly apply to future technological developments.
  • Her approach involves providing analysis of constitutional constraints and existing laws to help policymakers navigate the complex landscape of AI regulation.

Bridging technical and legal expertise: Although Pfefferkorn does not have a technical background, her legal training and experience in technology and civil liberties enable her to communicate complex concepts effectively to the general public.

  • Her work at Wilson Sonsini, focusing on internet law, consumer privacy cases, and Section 230 issues, provides valuable insight into both counseling and litigation aspects of emerging technologies.
  • This blend of legal expertise and the ability to explain technical concepts in accessible terms positions Pfefferkorn as an important voice in the ongoing dialogue about AI governance and its societal impacts.

Looking ahead: As AI continues to evolve, Pfefferkorn’s work at Stanford HAI will help shape policies that foster innovation while safeguarding civil liberties and individual rights.

  • Her research will likely contribute to the development of more nuanced and effective AI governance frameworks that can adapt to rapidly changing technological landscapes.
  • By focusing on the intersection of AI, privacy, and civil liberties, Pfefferkorn’s work may help policymakers and technologists alike in creating AI systems that respect and protect fundamental human rights.
