California has passed Assembly Bill 2013, requiring generative AI developers to publicly disclose their training data starting January 1, 2026. The Generative Artificial Intelligence Training Data Transparency Act is one of the most comprehensive U.S. rules on AI disclosure, potentially strengthening copyright holders' claims in ongoing litigation while raising compliance burdens for companies operating in the state.
What you should know: The law mandates detailed public disclosures about datasets used to train AI models, including sources, availability, size, and whether copyrighted or personal data are included.
- Developers must publish information on their websites about data sources, whether datasets are publicly available or proprietary, their size and type, and the time period during which data was collected.
- Bloomberg Law described AB 2013 as among the most comprehensive U.S. rules on AI disclosure, requiring companies to publish details about the data that trains their models.
- Compliance presents significant challenges, particularly for models that have evolved over time using data from diverse sources that may lack clear ownership records or licensing information.
Why this matters: The disclosure requirements could make it easier to trace which datasets were used to train a given model, potentially strengthening copyright holders' claims in ongoing litigation.
- Generative AI firms are already navigating lawsuits alleging that models were trained on copyrighted works without permission.
- Researchers argue that transparency could provide a foundation for independent audits and risk assessments of AI systems.
- California’s regulatory approach often shapes national technology policy, from privacy rules to emissions standards, giving this law significance beyond state borders.
Industry pushback: Business and technology executives have raised concerns about the law's potential impact on innovation and development.
- According to The Wall Street Journal, executives warned the bill could have a “chilling effect” on development in California, with startups particularly exposed to compliance burdens.
- Some analysts argue California’s targeted strategy may prove more durable than broader regulatory approaches.
- Microsoft’s Chief Scientist Eric Horvitz offered a contrasting view, suggesting that oversight “done properly” can accelerate AI advances by encouraging responsible data use and building public trust.
The big picture: California’s law signals that AI transparency may transition from voluntary best practice to mandatory requirement across industries.
- The broader policy debate centers on whether transparency alone will be sufficient for AI governance.
- Colorado has delayed its AI act implementation to June 2026, while financial institutions are independently moving toward clearer safeguards and responsible scaling practices.
- If the disclosure requirements prove workable, other states could follow suit, potentially creating a national standard for AI transparency.