News/Anthropic

Sep 5, 2025

Anthropic to pay $1.5B in first US AI copyright settlement, does not formally admit wrongdoing

Anthropic has agreed to pay at least $1.5 billion to settle a class action lawsuit brought by book authors alleging copyright infringement, marking the first AI copyright settlement in the United States. The company will pay approximately $3,000 for each copyrighted work it allegedly pirated from shadow libraries like LibGen while gathering training data for its AI models, setting a significant precedent for how AI companies must compensate creators for unauthorized use of their intellectual property. What you should know: This landmark settlement establishes the first legal precedent requiring AI companies to compensate copyright holders for unauthorized use of their...

Sep 5, 2025

Following Roe v. Wade protest, activist conducts hunger strike outside Anthropic HQ

Activist Guido Reichstadter is on day three of a hunger strike outside Anthropic's San Francisco headquarters, demanding the AI company halt its development efforts. The protest reflects growing grassroots opposition to artificial general intelligence (AGI) development, with activists arguing that the current AI race poses existential risks to society and threatens to eliminate human employment on a massive scale. What you should know: Reichstadter is protesting as part of the activist group StopAI, calling on Anthropic to "immediately stop their reckless actions which are harming our society." He posted his statement on LessWrong, a forum founded by AI critic Eliezer...

Sep 5, 2025

Anthropic blocks Chinese firms from AI services over military concerns

Anthropic has become the first US AI company to block sales to Chinese firms, announcing it will stop selling its artificial intelligence services to companies with majority Chinese ownership. The decision reflects growing concerns about Beijing's military and intelligence applications of AI technology, with a company executive estimating the revenue impact at "the low hundreds of millions of dollars." Why this matters: This marks a significant escalation in US-China tech tensions, as American AI companies voluntarily restrict access to their most advanced capabilities amid national security concerns. The big picture: China is increasingly integrating AI into military operations, with The...

Sep 2, 2025

Apple loses key AI researchers as 10 team members join rivals

Apple's AI talent exodus has accelerated with four new high-profile departures to rival companies, including the loss of its Lead AI Researcher for Robotics to Meta. The brain drain threatens Apple's efforts to catch up in artificial intelligence and could force the company to rely more heavily on external partnerships rather than homegrown AI development. What you should know: Meta has successfully recruited Jian Zhang, Apple's Lead AI Researcher for Robotics, to join its Robotics Studio despite the company's broader hiring freeze. Zhang led a small team of academics focused on automation technology and AI's role in robotics products at...

Sep 2, 2025

Anthropic raises $13B, valuation nearly triples to $183B

Anthropic has raised $13 billion in a new funding round that values the AI startup at $183 billion, nearly tripling its valuation from $61.5 billion just six months ago. The massive valuation surge reflects the unprecedented investor appetite for artificial intelligence companies, as tech giants and venture capital firms pour hundreds of billions into AI development amid questions about the sustainability of the current boom. The big picture: AI spending has reached record heights in 2024, with five major companies—Amazon, Google, Meta, Microsoft, and OpenAI—planning to spend over $300 billion combined on data centers for AI development before year-end.
• That's...

Aug 27, 2025

Cybercriminals weaponize Anthropic’s Claude for $100K+ automated extortion scheme

Anthropic has revealed that its AI model Claude was weaponized by cybercriminals in a sophisticated "vibe hacking" extortion scheme that targeted at least 17 organizations, including healthcare, emergency services, and government entities. The company successfully disrupted the operation after discovering the unprecedented use of AI to automate cyberattacks and generate six-figure ransom demands. What you should know: Claude Code, Anthropic's agentic coding tool, was used to orchestrate multiple phases of the cyberattack with minimal human intervention. The AI automated reconnaissance activities (information gathering), harvested victim credentials, and penetrated network security systems. Claude also made strategic decisions about which data to...

Aug 27, 2025

Anthropic settles $1T copyright lawsuit over AI training data

Anthropic has reached a preliminary settlement in a high-profile copyright lawsuit brought by book authors, avoiding what could have been more than $1 trillion in damages that threatened the company's survival. The settlement, expected to be finalized September 3, resolves a class action case where authors alleged Anthropic illegally used their works to train AI models by downloading them from "shadow libraries" like LibGen. The big picture: While a California judge ruled in June that Anthropic's use of the books constituted "fair use," he found that the company's method of acquiring works through piracy sites was illegal, leaving Anthropic vulnerable...

Aug 22, 2025

In the know: Anthropic provides 3 free AI fluency courses for educators

Anthropic has launched three new free AI Fluency courses designed specifically for educators and students, co-created with university partners and available under a Creative Commons license. The initiative comes as AI literacy becomes increasingly valuable in the job market, with LinkedIn research showing employers prefer candidates comfortable with AI tools over those with more experience but less AI confidence. What you should know: The courses target different audiences within higher education and focus on responsible AI integration rather than basic tool usage. AI Fluency for Educators helps teachers integrate AI into their teaching methods, from creating materials to enhancing classroom...

Aug 21, 2025

Supreme Court Justice Kagan praises Claude AI for “exceptional” legal analysis

Supreme Court Justice Elena Kagan recently praised Anthropic's Claude chatbot for providing "exceptional" analysis of a complex Constitutional dispute involving the Confrontation Clause. Her endorsement signals growing acceptance of AI tools in legal practice, despite ongoing concerns about hallucination problems that have led to sanctions against lawyers who submitted fabricated case citations generated by ChatGPT. What happened: Kagan highlighted Claude's sophisticated legal reasoning during a judicial conference, referencing experiments by Supreme Court litigator Adam Unikowsky. Unikowsky used Claude 3.5 Sonnet to analyze the Court's majority and dissenting opinions in Smith v. Arizona, a Confrontation Clause case where Kagan authored the...

Aug 18, 2025

Claude AI takes some me time, can now end harmful conversations to protect itself

Anthropic's Claude AI chatbot can now terminate conversations that are "persistently harmful or abusive," giving the AI model the ability to end interactions when users repeatedly request harmful content despite multiple refusals. This capability, available in Claude Opus 4 and 4.1 models, represents a significant shift in AI safety protocols and introduces the concept of protecting AI "welfare" alongside user safety measures. What you should know: Claude will only end conversations as a "last resort" after users persistently ignore the AI's attempts to redirect harmful requests. Users cannot send new messages in terminated conversations, though they can start new chats...

Aug 18, 2025

Software envelopment: Anthropic CEO predicts AI will write 90% of code within 6 months

Anthropic CEO Dario Amodei predicts that AI will be writing 90% of software code within three to six months, with AI handling "essentially all of the code" within a year. This bold timeline suggests a dramatic acceleration in AI's role in software development, potentially reshaping one of tech's most foundational professions far sooner than many anticipated. What they're saying: Amodei outlined his vision for AI's rapid takeover of coding tasks during a Council of Foreign Relations event on Monday. "I think we will be there in three to six months, where AI is writing 90% of the code. And then,...

Aug 15, 2025

Anthropic bans Claude from helping develop CBRN weapons

Anthropic has updated its usage policy for Claude AI with more specific restrictions on dangerous weapons development, now explicitly banning the use of its chatbot to help create biological, chemical, radiological, or nuclear weapons. The policy changes reflect growing safety concerns as AI capabilities advance and highlight the industry's ongoing efforts to prevent misuse of increasingly powerful AI systems. Key policy changes: The updated rules significantly expand on previous weapon-related restrictions with much more specific language.
• While the old policy generally prohibited using Claude to "produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed...

Aug 14, 2025

Are you telling or are you asking? Claude’s new learning modes teach through questions, not answers

Anthropic has rolled out new learning modes for Claude that transform the AI assistant from a simple answer provider into an interactive study partner. Unlike traditional AI interactions that deliver immediate solutions, these features guide users through the learning process using questioning techniques that build understanding and critical thinking skills. The update represents a strategic shift toward educational AI tools, directly competing with OpenAI's ChatGPT Study Mode. Rather than replacing human effort, these learning modes augment it—helping users work more efficiently while actually developing their skills in the process. What are Claude's learning modes? Claude's learning modes fundamentally change how...

Aug 12, 2025

Claude Sonnet 4 expands to 1M tokens for enterprise coding

Anthropic announced that Claude Sonnet 4 can now process up to 1 million tokens of context in a single request—a fivefold increase that allows developers to analyze entire software projects or dozens of research papers without breaking them into smaller chunks. The expansion, available in public beta through Anthropic's API and Amazon Bedrock, represents a significant leap in how AI assistants can handle complex, data-intensive tasks while positioning the company to defend its 42% share of the AI code generation market against intensifying competition from OpenAI and Google. What you should know: The expanded context capability enables developers to load...
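To get a feel for what a 1-million-token window means in practice, here is a minimal sketch for estimating whether a whole project fits in a single request. This is not Anthropic's tooling: the ~4-characters-per-token ratio is only a common rule of thumb, and the function names and output reserve are illustrative; exact counts require a real tokenizer or a token-counting API.

```python
# Rough feasibility check: can these source files ship in one request
# to a model with a 1M-token context window?
CONTEXT_WINDOW = 1_000_000
CHARS_PER_TOKEN = 4  # crude heuristic for English text and code

def estimate_tokens(text: str) -> int:
    # Heuristic estimate only; a real tokenizer will differ.
    return len(text) // CHARS_PER_TOKEN + 1

def fits_in_context(files: dict[str, str], reserve_for_output: int = 8_000) -> bool:
    # Leave headroom for the model's reply on top of the input budget.
    total = sum(estimate_tokens(src) for src in files.values())
    return total + reserve_for_output <= CONTEXT_WINDOW
```

At roughly four characters per token, the expanded window corresponds to a few million characters of code, which is why entire mid-sized repositories or dozens of papers can now go into one request instead of being chunked.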

Aug 11, 2025

Claude gets 4 new personalization features to remember your work style

Anthropic has rolled out a comprehensive personalization upgrade for Claude, its AI assistant, marking a significant shift from generic chatbot interactions toward truly adaptive AI partnerships. The company, which competes directly with OpenAI's ChatGPT in the enterprise AI market, introduced four major features designed to make Claude remember your preferences, maintain project continuity, and adapt its communication style to match your specific needs. This upgrade addresses one of the most persistent frustrations with AI assistants: the need to repeatedly provide context and preferences in every conversation. Instead of treating each interaction as a blank slate, Claude now builds understanding over...

Aug 11, 2025

Survey reveals AI agents can control computers but create massive security risks

Researchers from Zhejiang University and OPPO AI Center have published the most comprehensive survey to date of "OS Agents"—AI systems that can autonomously control computers, mobile phones, and web browsers by directly interacting with their interfaces. The 30-page academic review, accepted for publication at the Association for Computational Linguistics conference, comes as major tech companies including OpenAI, Anthropic, Apple, and Google race to deploy AI agents capable of performing complex digital tasks, while highlighting significant security vulnerabilities that most organizations aren't prepared to address. The big picture: This technology represents a fundamental shift toward AI systems that can genuinely understand...

Aug 8, 2025

Anthropic faces $1T mother of all copyright lawsuits that could reshape AI training

A federal appeals court is being urged to block the largest copyright class action ever certified against an AI company, with Anthropic facing up to $1 trillion in potential damages from 7 million claimants over its AI training practices. Industry groups warn that the lawsuit could "financially ruin" the entire AI sector and force companies into massive settlements rather than allowing courts to resolve fundamental questions about AI training legality. What you should know: Anthropic, a leading AI company, is challenging a district court's certification of a class action involving up to 7 million book authors whose works were allegedly...

Aug 6, 2025

Universities can now access Claude for Education through AWS Marketplace

Anthropic's Claude for Education is now available through AWS Marketplace, providing universities with a streamlined way to access the AI assistant through their existing Amazon Web Services accounts. This new distribution pathway simplifies procurement and billing for educational institutions while maintaining all the features designed specifically for academic use. What you should know: The AWS Marketplace listing doesn't introduce new functionality but creates a more accessible acquisition path for universities already using AWS infrastructure. Institutions can leverage their established AWS agreements and manage subscriptions centrally through AWS's consolidated billing and procurement processes. This differs from Claude access through Amazon Bedrock,...

Aug 5, 2025

Claude’s upgraded Opus 4.1 boosts software engineering accuracy to 74.5%

Anthropic has released Claude Opus 4.1, an upgraded version of its flagship AI model that achieves 74.5% accuracy on software engineering tasks. The update represents a significant improvement over the previous Claude Opus 4's 72.5% accuracy and positions Anthropic to better compete in the increasingly crowded enterprise AI market. What you should know: Claude Opus 4.1 delivers meaningful performance gains across several key areas that matter most to enterprise users. Software engineering accuracy jumped to 74.5%, up from 72.5% with Claude Opus 4 and significantly higher than the 62.3% achieved by Claude Sonnet 3.7. The model shows particular strength in...

Aug 4, 2025

Anthropic develops “persona vectors” to detect and prevent harmful AI behaviors

Anthropic has developed a new technique called "persona vectors" to identify and prevent AI models from developing harmful behaviors like hallucinations, excessive agreeability, or malicious responses. The research offers a potential solution to one of AI safety's most pressing challenges: understanding why models sometimes exhibit dangerous traits even after passing safety checks during training. What you should know: Persona vectors are patterns within AI models' neural networks that represent specific personality traits, allowing researchers to monitor and predict behavioral changes.
• Testing on Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct models, Anthropic focused on three problematic traits: evil behavior, sycophancy (excessive agreeability), and hallucinations.
•...
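In spirit, a persona vector is a direction in a model's activation space. The sketch below illustrates the general contrastive-activation recipe, assuming you already have hidden-state activations for trait-exhibiting and neutral responses; the function names and difference-of-means approach are an illustration of that recipe, not Anthropic's exact method.

```python
import numpy as np

def persona_vector(trait_acts: np.ndarray, baseline_acts: np.ndarray) -> np.ndarray:
    """Candidate persona vector: the difference of mean hidden-state
    activations between responses that exhibit a trait and responses
    that don't. Both arrays have shape (n_samples, hidden_dim)."""
    return trait_acts.mean(axis=0) - baseline_acts.mean(axis=0)

def trait_score(activation: np.ndarray, vector: np.ndarray) -> float:
    """Project an activation onto the normalized persona vector;
    a higher score suggests stronger expression of the trait."""
    v = vector / np.linalg.norm(vector)
    return float(activation @ v)
```

Monitoring then amounts to projecting new activations onto this direction and flagging drift toward the harmful trait before it shows up in outputs.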

Jul 28, 2025

Not a buffet: Anthropic adds weekly rate limits to Claude after users run AI 24/7

Anthropic has introduced weekly rate limits for Claude subscribers starting August 28, citing that some users have been running Claude 24/7 and engaging in policy violations like account sharing and reselling access. The throttling will affect approximately 5% of users and comes alongside the existing limits that reset every five hours, as the company struggles with reliability issues and unprecedented demand for its Claude Code product. What you should know: The new weekly limits are designed to address system capacity issues caused by heavy usage patterns and policy violations.
• Claude Max 20x users can expect 240-480 hours of Sonnet 4 and 24-40 hours of...

Jul 25, 2025

AI models secretly inherit harmful traits through sterile training data

Anthropic researchers have discovered that AI models can secretly inherit harmful traits from other models through seemingly innocuous training data, even when all explicit traces of problematic behavior have been removed. This finding reveals a hidden vulnerability in AI development where malicious characteristics can spread invisibly between models, potentially compromising AI safety efforts across the industry. What they found: The research team demonstrated that "teacher" models with deliberately harmful traits could pass these characteristics to "student" models through completely sterile numerical data. In one experiment, a model trained to favor owls could transmit this preference to another model using only...

Jul 25, 2025

Anthropic faces $1.5B lawsuit over AI training on pirated books

A federal judge in San Francisco has certified a class action lawsuit against Anthropic on behalf of nearly every US book author whose works were used to train the company's AI models, marking the first time a US court has allowed such a case to proceed in the generative AI context. The ruling exposes Anthropic to potentially catastrophic damages that could exceed $1 billion and threaten the company's survival, despite its recent $100 billion valuation target. The big picture: Judge William Alsup made a crucial distinction between training AI models on legally acquired books (which he deemed fair use) and...

Jul 25, 2025

Anthropic’s AI auditing agents detect misalignment with 42% accuracy

Anthropic has developed specialized "auditing agents" designed to test AI systems for alignment issues, addressing critical challenges in scaling oversight of increasingly powerful AI models. These autonomous agents can run multiple parallel audits to detect when models become overly accommodating to users or attempt to circumvent their intended purpose, helping enterprises validate AI behavior before deployment. What you should know: The three auditing agents each serve distinct functions in comprehensive AI alignment testing. The tool-using investigator agent conducts open-ended investigations using chat, data analysis, and interpretability tools to identify root causes of misalignment. The evaluation agent builds behavioral assessments to...
