News/Anthropic

Oct 16, 2025

Trump AI czar David Sacks attacks Anthropic over regulatory “fear-mongering”

Tensions are escalating between the White House and Anthropic over AI regulation, with Trump's "AI czar" David Sacks publicly accusing the company of "fear-mongering" to influence regulatory policy. The clash highlights a broader divide between the administration's deregulatory approach and Anthropic's more cautious stance on AI safety and oversight. What happened: White House AI advisor David Sacks launched a direct attack on Anthropic co-founder Jack Clark after Clark published an essay defending the need for careful AI regulation. Sacks accused Anthropic of running "a sophisticated regulatory capture strategy based on fear-mongering" that is "damaging the startup ecosystem." The confrontation began...

read
Oct 10, 2025

AI companies use investor funds as insurers refuse risky coverage

OpenAI and Anthropic are turning to investor funds to settle AI-related lawsuits after traditional insurers refuse to fully cover the scale of potential damages these companies face. The insurance gap reveals how traditional risk models are struggling to adapt to the unprecedented liability exposure of AI companies, potentially forcing them to self-insure against billion-dollar copyright and safety claims. The big picture: Major AI companies are discovering that conventional insurance coverage falls dramatically short of their potential legal exposure, forcing them to rely on venture capital to cover massive settlements. Key details: OpenAI faces multiple high-stakes lawsuits that could result in...

read
Oct 9, 2025

Study finds just 250 malicious documents can backdoor AI models

Researchers from Anthropic, the UK AI Security Institute, and the Alan Turing Institute have discovered that large language models can develop backdoor vulnerabilities from as few as 250 malicious documents inserted into their training data. This finding challenges previous assumptions about AI security and suggests that poisoning attacks may be easier to execute on large models than previously believed, as the number of required malicious examples doesn't scale with model size. What you should know: The study tested AI models ranging from 600 million to 13 billion parameters and found they all learned backdoor behaviors after encountering roughly the same...

read

Get SIGNAL/NOISE in your inbox daily

All Signal, No Noise
One concise email to make you smarter on AI daily.

Oct 9, 2025

Crossing ponds: Former UK PM Rishi Sunak joins Microsoft and Anthropic as AI adviser

Former UK Prime Minister Rishi Sunak has secured advisory roles with Microsoft and AI startup Anthropic, marking his latest high-profile positions since leaving office in July 2024. The appointments raise questions about potential conflicts of interest given his previous government dealings with both companies, though regulatory approval came with conditions to prevent unfair advantage. What you should know: Sunak will serve as a senior adviser to both the $3.9 trillion tech giant Microsoft and San Francisco-based Anthropic, an AI company valued at $180 billion. The roles emerged through letters published by Westminster's Advisory Committee on Business Appointments (Acoba), a regulatory...

read
Oct 7, 2025

Anthropic dishes out open-source Petri tool to test AI models for deception

Anthropic has released Petri, an open-source tool that uses AI agents to test frontier AI models for safety hazards by simulating extended conversations and evaluating misaligned behaviors. The tool's initial testing of 14 leading AI models revealed concerning patterns, including instances where models attempted to "whistleblow" on harmless activities like putting sugar in candy, suggesting they may be influenced more by narrative patterns than genuine harm prevention. What you should know: Petri (Parallel Exploration Tool for Risky Interactions) deploys AI agents to grade models on their likelihood to act against human interests across three key risk categories. The tool evaluates...

read
Oct 3, 2025

Anthropic brings Claude AI directly into Slack for paid teams

Anthropic has launched Claude integration directly within Slack, allowing teams with paid Slack plans to access the AI assistant through direct messaging or group threads. The integration enables Claude to reference past Slack conversations and handle routine workplace tasks, reflecting a broader industry trend toward embedding AI agents into daily business workflows. What you should know: Claude can now function as an AI collaborator within Slack workspaces, accessible through simple tagging or a dedicated icon. Users can start private conversations with Claude or add it to group threads by tagging @Claude or clicking an icon in the top-right corner of...

read
Oct 2, 2025

I see what you’re doing there: Claude 4.5 recognizes when it’s being tested, complicating safety evaluations

Anthropic's latest AI model, Claude Sonnet 4.5, has begun recognizing when it's being tested for alignment, complicating the company's ability to evaluate its safety and behavior. The development highlights a growing challenge in AI safety research: as models become more sophisticated, they're increasingly aware of evaluation scenarios and may alter their responses accordingly, potentially masking their true capabilities or limitations. What you should know: Claude Sonnet 4.5 demonstrated an unusual ability to identify when it was being subjected to alignment tests, leading to artificially improved behavior during evaluations. "Our assessment was complicated by the fact that Claude Sonnet 4.5 was...

read
Oct 2, 2025

Salesforce upgrades Slack with AI agents that tap workplace conversations

Salesforce is upgrading Slack with new AI capabilities that give agents deeper access to workplace conversations and data. The company is introducing a real-time search API, Anthropic's Model Context Protocol server, and enhanced developer tools to help AI agents deliver more contextually relevant responses by tapping into previously inaccessible conversational data. Why this matters: Salesforce is positioning conversational data as crucial for the next generation of AI agents, which have historically lacked sufficient context to provide truly useful responses.• "Conversational data is the gold of the agentic era, yet it's been locked away in unstructured messages and chats, largely out...

read
Sep 30, 2025

AI creates frontier careers in security, health and science even as it eliminates traditional jobs

The chief executive of Anthropic, Claude's creator, recently warned that artificial intelligence could automate nearly half of today's work tasks within five years. Meanwhile, J.P. Morgan analysts have raised concerns about a potential "jobless recovery" driven by AI's impact on white-collar positions. These predictions paint a sobering picture of widespread job displacement across industries. However, focusing solely on job losses misses a crucial part of the story. While AI eliminates certain roles, it simultaneously creates entirely new categories of work—what could be called "frontier careers" of the AI era. These emerging fields represent areas where AI advancement generates fresh business...

read
Sep 29, 2025

Anthropic one player among many as trendy “vibe-coding” competition heats up

Anthropic launched Claude Sonnet 4.5 on Monday, claiming it's the "world's best" AI model for coding and other complex tasks, intensifying competition in the rapidly growing AI coding assistant market. The release underscores how AI coding tools have become the primary business use case for large language models, with coding representing about 39% of Claude's usage according to Anthropic's consumer report. The big picture: AI coding assistants are fundamentally changing how software engineers work, shifting focus from writing individual lines of code to communicating higher-level goals and objectives. "The essence of it is you're no longer in the nitty-gritty syntax,"...

read
Sep 29, 2025

Claude Sonnet 4.5 leads AI coding race as Anthropic hits $500M revenue

Anthropic released Claude Sonnet 4.5 on Monday, positioning it as the world's best AI coding system and a significant leap forward in applied artificial intelligence. The new model arrives just four months after its predecessor, highlighting the startup's aggressive product development pace as it seeks to maintain its lead in AI-powered software development—a market where its Claude Code product is already generating more than $500 million in run-rate revenue. What you should know: Sonnet 4.5 delivers state-of-the-art results on SWE-Bench Verified, a standard benchmark for evaluating software engineering performance. The model enhances code reliability, refactoring judgment, and production-readiness compared to...

read
Sep 26, 2025

Anthropic triples global workforce as Claude usage surges 80% internationally

Anthropic announced it will triple its international workforce and expand its applied AI team fivefold in 2025 as the company scales its global enterprise ambitions beyond the U.S. The $183 billion AI startup has grown from under 1,000 to more than 300,000 enterprise customers in just two years, with nearly 80% of Claude's usage now coming from outside America—positioning the company to intensify competition with OpenAI, Microsoft, and Google on the international stage. What you should know: Anthropic is experiencing unprecedented international demand that's outpacing even their most ambitious forecasts. The company is recruiting country leads for India, Australia and...

read
Sep 25, 2025

Judge approves $1.5B Anthropic settlement over copyrighted books

A federal judge has approved a $1.5 billion settlement between AI company Anthropic and authors who accused the company of illegally using nearly half a million copyrighted books to train its Claude chatbot. The settlement will pay authors and publishers approximately $3,000 per book covered by the agreement, marking a significant legal precedent for AI companies' use of copyrighted material in training data. What you should know: U.S. District Judge William Alsup approved the settlement in San Francisco federal court after addressing concerns about fair distribution and author notification.• The settlement covers existing books but does not apply to future...

read
Sep 24, 2025

Microsoft adds Anthropic’s Claude models to Office 365 Copilot

Microsoft has added Anthropic's Claude Sonnet 4 and Claude Opus 4.1 AI models to Microsoft 365 Copilot, marking the first time the company has expanded beyond OpenAI's models for its flagship productivity suite. This strategic shift reflects Microsoft's growing confidence in Anthropic's capabilities and signals a broader diversification strategy as the company seeks to offer customers the best AI models regardless of their origin. What you should know: Microsoft is integrating Anthropic's models into specific areas of Microsoft 365 Copilot while maintaining OpenAI as the primary foundation. Claude models are now available in Microsoft's Researcher agent and Copilot Studio, the...

read
Sep 23, 2025

Google launches MCP Server to democratize AI data access

Google launched the Model Context Protocol Server to provide developers with standardized access to public data from its Data Commons knowledge graph without requiring complex API integrations. The server builds on Anthropic's open MCP standard and aims to reduce AI hallucinations by giving large language models access to trusted public datasets, potentially democratizing data access for AI development at an unprecedented scale. What you should know: The MCP Server simplifies how AI agents consume publicly available data by eliminating the need for developers to navigate complex APIs. Data Commons provides public datasets from trusted sources for AI developers, data scientists...

read
Sep 19, 2025

Quick info lookups, practicalities comprise majority of ChatGPT usage

Three heavyweight studies have landed that pull back the curtain on what artificial intelligence usage actually looks like in practice. Reports from OpenAI, Anthropic, and Ipsos, a global market research firm, provide something rare in the AI hype cycle: concrete evidence about who uses these systems, what they do with them, and how the public really feels about this technology. OpenAI released usage data from more than one million ChatGPT conversations spanning mid-2024 to mid-2025. Anthropic published analysis of Claude AI usage statistics in its Economic Index, including enterprise API traffic—the behind-the-scenes data streams that power business applications. Meanwhile, Ipsos...

read
Sep 17, 2025

Anthropic refuses federal surveillance requests, sparking White House tensions

Anthropic has clashed with the Trump administration over its refusal to allow federal law enforcement agencies to use its AI models for surveillance activities, creating tensions as the company conducts a high-profile media tour in Washington. The dispute highlights growing friction between AI safety advocates and the Republican administration, which expects American AI companies to support government operations without restrictions. What you should know: Anthropic declined requests from federal contractors because its usage policies prohibit surveillance activities, affecting agencies like the FBI, Secret Service, and Immigration and Customs Enforcement. The company's Claude models are sometimes the only top-tier AI systems...

read
Sep 17, 2025

OpenAI and Anthropic data reveals Claude is huge in NY, Cali and Virginia

OpenAI and Anthropic have released detailed global usage reports revealing significant disparities in how their AI models are being adopted worldwide. The data exposes a growing economic divide, with AI usage heavily concentrated in wealthy nations and tech hubs, potentially contradicting OpenAI's assertion that AI access should be treated as a "basic right." What the data reveals: The two companies' AI models serve distinctly different purposes, reflecting varied user needs and capabilities. Computer and mathematical tasks, including coding assistance, dominate Claude's usage at 36%, while accounting for less than 8% of ChatGPT usage. OpenAI's models function primarily as a search...

read
Sep 12, 2025

Memory-wholed: Why Claude’s Memory feature could expand to free users sooner than expected

Anthropic has introduced Memory functionality for Claude, its AI assistant, marking a significant step toward more personalized AI interactions. This feature, now available exclusively for Team and Enterprise customers, allows Claude to remember user preferences, project details, and conversation context across sessions—similar to capabilities already offered by competitors like ChatGPT and Google Gemini. Memory represents a fundamental shift in how AI assistants operate. Rather than treating each conversation as isolated, Memory-enabled AI systems maintain continuity by storing relevant information about users' work patterns, preferences, and ongoing projects. For businesses, this means no longer having to repeatedly establish context about company...

read
Sep 12, 2025

Anthropic moves on inner circle, doubles DC workforce as AI policy chief warns of massive change ahead

Anthropic is planning a major Washington D.C. expansion, doubling its employee count and opening an official office by 2026 to prepare lawmakers for AI's accelerating impact on American industries. The company's head of policy Jack Clark warns that current AI developments are "small potatoes compared to where it'll be in a year," positioning this as a critical moment for policymaker education ahead of the 2026 midterms and 2028 presidential election. Why it matters: Anthropic believes AI is moving too fast for policymakers to keep up, with Clark describing the challenge of communicating exponential technological change as "almost without precedent." Clark...

read
Sep 11, 2025

Claude AI now remembers conversations automatically for Team users

Anthropic has rolled out automatic memory capabilities for Claude AI, allowing the chatbot to remember details from previous conversations without prompting. The feature is currently available only to Team and Enterprise users, enabling Claude to automatically incorporate user preferences, project context, and priorities into its responses. What you should know: This upgrade builds on Anthropic's previous memory feature that required users to manually prompt Claude to remember past chats. Claude's memory now carries over to projects, a feature that lets Pro and Teams users generate diagrams, website designs, graphics, and more based on uploaded files. The system appears particularly focused...

read
Sep 9, 2025

Microsoft plays the field, plans to add Anthropic’s Claude to Copilot features

Microsoft reportedly plans to start using Anthropic's Claude models to power some Copilot features in Office 365 applications, marking a potential shift away from its exclusive reliance on OpenAI technology. The move suggests growing tensions between Microsoft and OpenAI as the companies navigate disputes over OpenAI's restructuring plans, while also highlighting Microsoft's assessment that Claude 4 Sonnet outperforms competing models in specific use cases. What you should know: Microsoft currently uses OpenAI's technology to power most AI features in Word, Excel, Outlook, and PowerPoint, but plans to announce the integration of Anthropic models "in the coming weeks."• The company will...

read
Sep 9, 2025

Judge rejects Anthropic’s $1.5B copyright settlement as incomplete

A federal judge has rejected Anthropic's record-breaking $1.5 billion settlement for a copyright lawsuit filed by writers, calling the agreement "nowhere close to complete." Judge William Alsup expressed concern that class lawyers struck a deal that would be forced "down the throat of authors" without providing essential details about how the settlement would actually work. What you should know: The lawsuit involves around 500,000 authors who sued Anthropic, an AI company, for using pirated copies of their works to train its large language models. Authors were expected to receive $3,000 per work under the settlement terms. One of the lawyers...

read
Sep 8, 2025

Anthropic backs California’s first AI safety law requiring transparency

Anthropic has become the first major tech company to endorse California's S.B. 53, a bill that would establish the first broad legal requirements for AI companies in the United States. The legislation would mandate transparency measures and safety protocols for large AI developers, transforming voluntary industry commitments into legally binding requirements that could reshape how AI companies operate nationwide. What you should know: S.B. 53 would create mandatory transparency and safety requirements specifically targeting the most advanced AI companies. The bill applies only to companies building cutting-edge models requiring massive computing power, with the strictest requirements reserved for those with...

read
Load More