Recent work from Anthropic and others claims that LLMs’ chains of thought (CoTs) can be “unfaithful”. These papers make an important point: you cannot take everything in the CoT at face value. However, people often cite these results to conclude that the CoT is useless for analyzing and monitoring AI systems. Here, instead of asking whether the CoT always contains all the information relevant to a model’s decision-making in every problem, we ask whether it contains enough information to let developers monitor models in practice. Our experiments suggest that it might.