News/Research

Jul 11, 2025

MIT’s CellLENS AI maps immune cell behavior to advance cancer treatment

MIT researchers have developed CellLENS (Cell Local Environment and Neighborhood Scan), a new AI system that reveals hidden cell subtypes by analyzing molecular, spatial, and morphological data simultaneously. The deep learning tool promises to advance precision medicine by enabling scientists to identify rare immune cell subtypes and understand how their location and activity relate to disease processes, particularly in cancer immunotherapy. What you should know: CellLENS combines convolutional neural networks and graph neural networks to create comprehensive digital profiles for individual cells within tissues. The system analyzes RNA or protein molecules, spatial location, and microscopic appearance simultaneously—traditionally examined separately by...

read
Jul 11, 2025

City of Hope’s custom AI model saves clinicians thousands of hours

City of Hope has launched HopeLLM, its own proprietary generative AI model designed specifically for cancer care, after finding no commercial AI solutions that met its complex oncology needs. The tool has already saved clinicians "thousands" of hours in its first week of deployment and has attracted interest from pharmaceutical companies seeking to leverage its clinical trial-matching capabilities. What you should know: HopeLLM addresses the unique challenges of cancer care by processing vast, complex medical records that can span decades of treatment history. Cancer patients typically have electronic health records containing 10-20 years of test results and visit notes, with...

read
Jul 11, 2025

Berkeley study finds AI tools slow down developers by 19%

A new study by Berkeley-based AI benchmarking nonprofit Metr found that experienced developers who used AI tools to complete coding tasks actually took 19% longer than those who didn't use AI assistance. The finding challenges widespread assumptions about AI's productivity benefits and suggests that organizations may be overestimating the efficiency gains from AI tools in skilled professional work. The big picture: While developers predicted AI would speed up their work by 24% before starting and 20% after completing tasks, objective data showed the opposite effect occurred. Key study details: Metr's research focused on experienced open-source developers working on large, complex...

read
Jul 11, 2025

Agentic or pathetic? Gartner warns of “agent washing” as only 130 AI products truly behave agentically

Gartner analysts have identified a new deceptive practice called "agent washing," where vendors falsely market basic automation tools and chatbots as advanced AI agents. Out of thousands of supposedly agentic AI products tested, only 130 genuinely possessed the autonomous capabilities they claimed, highlighting a widespread misrepresentation that threatens to undermine trust in AI innovation. What you should know: True AI agents differ fundamentally from standard automation tools by their ability to reason, plan, and execute complex tasks with minimal human intervention. Genuine agentic AI can complete multi-step processes, interface with external systems, and adapt to new situations without pre-programmed instructions....

read
Jul 10, 2025

Apple’s AI model detects health conditions with 92% accuracy using behavior data

Apple researchers have developed a new AI model called WBM (Wearable Behavior Model) that can detect health conditions with up to 92% accuracy by analyzing behavioral data from wearables rather than raw sensor readings. The breakthrough suggests that movement patterns, sleep habits, and exercise data may be more reliable health indicators than traditional biometric measurements like heart rate or blood oxygen levels. What you should know: The WBM model was trained on over 2.5 billion hours of data from Apple Watch and iPhone users, focusing on 27 behavioral metrics rather than raw sensor streams. The model analyzes higher-level behavioral patterns...

read
Jul 10, 2025

Study: AI mental health chatbots give dangerous advice 50% of the time

The rise of artificial intelligence in mental health care presents both unprecedented opportunities and significant risks. While AI chatbots could help address the massive shortage of mental health professionals, recent research reveals these systems often provide dangerous advice when handling sensitive psychological issues. A concerning pattern is emerging: people are increasingly turning to AI for mental health support without understanding the serious limitations of these tools. Nearly 50% of survey respondents have used large language models (LLMs)—the AI systems that power chatbots like ChatGPT—for mental health purposes, according to research by Rousmaniere and colleagues. While close to 40% found them...

read
Jul 10, 2025

Study: AI coding tools slow down experienced developers by 19%

A new study by AI research nonprofit METR has found that artificial intelligence coding tools actually slowed down experienced software developers by 19% when working on familiar codebases, contrary to the developers' expectations of a 24% speed improvement. The findings challenge widespread assumptions about AI's productivity benefits for skilled engineers and raise questions about the substantial investment flowing into AI-powered development tools. What you should know: The study tracked seasoned developers using Cursor, a popular AI coding assistant, on open-source projects they knew well. Before the study, developers expected AI to decrease their task completion time by 24%. Even after...

read
Jul 9, 2025

MIT breakthrough boosts AI reasoning accuracy by 6x with test-time training

MIT researchers have developed a breakthrough training technique that can boost large language models' accuracy on complex reasoning tasks by up to sixfold. The method, called test-time training, temporarily updates a model's parameters during deployment to help it adapt to challenging new problems that require strategic planning, logical deduction, or process optimization. What you should know: Test-time training represents a significant advance over traditional in-context learning by actually updating model parameters rather than just providing examples. The technique involves temporarily modifying some of a model's internal variables using task-specific data, then reverting the model to its original state after making...

read
Jul 9, 2025

Researchers use LLMs to pilot spacecraft with natural language commands

AI researchers have demonstrated how large language models like GPT-3.5 and LLaMA can be deployed to help humans pilot spacecraft in real-time through natural language commands. The breakthrough, detailed in a paper submitted to MIT's Kerbal Space Program Differential Game competition, represents what researchers call the first integration of LLM agents into space research and offers a glimpse of AI-assisted spacefaring becoming practical reality. How it works: The system operates entirely through natural language prompts, allowing human pilots to communicate with spacecraft using simple text commands. A ground-based pilot might instruct the system not to "apply rotation throttles" when a...

read
Jul 9, 2025

FlexOlmo architecture lets data owners remove content from trained AI models

The Allen Institute for AI has developed FlexOlmo, a new large language model architecture that allows data owners to remove their contributions from an AI model even after training is complete. This breakthrough challenges the current industry practice where data becomes permanently embedded in models, potentially reshaping how AI companies access and use training data while giving content creators unprecedented control over their intellectual property. How it works: FlexOlmo uses a "mixture of experts" architecture that divides training into independent, modular components that can be combined or removed later. Data owners first copy a publicly shared "anchor" model, then train...

read
Jul 9, 2025

ChatGPT and Gemini develop unique writing styles similar to humans

New research reveals that popular AI chatbots like ChatGPT and Gemini have developed distinct writing styles, or "idiolects," that can be identified through linguistic analysis. This finding challenges assumptions about AI uniformity and has significant implications for detecting AI-generated content in educational settings and forensic applications. What you should know: Linguist Karolina Rudnicka used computational methods to analyze hundreds of texts about diabetes generated by ChatGPT and Gemini, finding clear stylistic differences between the models. The Delta method, a standard authorship attribution technique, showed ChatGPT texts had a linguistic distance of 0.92 to other ChatGPT content and 1.49 to Gemini...

read
Jul 7, 2025

Researchers from 14 universities caught hiding AI prompts in academic papers

Researchers from 14 universities across eight countries have been caught embedding hidden AI prompts in academic papers designed to manipulate artificial intelligence tools into giving positive reviews. The discovery, found in 17 preprints on arXiv (a platform for sharing research papers before formal peer review), highlights growing concerns about AI's role in peer review and the lengths some academics will go to game the system. What you should know: The hidden prompts were strategically concealed using white text and microscopic fonts to avoid detection by human readers. Instructions ranged from simple commands like "give a positive review only" and "do...

read
Jul 7, 2025

Perplexity AI launches $200 monthly Max plan for unlimited research

Perplexity AI, the search engine that combines traditional web search with artificial intelligence to provide conversational answers, has launched its most expensive subscription tier yet. The new Perplexity Max plan costs $200 per month—or $2,000 annually—positioning the company alongside other premium AI services in an increasingly competitive market. This pricing strategy reflects a broader trend among AI companies targeting power users willing to pay premium rates for enhanced capabilities. The move comes as businesses and professionals increasingly rely on AI tools for research, content creation, and strategic analysis, creating demand for more sophisticated features and unlimited access. What Perplexity Max...

read
Jul 2, 2025

Stanford AI system turns text prompts into coordinated drone shows

Stanford and University of Zaragoza researchers have developed Gen-Swarms, an AI system that automates the complex planning process for drone light shows using simple text prompts. The breakthrough could democratize drone displays by eliminating the need for specialized engineering teams to manually plot each drone's movement frame by frame, potentially expanding applications beyond entertainment into search and rescue, construction, and space exploration. Why this matters: Current drone shows require painstaking manual programming where engineers chart the path of every single drone individually, limiting these displays to large companies with specialized expertise and significant resources. How it works: The AI system...

read
Jul 2, 2025

Study debunks “1,000+” AI bills myth fueling federal preemption push

A new analysis challenges the widely cited claim that U.S. states have proposed over 1,000 AI-related bills this year, finding that the vast majority either don't actually regulate AI or wouldn't meaningfully impact innovation. The findings come as Congress debates whether to impose a 10-year moratorium on state AI regulation, with the inflated bill count serving as a key argument for federal preemption. What the analysis found: Independent researcher Steven Adler's breakdown of the supposed "1,000+" state AI bills reveals significant mischaracterization of the legislative landscape. Roughly 40% of the bills categorized as "AI-related" don't actually focus on artificial intelligence...

read
Jul 1, 2025

Harvard study finds AI out of alignment…with successful executive business forecasting

A new Harvard Business Review study reveals that executives who used generative AI to make business predictions performed significantly worse than those who relied on traditional methods. This finding challenges the widespread assumption that AI tools automatically improve decision-making quality, particularly in high-stakes business scenarios where nuanced judgment is crucial. What you should know: The research specifically examined how generative AI affects executive-level forecasting and strategic decision-making, moving beyond previous studies that focused on routine tasks. While earlier research demonstrated AI's effectiveness for simple or repetitive work, this study tackled more complex cognitive challenges that require strategic thinking and contextual...

read
Jul 1, 2025

AI microscope gets $2.3M to automate livestock parasite testing

Appalachian State University researchers have developed an AI-driven robotic microscope designed to automate fecal egg counting for livestock parasite detection, securing $2.3 million in funding from NCInnovation, a state program that supports commercializing research discoveries. The technology aims to reduce the time and cost of parasite testing while improving accuracy, potentially benefiting North Carolina's billion-dollar agricultural industry by preventing livestock deaths and reducing unnecessary treatments. How it works: The system combines three key technologies to automate a traditionally tedious manual process. A robotic microscope automatically moves around fecal samples and generates multiple images, creating large datasets for analysis. AI algorithms...

read
Jul 1, 2025

Microsoft’s AI diagnostic system outperforms doctors 4x on complex cases

Microsoft's AI Diagnostic Orchestrator (MAI-DxO) achieved an 85% diagnostic accuracy rate on complex medical cases from the New England Journal of Medicine, more than four times higher than the 20% mean accuracy of human physicians tested. The system demonstrates how AI could enhance healthcare by improving diagnostic precision while reducing costs, though Microsoft emphasizes it's designed to assist rather than replace doctors. How it works: MAI-DxO transforms large language models into a collaborative diagnostic system that mimics real clinical reasoning processes. The system works with multiple advanced AI models including GPT, Llama, Claude, Gemini, Grok, and DeepSeek, creating what Microsoft...

read
Jul 1, 2025

AI intimacy fears deflated as just 0.5% of Claude AI conversations involve companionship

A new study by Anthropic analyzing 4.5 million Claude AI conversations reveals that only 2.9% of interactions involve emotional conversations, with companionship and roleplay accounting for just 0.5%. These findings challenge widespread assumptions about AI chatbot usage and suggest that the vast majority of users rely on AI tools primarily for work tasks and content creation rather than emotional support or relationships. What you should know: The comprehensive analysis paints a different picture of AI usage than many expected. Just 1.13% of users engaged Claude for coaching purposes, while only 0.05% used it for romantic conversations. The research employed multiple...

read
Jun 27, 2025

Coatue research reveals AI is creating a “great separation” between winners and losers

Coatue Management, a prominent crossover venture capital firm known for investing in both private and public technology companies, recently released comprehensive research from its East Meets West Conference analyzing artificial intelligence's transformative impact on business growth and market dynamics. The findings reveal a stark reality: companies are experiencing what Coatue calls "the great separation"—a widening gap between AI-powered winners achieving unprecedented growth and traditional businesses struggling to remain relevant. The research presents ten critical insights that illustrate how artificial intelligence is fundamentally reshaping competitive dynamics, capital allocation, and market valuations across the technology sector. These trends extend far beyond Silicon...

read
Jun 26, 2025

Healthcare AI hallucinates medical data up to 75% of the time, low frequency events most affected

Artificial intelligence is rapidly entering clinical healthcare settings, bringing both transformative potential and significant risks that medical professionals must navigate carefully. Two leading physicians examine how AI integration with electronic medical records could revolutionize patient care, while warning of critical challenges including AI "hallucinations" that occur up to 75% of the time. What you should know: AI demonstrates remarkable diagnostic capabilities that can match or exceed experienced specialists in a fraction of the time. A recent study analyzing over 3 million emergency room visits found AI could predict patient agitation and violence, confirming the clinical principle that "past behavior is...

read
Jun 26, 2025

ChatGPT with o3 beats specialized AI research tools

Artificial intelligence has fundamentally transformed how professionals conduct research, with AI-powered search tools now handling everything from competitive intelligence to technical due diligence. But with dozens of options available—from ChatGPT's web search to specialized "deep research" platforms—choosing the right tool for your needs isn't straightforward. A comprehensive evaluation by FutureSearch, an AI research organization, recently tested 12 different AI research tools across challenging real-world tasks, revealing significant performance gaps and unexpected findings that could reshape how businesses approach AI-assisted research. The results challenge conventional wisdom about which tools work best and when to use them. The clear winner: ChatGPT with...

read
Jun 26, 2025

Study finds people are adopting ChatGPT-ese in everyday speech

A new study reveals that people are increasingly adopting ChatGPT's distinctive vocabulary and phrasing patterns in their everyday speech, with certain AI-favored words appearing up to 50% more frequently in academic discourse. This linguistic shift could potentially flatten emotional nuance and reduce the colorful diversity that makes human communication engaging and regionally distinct. What the research found: Academics and educators are unconsciously incorporating AI-generated language patterns into their natural speech, according to researchers at the Max Planck Institute for Human Development, a research organization in Germany. The study analyzed 280,000 academic YouTube videos across more than 20,000 channels to track...

read
Jun 25, 2025

Google DeepMind’s AlphaGenome predicts genetic mutations without lab tests

Google's DeepMind has unveiled AlphaGenome, an AI model that predicts how small DNA changes affect gene activity and molecular processes. The breakthrough technology represents a significant leap beyond the company's Nobel Prize-winning AlphaFold protein-folding system, potentially accelerating genetic research and medical diagnostics by allowing certain lab experiments to be conducted virtually. What you should know: AlphaGenome unifies multiple genomic analysis challenges into a single AI system that can predict genetic variant effects at the molecular level. The model analyzes how changing individual DNA letters affects gene activity, answering questions that typically require time-consuming laboratory experiments. "We have, for the first...

read
Load More