Scientists skeptical of Google’s AI co-scientist tool say it removes the fun part of their work

Google’s “AI co-scientist” tool, built on Gemini 2.0, has received significant skepticism from the scientific community despite ambitious claims about revolutionizing research. The tool, which uses multiple AI agents to generate hypotheses and research plans through simulated debate and refinement, has been dismissed by experts who question both its practical utility and its fundamental premise. The resistance points to a critical misunderstanding of scientific research on the developers' part: hypothesis generation is often the creative, enjoyable aspect of the work, and the one scientists are least interested in outsourcing to AI.
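
For readers curious what such a multi-agent loop might look like in code, here is a minimal, purely illustrative sketch of a generate-debate-rank cycle. It is an assumption-laden toy, not Google's implementation: every function, class, and parameter name (generate, debate, rank, co_scientist, rounds) is hypothetical, and the stubbed agent behavior stands in for the Gemini-backed agents the real tool uses.

# Hypothetical sketch of a "generate, debate, refine" multi-agent loop.
# All names and behaviors are assumptions for illustration only; the real
# co-scientist backs each agent with Gemini 2.0 model calls.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    text: str
    critiques: list = field(default_factory=list)
    score: float = 0.0

def generate(topic: str, n: int = 3) -> list[Hypothesis]:
    # Stub: a generation agent would propose candidate hypotheses here.
    return [Hypothesis(f"Candidate hypothesis {i + 1} about {topic}") for i in range(n)]

def debate(h: Hypothesis) -> Hypothesis:
    # Stub: critic agents would argue for and against the hypothesis.
    h.critiques.append("placeholder critique")
    return h

def rank(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    # Stub: a ranking agent would score candidates, e.g. on novelty and testability.
    for i, h in enumerate(hypotheses):
        h.score = 1.0 / (i + 1)
    return sorted(hypotheses, key=lambda h: h.score, reverse=True)

def co_scientist(topic: str, rounds: int = 2) -> Hypothesis:
    # Repeatedly critique and re-rank the pool, then return the top candidate.
    hypotheses = generate(topic)
    for _ in range(rounds):
        hypotheses = rank([debate(h) for h in hypotheses])
    return hypotheses[0]

if __name__ == "__main__":
    best = co_scientist("drug repurposing")
    print(best.text, best.score)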

Why it matters: The lukewarm reception reveals a disconnect between how AI developers envision scientific workflows and how scientists actually work, suggesting that successful scientific AI tools need to complement rather than replace the creative aspects of research.

What scientists are saying: Multiple experts have expressed doubt about the tool’s practical value in real scientific settings.

  • Sarah Beery, a computer vision researcher at MIT, stated: “This preliminary tool, while interesting, doesn’t seem likely to be seriously used.”
  • Pathologist Favia Dubyk criticized the vagueness of the results, saying “no legitimate scientist” would take them seriously.

The fundamental issue: The tool misunderstands what scientists actually want from AI assistance in their research process.

  • Lana Sinapayen from Sony Computer Science Laboratories highlighted that “generating hypotheses is the most fun part of the job,” questioning why scientists would want to outsource the enjoyable aspects of their work.
  • Steven O’Reilly from Alcyomics noted that the drug discoveries weren’t novel, as “the drugs identified are all well established.”

Potential strengths: Despite criticism, the AI co-scientist may offer some practical benefits in specific research contexts.

  • The tool could excel at quickly parsing scientific literature and creating comprehensive summaries.
  • Its ability to process large volumes of information could support certain aspects of the research process.

Between the lines: Google’s AI co-scientist faces the same fundamental challenge as other large language models—the potential for hallucinations and generating plausible-sounding but factually incorrect information.

Source: Scientists Say Google's "AI Scientist" Is Dead on Arrival
