News/Research

Feb 22, 2025

Scientists achieve AI breakthrough allowing retrieval of digital information from DNA

DNA storage has emerged as a promising solution for long-term data preservation, offering both incredible storage density and durability measured in thousands of years. A breakthrough by researchers at the University of California, San Diego, has dramatically accelerated the process of retrieving digital information stored in DNA sequences. Key Innovation: A new AI-powered system called DNAformer can decode DNA-stored data in just 10 minutes, compared to the days required by traditional methods, while maintaining high accuracy. The system combines three key components: a deep learning AI model for sequence reconstruction, an error-correction algorithm, and a decoding algorithm that converts DNA...
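The three-stage pipeline described above can be illustrated with a toy decoder. This is a simplified sketch, not DNAformer's actual architecture: the deep-learning reconstruction step is approximated here by a per-position majority vote over noisy copies, the error-correction stage is omitted, and the 2-bit base encoding is an assumed convention.

```python
from collections import Counter

BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def reconstruct(noisy_reads):
    """Stand-in for the deep-learning reconstruction step:
    per-position majority vote over aligned noisy copies."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*noisy_reads))

def decode(sequence):
    """Map the reconstructed DNA back to bytes, 2 bits per base.
    (A real pipeline would run error correction before this step.)"""
    bits = "".join(BASE_TO_BITS[b] for b in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

reads = ["ACGTACGT", "ACGTACGA", "ACGTACGT"]  # noisy copies of one strand
consensus = reconstruct(reads)
payload = decode(consensus)
print(consensus, payload)
```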

Feb 22, 2025

AI created a mysterious alien microchip, and experts can’t explain why it works

The global wireless chip industry, valued at $4.5 billion, relies heavily on human expertise for designing microchips used in everything from smartphones to air traffic radar systems. A groundbreaking study published in Nature demonstrates how artificial intelligence can not only design these chips but potentially outperform human engineers, though the resulting designs defy conventional understanding. The breakthrough discovery: Princeton researchers have successfully used deep learning to create functional wireless microchip designs that exhibit superior performance compared to traditional human-designed counterparts. The AI-generated chips feature seemingly random, alien-like shapes that challenge human comprehension. Lead researcher Kaushik Sengupta emphasizes that these unconventional...

Feb 21, 2025

Go small or go home: SLMs outperform LLMs with test-time scaling

The rapid advancement in language model technology has led to surprising discoveries about the capabilities of smaller models. A recent study by Shanghai AI Laboratory demonstrates that Small Language Models (SLMs) can surpass the performance of much larger models in specific reasoning tasks when equipped with appropriate test-time scaling techniques. Core findings: Test-time scaling (TTS) techniques enable a 1 billion parameter language model to outperform a 405 billion parameter model on complex mathematical benchmarks, challenging conventional assumptions about model size and performance. The study demonstrates that strategic application of compute resources during inference can dramatically enhance small model performance. Researchers...
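One common test-time scaling strategy is majority voting over many sampled answers: spend more inference compute per question and keep the modal response. The sketch below is illustrative, not the study's specific method; `sample_answer` is a hypothetical stand-in for sampling a reasoning chain from a small model.

```python
import random
from collections import Counter

def sample_answer(question, rng):
    """Toy model: correct 60% of the time, else a random wrong answer."""
    return 42 if rng.random() < 0.6 else rng.choice([41, 43, 44])

def majority_vote(question, n_samples=64, seed=0):
    """Test-time scaling: sample many answers and return the mode,
    which is far more reliable than any single sample."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(majority_vote("What is 6 * 7?"))
```

Because the wrong answers split their votes while the correct one concentrates, even a weak sampler becomes reliable at 64 samples.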

Feb 20, 2025

Stanford AI Lab merges with HAI under Carlos Guestrin’s leadership

The Stanford Artificial Intelligence Lab (SAIL), founded in 1963 by Professor John McCarthy, has been a pioneering force in AI development for six decades. Stanford University has now appointed Carlos Guestrin, the Fortinet Founders Professor of Computer Science, as SAIL's new director as the lab joins forces with the Stanford Institute for Human-Centered AI (HAI). Leadership transition and strategic vision: Carlos Guestrin, a distinguished leader in machine learning with extensive experience in both academia and industry, takes over from Christopher Manning to lead SAIL into its next phase of innovation. Guestrin brings valuable experience from leadership roles at Apple's machine...

Feb 20, 2025

What makes Google’s new Co-Scientist AI tool so powerful

Google's AI Co-scientist represents a significant advancement in using AI to generate scientific hypotheses, demonstrating the ability to produce research proposals in days rather than years. Core innovation: Google has enhanced Gemini 2.0 with a sophisticated multi-agent system that generates and evaluates scientific hypotheses through an intensive computational process known as test-time scaling. The system uses specialized agents for generation, reflection, ranking, evolution, proximity, and meta-review to formulate research hypotheses. In a notable demonstration, the AI Co-scientist generated a bacterial evolution hypothesis in two days that matched conclusions from a decade-long human study at Imperial College London. Technical framework: Test-time...
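The ranking role among those agents can be sketched as a pairwise tournament over hypotheses. Using an Elo update here is an illustrative assumption about the mechanism, not a detail confirmed by the excerpt.

```python
def elo_update(r_winner, r_loser, k=32):
    """Standard Elo update after one pairwise hypothesis comparison:
    the judged winner gains rating proportional to how unexpected the win was."""
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    gain = k * (1 - expected_win)
    return r_winner + gain, r_loser - gain

# Two hypotheses start with equal ratings; the judged winner gains 16 points.
h_winner, h_loser = elo_update(1000.0, 1000.0)
print(h_winner, h_loser)
```

Repeating such comparisons across many hypothesis pairs yields a stable ranking that the evolution and meta-review stages can build on.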

Feb 20, 2025

How OpenAI’s new Deep Research AI system is outperforming the most brilliant humans

OpenAI's new Deep Research AI tool represents a significant development in the field of automated analysis and research. Released in February 2025, this tool combines advanced language models with autonomous research capabilities to produce comprehensive analytical reports that rival human-generated content. Core technology breakdown: OpenAI's Deep Research integrates two key technological components to deliver its capabilities. The system is powered by OpenAI's o3 model, which achieved an 87.5% score on the ARC-AGI benchmark for problem-solving abilities. It utilizes agentic RAG (Retrieval Augmented Generation) technology to autonomously search the internet and other sources for information. The combination allows Deep Research to...
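The agentic RAG idea can be sketched as an iterative search-and-synthesize loop. All function names below are hypothetical stand-ins to show the control flow; the excerpt does not describe OpenAI's internal implementation.

```python
def agentic_research(question, search, summarize, max_rounds=3):
    """Iteratively search, collect evidence, and refine the query,
    then synthesize everything into a single report."""
    notes = []
    query = question
    for _ in range(max_rounds):
        notes.extend(search(query))                        # retrieve documents
        query = f"{question} given {len(notes)} findings"  # refine the query
    return summarize(question, notes)

# Toy stand-ins to exercise the loop:
fake_search = lambda q: [f"doc about: {q}"]
fake_summarize = lambda q, notes: f"Report on '{q}' citing {len(notes)} sources"

report = agentic_research("agentic RAG", fake_search, fake_summarize)
print(report)
```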

Feb 20, 2025

Google’s AI scientist solves decades-long antibiotic-resistant bacteria puzzle in 2 days

The development of antibiotic-resistant bacteria, known as superbugs, represents one of the most pressing challenges in modern medicine. A breakthrough in understanding how these dangerous pathogens spread between species has emerged through an unexpected collaboration between traditional scientific research and artificial intelligence. The breakthrough discovery: Google's AI tool "co-scientist" independently reached the same conclusions about superbug transmission mechanisms that took a research team at Imperial College London a decade to uncover and prove. Professor José R Penadés and his team discovered that superbugs can form virus-like tails enabling them to spread between different host species. When presented with a simple...

Feb 20, 2025

Open-source AI speeds up patient-matching for clinical trials

The doctor will see you right now... The healthcare industry has long struggled with efficiently matching patients to clinical trials, with traditional methods taking hundreds of days and resulting in 80% of trials missing enrollment targets. Mendel AI, a clinical AI platform, is addressing this challenge by combining open source AI models with specialized healthcare technology to reduce matching time to just one day. The innovation breakthrough: Mendel AI's Hypercube platform integrates Meta's open source Llama model with a clinical hypergraph to revolutionize patient matching and clinical trial management. The platform enables healthcare companies to organize data on their own...

Feb 19, 2025

White Collar Woes: How AI workforce impacts vary geographically from past tech shifts

The adoption of generative AI in workplaces is showing markedly different geographical patterns compared to previous waves of automation technology. While past technological disruptions primarily affected manufacturing and manual labor jobs in rural areas, generative AI is poised to have its most significant impact in urban centers with high concentrations of knowledge workers. Key workforce impacts: Generative AI's influence on labor markets represents a significant shift from historical patterns of technological displacement. A substantial 30% of workers could see half or more of their tasks affected by generative AI, while 85% of workers may experience at least 10% task modification...

Feb 19, 2025

Georgia Tech PhD student trains humanoid robots with AR glasses

Call it magnificent mimicry. The rapid advancement in humanoid robotics has been limited by slow, manual data collection methods requiring direct robot operation. Georgia Tech researchers have developed a breakthrough approach using Meta's Project Aria glasses to capture human behaviors that can train robots more efficiently. Key innovation: EgoMimic, developed by PhD student Simar Kareer at Georgia Tech's Robotic Learning and Reasoning Lab, uses egocentric recordings from Aria glasses to create training data for humanoid robots. The framework combines human-recorded data with robot data to teach robots everyday tasks. Traditional robot training requires hundreds of manual demonstrations through direct robot...

Feb 19, 2025

Google’s AI research assistant aims to empower scientists, but novel discoveries remain to be seen

The development of AI tools to assist scientific research has been accelerating, with tech giants investing heavily in specialized systems. Google's latest experimental AI system aims to help scientists analyze literature, generate hypotheses, and plan research by leveraging multiple AI agents working in concert. System capabilities and functionality: Google's unnamed AI "co-scientist" tool builds on the company's Gemini large language models to provide rapid scientific analysis and hypothesis generation. The system generates initial ideas within 15 minutes of receiving a research question or goal. Multiple Gemini AI agents debate and refine hypotheses over hours or days. The tool can access...

Feb 18, 2025

Perplexity unveils free AI tool for in-depth research

Information wants to be free, as was once said. In that spirit, Perplexity has an AI offer too good to refuse. As AI companies race to develop more sophisticated research tools, Perplexity has introduced "Deep Research," a new AI-powered research assistant that synthesizes information from hundreds of sources. The tool's launch comes amid similar offerings from industry giants like OpenAI's ChatGPT and Google's Gemini, but with a distinctive approach to accessibility. Key Features and Capabilities: Perplexity's Deep Research tool delivers comprehensive reports by analyzing multiple sources, with particular strength in finance, marketing, and technology domains. The system takes 2-4 minutes...

Feb 14, 2025

Perplexity AI is the research and answer engine Gemini wants to be

Strong competition in the AI assistant space has led to the emergence of Perplexity AI as a compelling alternative to Google's Gemini. After extensive testing across Android, Linux, and MacOS platforms, Perplexity has demonstrated superior capabilities in several key areas, particularly in search and information retrieval. Core advantages: Perplexity distinguishes itself from Gemini through its versatility as a default search engine option and more comprehensive response system. Users can set Perplexity as their default search engine in web browsers, a functionality not available with Gemini. The platform delivers more detailed and contextual responses compared to Gemini's simpler bullet-point format. Real-time...

Feb 14, 2025

AI models improve with less human oversight, new study finds

Artificial intelligence researchers at Hong Kong University and UC Berkeley have discovered that language models perform better when allowed to develop their own solutions through reinforcement learning rather than being trained on human-labeled examples. This finding challenges conventional wisdom about how to best train large language models (LLMs) and vision language models (VLMs). Key research findings: The study compared supervised fine-tuning (SFT) with reinforcement learning (RL) approaches across both textual and visual reasoning tasks. Models trained primarily through reinforcement learning showed superior ability to generalize to new, unseen scenarios. Excessive use of hand-crafted training examples can actually impair a model's...
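The contrast between imitating human labels and learning from outcome rewards can be illustrated with a minimal policy-gradient (REINFORCE-style) toy: the policy is reinforced only by a reward signal for answers it found itself, rather than being fit to demonstrations. This is purely illustrative and not the paper's training setup.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce(reward_fn, n_actions=3, steps=500, lr=0.5, seed=0):
    """Learn a policy purely from a reward signal (no labeled answers)."""
    rng = random.Random(seed)
    logits = [0.0] * n_actions
    for _ in range(steps):
        probs = softmax(logits)
        action = rng.choices(range(n_actions), weights=probs)[0]
        reward = reward_fn(action)
        # Policy gradient: raise the log-probability of rewarded actions.
        for i in range(n_actions):
            grad = (1.0 if i == action else 0.0) - probs[i]
            logits[i] += lr * reward * grad
    return softmax(logits)

# Only action 2 is "correct"; the policy discovers it without labels.
probs = reinforce(lambda a: 1.0 if a == 2 else 0.0)
print(max(range(3), key=lambda i: probs[i]))
```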

Feb 13, 2025

AI consciousness debate sparks new scientific inquiry, ponders AI-animal hybrids

The nature of consciousness and its detection in non-human entities has been a longstanding philosophical and scientific challenge. Recent developments in artificial intelligence and animal cognition studies have brought new urgency to understanding how we determine if other beings experience consciousness. Fundamental premise: Consciousness remains inherently private and impossible to directly observe in others, leading humans to rely on specific indicators to infer its presence. Humans primarily attribute consciousness based on three key factors: behavioral similarity, physical resemblance, and verbal communication. These attribution mechanisms work reliably for other humans but become more complex when applied to animals or AI. Animal...

Feb 13, 2025

Google’s powerful new AI research agent now available on iPhone

Google's Gemini, an advanced AI assistant, has expanded its Deep Research feature to iOS devices, offering Gemini Advanced subscribers a sophisticated research tool. The feature, which first appeared on web browsers in December before rolling out to Android devices, represents Google's latest effort to transform how users interact with online information. Core functionality: Deep Research leverages AI to autonomously conduct comprehensive web research, moving beyond simple link aggregation to deliver organized, detailed reports. The tool processes search results by reading through linked content and synthesizing information into coherent research documents. Users can customize research parameters and modify the AI's approach...

Feb 12, 2025

Uncertainty Training: How AI experts are fighting back against the AI hallucination problem

Virtual assistants and AI language models have a significant challenge with acknowledging uncertainty and admitting when they don't have accurate information. This problem of AI "hallucination" - where models generate false information rather than admitting ignorance - has become a critical focus for researchers working to improve AI reliability. The core challenge: AI models demonstrate a concerning tendency to fabricate answers when faced with questions outside their training data, rather than acknowledging their limitations. When asked about personal details that aren't readily available online, AI models consistently generate false but confident responses. In a test by WSJ writer Ben Fritz,...
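One simple way to make a model admit ignorance is confidence-thresholded abstention: answer only when the top candidate clears a confidence bar. The article does not specify the researchers' exact method; the threshold and confidence scores below are illustrative.

```python
def answer_or_abstain(candidates, threshold=0.75):
    """candidates: (answer, model_confidence) pairs. Return the top answer
    only when the model is confident enough; otherwise abstain."""
    answer, confidence = max(candidates, key=lambda ac: ac[1])
    return answer if confidence >= threshold else "I don't know."

confident = answer_or_abstain([("Paris", 0.95), ("Lyon", 0.03)])
unsure = answer_or_abstain([("Smith", 0.40), ("Jones", 0.35)])
print(confident, "|", unsure)
```

The hard part in practice is calibration: the model's confidence scores must actually track correctness for the threshold to be meaningful.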

Feb 12, 2025

AI usage makes us feel less intelligent, Microsoft study finds

More than a feeling? Let's hope not. The relationship between artificial intelligence and human cognitive abilities has become a significant focus of research as AI tools become more prevalent in the workplace. A new study from Microsoft Research and Carnegie Mellon University examines how regular AI usage might be affecting workers' critical thinking capabilities. Key findings: A survey of 319 weekly AI tool users in professional settings reveals growing concerns about cognitive deterioration and overreliance on artificial intelligence. Participants reported feeling less confident in their critical thinking abilities after incorporating AI tools into their work routines. The study found that...

Feb 12, 2025

OpenAI plans to make its o3 Deep Research agent available to free and ChatGPT Plus users

The development of autonomous AI research assistants has entered a new phase with OpenAI's introduction of the o3 Deep Research agent, which can independently gather and synthesize information from various online sources. This tool, similar to but potentially more powerful than Google's Gemini-powered Deep Research, represents a significant advancement in AI-assisted research capabilities. Key Features and Functionality: OpenAI's o3 Deep Research agent operates autonomously to compile comprehensive research reports while users focus on other tasks. The system can analyze multiple digital scholarly sources and web content to generate detailed reports. Users receive notifications when their requested research is complete, which...

Feb 12, 2025

AI coding benchmarks: Key findings from the HackerRank ASTRA report

The HackerRank ASTRA benchmark represents a significant advancement in evaluating AI coding abilities by simulating real-world software development scenarios. This comprehensive evaluation framework focuses on multi-file, project-based problems across various programming frameworks and emphasizes both code correctness and consistency. Core Framework Overview: The ASTRA benchmark consists of 65 project-based coding questions designed to assess AI models' capabilities in real-world software development scenarios. Each problem contains an average of 12 source code and configuration files, reflecting the complexity of actual development projects. The benchmark spans 10 primary coding domains and 34 subcategories, with emphasis on frontend development and popular frameworks. Problems...
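Correctness and consistency over repeated model runs can each be scored with a simple metric. The exact ASTRA formulas are not given in the excerpt, so the sketch below is only an assumption about their general shape.

```python
from collections import Counter

def pass_rate(outcomes):
    """Correctness: fraction of independent runs whose code passed the tests."""
    return sum(outcomes) / len(outcomes)

def consistency(outputs):
    """Consistency: share of repeated runs agreeing with the modal output."""
    return Counter(outputs).most_common(1)[0][1] / len(outputs)

runs_passed = [True, True, False, True]        # pass/fail over 4 runs
print(pass_rate(runs_passed))                  # correctness score
print(consistency(["A", "A", "B", "A"]))       # agreement across runs
```

Measuring both matters: a model can have a decent pass rate yet be unreliable run-to-run, which is exactly what a consistency score exposes.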

Feb 12, 2025

LangChain research: More tools, more steps, more problems for AI agents

The rapid development of AI agents has led organizations to question whether single agents can effectively handle multiple tasks or if multi-agent networks are necessary. LangChain, an orchestration framework company, conducted experiments to determine the limitations of single AI agents when handling multiple tools and contexts. Study methodology: LangChain tested a single ReAct agent's performance on email assistance tasks, focusing on customer support and calendar scheduling capabilities. The experiment utilized various large language models including Claude 3.5 Sonnet, Llama-3.3-70B, and OpenAI's GPT-4o, o1, and o3-mini. Researchers created separate agents for calendar scheduling and customer support tasks. Each agent underwent 90...
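A ReAct agent alternates reasoning steps with tool calls until it emits a final answer. The sketch below is a minimal version of that control loop; the tool names and the `llm_step` stub are hypothetical stand-ins, not LangChain's API.

```python
def react_agent(task, llm_step, tools, max_steps=5):
    """ReAct loop: the model decides a thought and an action each turn;
    tool observations are fed back into the running history."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        thought, action, arg = llm_step(history)   # model picks the next move
        history.append(f"Thought: {thought}")
        if action == "finish":
            return arg
        observation = tools[action](arg)           # execute the chosen tool
        history.append(f"Observation: {observation}")
    return None  # gave up within the step budget

# Scripted stand-in for the model: check the calendar, then finish.
script = iter([
    ("check the calendar", "calendar", "Tuesday"),
    ("slot is free, confirm", "finish", "Booked Tuesday 10:00"),
])
tools = {"calendar": lambda day: f"{day} 10:00 is free"}
result = react_agent("schedule a meeting", lambda history: next(script), tools)
print(result)
```

As the `tools` dict and `history` grow, the model must pick correctly among more options with more context per turn, which is precisely where the study found single agents degrade.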

Feb 11, 2025

Political bias and xAI’s mission to develop a chatbot more like Donald Trump

The rapid advancement of artificial intelligence has raised questions about the inherent biases and values expressed by AI language models. Dan Hendrycks, director of the Center for AI Safety and advisor to Elon Musk's xAI, has developed a groundbreaking approach to measure and potentially modify the political and ethical preferences embedded in AI systems. Key innovation: Hendrycks' team has created a methodology to quantify and adjust the value systems expressed by AI models using economic principles to calculate their underlying "utility functions." The technique allows researchers to assess and potentially modify how AI systems respond to various scenarios, including political...
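Recovering a utility function from a model's pairwise preferences can be sketched with a simple win-rate score. Scoring by win rate is a simplifying assumption for illustration; the methodology described above presumably fits a probabilistic utility model rather than counting wins.

```python
from collections import defaultdict

def utilities_from_preferences(preferences):
    """preferences: (winner, loser) pairs elicited from a model.
    Score each outcome by the share of its comparisons it won."""
    wins, total = defaultdict(int), defaultdict(int)
    for winner, loser in preferences:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    return {outcome: wins[outcome] / total[outcome] for outcome in total}

# Hypothetical elicited preferences over outcomes A, B, C:
prefs = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
u = utilities_from_preferences(prefs)
print(max(u, key=u.get))  # the outcome the model consistently prefers
```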

Feb 11, 2025

Is AI helping or hurting workers? New research from Anthropic has some answers

The continued development of artificial intelligence tools has sparked intense debate about their impact on employment and productivity. Anthropic, an AI company competing with OpenAI and others, has released a study examining how millions of users interact with its Claude chatbot for work-related tasks. Key findings: Anthropic's research suggests AI is more likely to augment rather than replace most workers, with only about 4% of jobs facing significant disruption from AI capabilities. The study found 57% of users employed Claude to enhance or improve existing tasks, while 43% used it for full task automation. Tasks ranged from software coding assistance...

Feb 11, 2025

Quantum computing leverages AI to discover new cancer drug candidates

The discovery of new cancer-fighting drugs has long been hindered by certain proteins considered "undruggable," but researchers have now developed an innovative approach combining quantum computing with artificial intelligence. Scientists at the University of Toronto and Insilico Medicine have demonstrated a new method for creating anti-cancer molecules that target previously unreachable proteins, marking a significant advancement in drug discovery. The big picture: A groundbreaking study published in Nature Biotechnology showcases how a hybrid quantum-classical AI system can generate potential drug candidates targeting the KRAS gene, a major driver of multiple cancer types. The research team developed a hybrid quantum-classical generative...
