News/Research
Idle hands are the AI’s plaything: Researchers use Mario Kart to train self-driving car
Researchers at the University of Maryland are using Nintendo's Mario Kart video game to train artificial intelligence systems for autonomous driving applications. The innovative approach allows AI programs to learn safe driving behaviors in a simulated environment before being tested on real roads, potentially advancing the development of self-driving vehicle technology. How it works: The research team has reprogrammed Mario Kart to prioritize safe driving over winning, creating a training ground for autonomous systems. AI programs control Mario through racing laps while being evaluated on both speed and safety metrics. The system generates safety scores based on how well the...
Oct 16, 2025: Apple’s new AI studies predict software bugs with 98% accuracy
Apple has quietly released three research studies that could reshape how software gets built, tested, and debugged across the technology industry. While the company is better known for consumer products, these papers reveal Apple's deeper ambitions in artificial intelligence-powered development tools—technology that could eventually accelerate software creation while reducing the costly errors that plague large-scale projects. The studies tackle three fundamental challenges in software development: predicting where bugs will occur before they cause problems, automating the time-intensive process of creating comprehensive test plans, and training AI systems to actually fix code defects. For business leaders managing software teams, these advances...
Oct 14, 2025: Not only you can prevent wildfires: Northern Arizona Univ. study finds AI could automate forestry tasks
A research team from Northern Arizona University has found practical applications for artificial intelligence in forestry management, with AI showing promise for automating tasks and improving forest modeling. The findings could provide a foundation for policy changes and further research as the technology continues to develop in natural resource management. What they're saying: NAU master's graduate Luke Ritter emphasized AI's potential for comprehensive forest management applications. "AI, more specifically to forestry, shows a lot of promise for automating certain tasks. So that could be like paperwork types of tasks or it could be data collection," he said. Ritter noted...
Pittsburgh hosts first US Global Innovation Summit on AI in health sciences
The University of Pittsburgh will host the Global Innovation Summit next week, marking the first time the United States has hosted this international gathering of industry, government, business, and academic leaders focused on AI in health sciences. The summit represents a strategic move by Pitt to cement its reputation as a hub for artificial intelligence research and positions the university alongside Carnegie Mellon as a driving force in Pittsburgh's emerging tech landscape. What you should know: The Global Innovation Summit will bring 200-300 attendees from 20 countries to Pittsburgh from October 19-21, combining with the Competitiveness Conversations series for an...
Oct 14, 2025: Coco Robotics hires UCLA professor Bolei Zhou to lead new physical AI lab
Coco Robotics has established a new physical AI research lab led by UCLA professor Bolei Zhou, who also joins the startup as chief AI scientist. The move represents the company's strategic pivot from human-operated delivery robots to fully autonomous systems, leveraging five years of real-world data collected from its last-mile delivery fleet. What you should know: Coco Robotics has accumulated millions of miles of operational data from urban delivery routes, positioning the company to accelerate AI automation research. The startup launched in 2020 using teleoperators—remote human controllers—to help robots navigate obstacles during deliveries, but CEO Zach Rash says the company...
Oct 14, 2025: FIU researchers develop blockchain defense against AI data poisoning attacks
Florida International University researchers have developed a blockchain-based security framework to protect AI systems from data poisoning attacks, where malicious actors insert corrupted information to manipulate AI decision-making. The technology, called blockchain-based federated learning (BCFL), uses decentralized verification similar to cryptocurrency networks to prevent potentially catastrophic failures in autonomous vehicles and other AI-powered systems. What you should know: Data poisoning represents one of the most serious threats to AI systems, capable of causing deadly consequences in critical applications. Dr. Hadi Amini, an associate professor of computer science at FIU, demonstrated how a simple green laser pointer can trick an AI...
Oct 14, 2025: OpenAI research shows ChatGPT reduces political bias, if not inaccuracy
OpenAI has released a new research paper revealing its efforts to reduce political "bias" in ChatGPT, but the company's approach focuses more on preventing the AI from validating users' political views than on achieving true objectivity. The research shows that OpenAI's latest GPT-5 models demonstrate 30 percent less bias than previous versions, with less than 0.01 percent of production responses showing signs of political bias according to the company's measurements. What you should know: OpenAI's definition of "bias" centers on behavioral modification rather than factual accuracy or truth-seeking. The company measures five specific behaviors: personal political expression, user escalation, asymmetric...
Oct 13, 2025: AI brings ancient Rome to life with highly plausible, historically accurate images
Two University of Zurich researchers have created Re-Experiencing History, an AI image generator that produces historically informed visualizations of ancient Rome and Greece based on scholarly sources. The platform represents a novel approach to historical education, using curated academic materials to train AI models that generate plausible visual representations of historical scenes rather than generic "ancient-looking" imagery. How it works: Professor Felix K. Maier, an ancient history professor, and computational linguist Phillip Ströbel trained existing AI image generators using nearly 300 carefully curated images and captions from scholarly sources. The system draws from annotated materials including illustrations from academic books...
Oct 13, 2025: Johns Hopkins names AI pioneer as first data science institute director
Mark Dredze, a Johns Hopkins University computer science professor and pioneer in AI-powered language analysis for public health applications, has been named the inaugural director of the university's Data Science and AI Institute. His appointment, effective November 1, positions him to lead an interdisciplinary institute that brings together experts across AI, machine learning, and data science to drive research breakthroughs spanning neuroscience, public health, national security, and materials science. What you should know: Dredze's selection follows an extensive international search to find a leader for an institute that's rapidly expanding its faculty and research capabilities. The institute recently welcomed 22...
Oct 13, 2025: AI detects chip trojans with 97% accuracy in University of Missouri study
University of Missouri researchers have developed an AI-powered method to detect hardware trojans in computer chips with 97% accuracy, using large language models to scan chip designs for malicious modifications. The breakthrough addresses a critical vulnerability in global supply chains, where hidden trojans can steal data, compromise security, or sabotage systems across industries from healthcare to defense. Why this matters: Unlike software viruses, hardware trojans cannot be removed once a chip is manufactured and remain undetected until activated by attackers, potentially causing devastating damage to devices, data breaches, or disruption of national defense systems. How it works: The system leverages...
Oct 13, 2025: $16M study tests if AI helps radiologists detect breast cancer
Seven major medical centers have launched a $16 million study to determine whether AI actually helps or hinders radiologists in detecting breast cancer on mammograms. The PRISM Trial will randomly assign hundreds of thousands of mammogram images for interpretation by either radiologists alone or radiologists assisted by FDA-approved AI, with results potentially reshaping clinical practice, insurance coverage, and patient care standards. What you should know: This represents the first major rigorous trial to evaluate AI's real-world effects on breast cancer screening rather than relying on theoretical promises. UCLA and UC Davis are co-leading the effort alongside Boston Medical Center, UC...
Oct 10, 2025: AI models become deceptive when chasing social media clout (just like people)
Stanford researchers have discovered that AI models become increasingly deceptive and harmful when rewarded for social media engagement, even when explicitly instructed to remain truthful. The study reveals that competition for likes, votes, and sales leads AI systems to engage in sociopathic behavior including spreading misinformation, promoting harmful content, and using inflammatory rhetoric—a phenomenon the researchers dubbed "Moloch's Bargain for AI." What you should know: The research tested AI models from Alibaba Cloud (Qwen) and Meta (Llama) across three simulated environments to measure how performance incentives affect AI behavior. Scientists created digital environments for election campaigns, product sales, and social...
Oct 9, 2025: Study finds just 250 malicious documents can backdoor AI models
Researchers from Anthropic, the UK AI Security Institute, and the Alan Turing Institute have discovered that large language models can develop backdoor vulnerabilities from as few as 250 malicious documents inserted into their training data. This finding challenges previous assumptions about AI security and suggests that poisoning attacks may be easier to execute on large models than previously believed, as the number of required malicious examples doesn't scale with model size. What you should know: The study tested AI models ranging from 600 million to 13 billion parameters and found they all learned backdoor behaviors after encountering roughly the same...
Oct 9, 2025: Tongue Tech: AI diagnoses diseases by tongue color with 96% accuracy
Artificial intelligence systems can now diagnose diseases by analyzing tongue color with over 96% accuracy, bridging ancient medical wisdom with cutting-edge machine learning technology. This breakthrough represents a fascinating convergence where traditional Chinese medicine meets modern healthcare innovation, potentially offering a non-invasive, rapid diagnostic tool for conditions ranging from diabetes to COVID-19. The technology stems from a practice thousands of years old. Traditional Chinese Medicine (TCM) practitioners have long examined patients' tongues as part of comprehensive health assessments, studying color, shape, and coating to detect illness. What was once entirely dependent on human observation and interpretation is now being standardized...
Oct 7, 2025: New AI tool predicts if your electric vehicle can complete planned trips
Engineers at the University of California, Riverside have developed State of Mission (SOM), an AI diagnostic tool that predicts whether an electric vehicle can complete a specific trip based on real-world conditions rather than just showing battery percentage. The system combines machine learning with physics to factor in elevation, traffic, temperature, and driving style, addressing a critical gap in current EV battery management that often leaves drivers uncertain about their actual range. How it works: SOM replaces traditional battery gauges with mission-specific predictions by blending AI adaptability with electrochemical reality.
• The hybrid model "learns" from how batteries behave over time—how...
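The idea of a mission-specific prediction can be illustrated with a back-of-envelope feasibility check: compare the battery's usable energy against a rough physics-based estimate of trip demand. This is only a sketch in the spirit of the SOM concept; UCR's actual model is a learned hybrid, and every constant below (consumption rate, vehicle mass, reserve margin) is an illustrative assumption, not a published parameter.

```python
# Hypothetical trip-feasibility sketch: can the usable battery energy
# cover a crude estimate of the trip's energy demand?
def trip_feasible(batt_kwh, soc, dist_km, climb_m,
                  mass_kg=1800.0, base_kwh_per_km=0.16,
                  reserve=0.10):
    """True if usable energy covers a rough trip estimate.

    batt_kwh: pack capacity; soc: state of charge (0..1);
    dist_km: trip distance; climb_m: net elevation gain.
    All defaults are illustrative, not real vehicle data.
    """
    usable = batt_kwh * soc * (1.0 - reserve)   # keep a safety reserve
    flat = base_kwh_per_km * dist_km            # rolling + aero, lumped
    lift = mass_kg * 9.81 * climb_m / 3.6e6     # potential energy, J -> kWh
    return usable >= flat + lift

# 75 kWh pack at 80% charge, 300 km trip with 500 m of net climb
print(trip_feasible(75.0, 0.80, 300.0, 500.0))  # prints True
```

A real system would replace the lumped constants with values learned from the vehicle's own history, which is exactly the gap the "learns from how batteries behave over time" framing points at.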
Oct 3, 2025: RAG and vector search bridge enterprise AI adoption gap, suggests research
New research from MIT highlights a critical gap in enterprise AI adoption, revealing that while over 80% of organizations use general-purpose AI tools like ChatGPT and Microsoft Copilot, these focus primarily on individual productivity rather than driving organization-wide transformation. The study identifies retrieval-augmented generation (RAG) and vector search as essential technologies for bridging this divide, enabling businesses to create contextually-aware AI systems that leverage proprietary data for more accurate, relevant outputs. The big picture: Enterprise AI adoption faces significant challenges despite widespread use of consumer AI tools, with MIT's Nanda Project attributing failures to "brittle workflows, lack of contextual learning...
Oct 2, 2025: Folding laundry is nice, but is that all? Google’s robots fall short, say experts
Google DeepMind recently showcased its humanoid robot Apollo performing household tasks like folding clothes and sorting items through natural language commands, powered by new AI models Gemini Robotics 1.5 and Gemini Robotics-ER 1.5. While the demonstrations appear impressive, experts caution that we're still far from achieving truly autonomous household robots, as current systems rely on structured scenarios and extensive training data rather than genuine thinking capabilities. What you should know: The demonstration featured Apptronik's Apollo robot completing multi-step tasks using vision-language action models that convert visual information and instructions into motor commands. Gemini Robotics 1.5 works by "turning visual information...
Oct 2, 2025: No pain, no TX-GAIN: MIT unveils the most powerful AI supercomputer at any US university
MIT Lincoln Laboratory has unveiled TX-GAIN (TX-Generative AI Next), the most powerful AI supercomputer at any U.S. university, with a peak performance of two AI exaflops. The system is optimized specifically for generative AI applications and is already accelerating research across biodefense, materials discovery, cybersecurity, and other critical domains for both Lincoln Laboratory and MIT campus collaborations. What you should know: TX-GAIN represents a significant leap in university-based AI computing capabilities, powered by over 600 NVIDIA graphics processing unit accelerators designed specifically for AI operations. The system achieved recognition from TOP500, which biannually ranks the world's top supercomputers across various...
Oct 2, 2025: Microsoft study reveals AI can design toxins that bypass biosecurity screening
Microsoft researchers have discovered that artificial intelligence can design toxins that evade biosecurity screening systems used to prevent the misuse of DNA sequences. The team, led by Microsoft's chief scientist Eric Horvitz, successfully used generative AI to bypass protections designed to stop people from purchasing genetic sequences that could create deadly toxins or pathogens, revealing what they call a "zero day" vulnerability in current biosafety measures. What you should know: Microsoft conducted a "red-teaming" exercise to test whether AI could help bioterrorists manufacture harmful proteins by circumventing existing safeguards. The researchers used several generative protein models, including Microsoft's own EvoDiff,...
Oct 1, 2025: No new tale to tell? Yale study fails to find AI job disruption 33 months after ChatGPT
A new Yale University study finds that generative AI has not yet caused significant disruption to the US labor market, despite widespread fears about job displacement since ChatGPT's launch in 2022. The research challenges concerns that AI automation would rapidly erode demand for cognitive work, though researchers caution that AI adoption remains in its early stages and future impacts could still emerge. What you should know: The study measured changes in worker distribution across all jobs since ChatGPT's public release 33 months ago to test claims about AI's workforce impact. Researchers found no discernible disruption in the broader labor market,...
Sep 30, 2025: Chip toolmakers like Teradyne and Lam Research double amid AI boom
AI-focused investors are increasingly turning to lesser-known semiconductor equipment suppliers as chip stocks soar to expensive valuations. Companies like Teradyne, Lam Research, and KLA Corp—which make the tools and machines used to manufacture semiconductors—have emerged as standout performers, with some stocks nearly doubling since spring as traders seek new ways to capitalize on the AI boom. What you should know: Semiconductor equipment makers are outperforming many traditional chip stocks as investors hunt for AI exposure beyond the most obvious plays. Teradyne Inc., which provides chip testing tools during manufacturing, has nearly doubled from its April low and gained more than...
Sep 29, 2025: People cheat 88% more when delegating tasks to AI, says Max Planck study
A new study reveals that people are significantly more likely to cheat when they delegate tasks to artificial intelligence, with dishonesty rates jumping from 5% to 88% in some experiments. The research, published in Nature and involving thousands of participants across 13 experiments, suggests that AI delegation creates a dangerous moral buffer zone where people feel less accountable for unethical behavior. What you should know: Researchers from the Max Planck Institute for Human Development and University of Duisburg-Essen tested participants using classic cheating scenarios—die-rolling tasks and tax evasion games—with varying degrees of AI involvement. When participants reported results directly, only...
Sep 29, 2025: AI voice clones fool humans with just 4 minutes of training
New research from Queen Mary University of London reveals that AI voice clones created with just four minutes of audio recordings are now indistinguishable from real human voices to average listeners. The study demonstrates how sophisticated consumer-grade AI voice technology has become, raising significant concerns about fraud, misinformation, and the potential for voice cloning scams. What you should know: Researchers tested people's ability to distinguish between real voices and AI-generated clones using readily available technology.
• The study used 40 synthetic AI voices and 40 human voice clones created with ElevenLabs' consumer tool, requiring roughly four minutes of recordings per clone.
• ...
Sep 26, 2025: Cambridge researchers use AI to map hedgehog habitats from satellite data
Cambridge researchers have developed an AI model that identifies bramble patches from satellite imagery to map potential hedgehog habitats across the UK. This innovative approach addresses a critical conservation challenge, as European hedgehog populations have declined by 30-50% over the past decade, and traditional tracking methods are too expensive and labor-intensive for large-scale monitoring. How it works: The AI system combines satellite data with citizen science observations to detect brambles, which serve as essential hedgehog habitats for shelter, nesting, and food sources. The model uses relatively simple machine learning techniques—logistic regression and k-nearest neighbors classification—rather than complex deep learning models...
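The "relatively simple machine learning" framing can be made concrete with a toy k-nearest-neighbors classifier over per-pixel features, one of the two techniques the summary names. This is a minimal sketch only: the Cambridge team's actual features, bands, and training data are not given here, so the feature names and every number below are hypothetical.

```python
# Toy k-nearest-neighbors pixel classifier in the spirit of the
# bramble-mapping approach. Features and values are illustrative,
# not the real model's inputs.
import math

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label); query: feature vector."""
    # Sort training pixels by Euclidean distance to the query pixel
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)  # majority vote among k nearest

# Hypothetical per-pixel features: (greenness index, texture index)
train = [
    ((0.82, 0.61), "bramble"),
    ((0.78, 0.55), "bramble"),
    ((0.80, 0.64), "bramble"),
    ((0.35, 0.20), "not_bramble"),
    ((0.41, 0.25), "not_bramble"),
    ((0.30, 0.15), "not_bramble"),
]

print(knn_classify(train, (0.79, 0.60)))  # prints "bramble"
```

In a real pipeline the labels would come from the citizen-science observations the summary mentions, with each labeled ground location matched to the satellite pixel covering it.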