Microsoft report: Russia, China ramp up AI-powered cyberattacks on US
Russia, China, Iran, and North Korea have dramatically escalated their use of artificial intelligence to conduct cyberattacks and spread disinformation targeting the United States, according to Microsoft's latest digital threats report. The tech giant identified more than 200 instances in July 2025 of foreign adversaries using AI to create or amplify fake content online—a figure that has doubled since 2024 and increased tenfold since 2023, marking a pivotal shift in how America's rivals are weaponizing cutting-edge technology against U.S. interests. What you should know: AI has fundamentally transformed how state-backed hackers and influence operations target the United States, enabling unprecedented...
Oct 16, 2025
PEARL AI detects chip trojans with 97% accuracy as security gap concerns remain
Researchers at the University of Missouri have developed PEARL, an AI system that uses large language models to detect hardware trojans in computer chips with up to 97% accuracy. While this represents a significant advancement in securing the global chip supply chain, experts warn that the remaining 3% margin for error could still allow catastrophic vulnerabilities to slip through in critical systems like defense networks and medical equipment. What you should know: Hardware trojans are malicious alterations secretly embedded during chip manufacturing that can remain dormant until activated to steal data or cause device failures. These threats can be inserted...
Oct 14, 2025
FIU researchers develop blockchain defense against AI data poisoning attacks
Florida International University researchers have developed a blockchain-based security framework to protect AI systems from data poisoning attacks, where malicious actors insert corrupted information to manipulate AI decision-making. The technology, called blockchain-based federated learning (BCFL), uses decentralized verification similar to cryptocurrency networks to prevent potentially catastrophic failures in autonomous vehicles and other AI-powered systems. What you should know: Data poisoning represents one of the most serious threats to AI systems, capable of causing deadly consequences in critical applications. Dr. Hadi Amini, an associate professor of computer science at FIU, demonstrated how a simple green laser pointer can trick an AI...
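The verification idea behind BCFL can be sketched in a few lines. This is an illustrative sketch, not FIU's implementation: clients record a hash of each model update on an append-only ledger (standing in for the blockchain), and the aggregator averages only updates whose hashes still match, discarding anything altered in transit.

```python
import hashlib
import json

ledger = []  # stand-in for a blockchain: append-only list of (client, hash)

def digest(update):
    """Deterministic fingerprint of a model update (a list of weights)."""
    return hashlib.sha256(json.dumps(update).encode()).hexdigest()

def submit(client_id, update):
    """Client records its update's hash on the ledger before sending it."""
    ledger.append((client_id, digest(update)))

def aggregate(received):
    """Average only the updates whose hashes match the ledger record."""
    recorded = dict(ledger)
    good = [u for cid, u in received.items() if digest(u) == recorded.get(cid)]
    return [sum(ws) / len(good) for ws in zip(*good)]

submit("a", [1.0, 2.0])
submit("b", [3.0, 4.0])
# client "b"'s update is tampered with after submission
result = aggregate({"a": [1.0, 2.0], "b": [30.0, 40.0]})
print(result)  # only the verified update survives aggregation
```

The design choice is the same one cryptocurrency networks rely on: the ledger is cheap to append to and expensive to rewrite, so a poisoned or tampered update fails verification rather than silently skewing the averaged model.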
Tying tech knots: Scouting America launches AI, cybersecurity merit badges for 1M scouts
Scouting America has introduced two new merit badges focused on artificial intelligence and cybersecurity, marking the organization's latest effort to engage its one million scouts with emerging technologies. The badges represent a strategic push to maintain relevance in an increasingly digital world while addressing critical skills gaps in high-demand tech fields. What you should know: The new badges challenge scouts to explore AI's impact on daily life and develop cybersecurity awareness through hands-on learning experiences. The AI badge requires scouts to examine how artificial intelligence affects everyday activities, learn about deepfakes (digitally manipulated videos or images that can make people...
Oct 13, 2025
AI detects chip trojans with 97% accuracy in University of Missouri study
University of Missouri researchers have developed an AI-powered method to detect hardware trojans in computer chips with 97% accuracy, using large language models to scan chip designs for malicious modifications. The breakthrough addresses a critical vulnerability in global supply chains, where hidden trojans can steal data, compromise security, or sabotage systems across industries from healthcare to defense. Why this matters: Unlike software viruses, hardware trojans cannot be removed once a chip is manufactured and remain undetected until activated by attackers, potentially causing devastating damage to devices, data breaches, or disruption of national defense systems. How it works: The system leverages...
Oct 13, 2025
ChatGPT logs help convict man in deadly LA Palisades Fire case
Federal prosecutors have charged Jonathan Rinderknecht with starting the Palisades Fire in Los Angeles, citing his ChatGPT conversation history as key evidence in what experts call one of the first US cases where AI chatbot logs carry significant evidentiary weight. The case establishes a new category of digital evidence that could reshape how investigators approach criminal cases involving AI-generated content. What you should know: Prosecutors obtained ChatGPT logs showing Rinderknecht generated images of burning cities and sought advice about fire-related liability during his 911 call. Acting US Attorney Bill Essayli revealed that Rinderknecht created multiple versions of images depicting "a...
Oct 9, 2025
Study finds just 250 malicious documents can backdoor AI models
Researchers from Anthropic, the UK AI Security Institute, and the Alan Turing Institute have discovered that large language models can develop backdoor vulnerabilities from as few as 250 malicious documents inserted into their training data. This finding challenges previous assumptions about AI security and suggests that poisoning attacks may be easier to execute on large models than previously believed, as the number of required malicious examples doesn't scale with model size. What you should know: The study tested AI models ranging from 600 million to 13 billion parameters and found they all learned backdoor behaviors after encountering roughly the same...
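The scaling claim is easy to see with back-of-the-envelope arithmetic: a fixed count of roughly 250 poisoned documents becomes a vanishing share of the training corpus as models grow, yet the study found the backdoor was still learned. The corpus sizes below are hypothetical round numbers for illustration, not figures from the paper.

```python
POISONED_DOCS = 250  # roughly the count the study found sufficient

# Hypothetical corpus sizes (document counts), chosen only to show scale
corpus_sizes = {
    "small-model corpus": 10_000_000,
    "large-model corpus": 1_000_000_000,
}

for name, total in corpus_sizes.items():
    share = POISONED_DOCS / total
    print(f"{name}: {share:.7%} of documents poisoned")
```

The point of the exercise: under older assumptions an attacker would need to poison a fixed *fraction* of training data, which is infeasible at web scale; a fixed *count* puts the attack well within reach of anyone who can publish a few hundred documents online.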
Oct 9, 2025
Aqua Security wins AI cybersecurity solution of the year award
Aqua Security's AI protection platform, Aqua Secure AI, has been named "Cybersecurity Solution of the Year for Artificial Intelligence" in the ninth annual CyberSecurity Breakthrough Awards program. The recognition highlights the growing need for specialized security solutions as analysts predict over a billion new AI applications will be deployed by 2028, with most running in containerized environments. What you should know: Aqua Secure AI delivers comprehensive security for AI applications throughout their entire lifecycle within Kubernetes and cloud native environments. The platform embeds AI protection directly at the application layer without requiring code changes or impacting performance. It monitors prompt...
Oct 8, 2025
America rushes AI education while skipping digital literacy basics
A new executive order has thrust artificial intelligence literacy into America's educational spotlight, but this sudden urgency reveals an uncomfortable truth: we've been overlooking fundamental digital skills that students desperately need. In April 2025, an executive order called Advancing Artificial Intelligence Education for American Youth established AI literacy as a national priority for K-12 education. The initiative aims to ensure American students gain early exposure to artificial intelligence, positioning the nation as a global leader in this transformative technology. Schools across the country are now scrambling to implement AI education programs. However, this represents a striking irony in educational policy....
Oct 7, 2025
Maryland offers up to $500K for cyber-AI clinics to train workers
Maryland has launched the Cyber and Artificial Intelligence Clinic Grant program, offering up to $500,000 per recipient to colleges, nonprofits, and training providers to establish cyber and AI clinics. The first-of-its-kind state initiative aims to address thousands of vacant cybersecurity positions annually while providing digital security services to vulnerable community institutions like schools, hospitals, and small businesses. What you should know: The Maryland Department of Labor's new grant program targets both workforce development and community cybersecurity needs through a dual-purpose clinic model. Grant recipients must train at least 100 cyber professionals annually between 2027 and 2029, covering both technical roles...
Oct 7, 2025
A3 hosts robot safety conference in Houston with focus on R15.06 2025 standard
The Association for Advancing Automation (A3) will host the International Robot Safety Conference (IRSC) from November 3-5, 2025, in Houston, Texas, featuring a special focus on the new R15.06 2025 standard. The annual event comes at a time when robotics demand has reached unprecedented levels, making safety standards and risk assessment more critical than ever for manufacturers, integrators, and end users worldwide. What you should know: The conference will spotlight the R15.06 2025 standard, which represents the U.S. national adoption of ISO 10218 for industrial robot safety requirements.
• Over 40 safety professionals and regulatory officials from leading organizations will lead...
Oct 6, 2025
Wanted: Google offers $20K bounty for serious Gemini AI security flaws
Google has launched a new AI Vulnerability Reward Program that pays security researchers up to $20,000 for discovering serious exploits in its Gemini AI systems. The program targets vulnerabilities that could allow attackers to manipulate Gemini into compromising user accounts or extracting sensitive information about the AI's inner workings, moving beyond simple prompt injection tricks to focus on genuinely dangerous security flaws. What you should know: The bounty program specifically rewards researchers who find high-impact AI vulnerabilities rather than harmless pranks or minor glitches. The most severe exploits affecting flagship products like Google Search and the Gemini app can earn...
Oct 6, 2025
Google DeepMind’s CodeMender AI fixes 72 security bugs automatically
Google DeepMind has unveiled CodeMender, an AI agent that automatically fixes software vulnerabilities and proactively rewrites code for better security. Over the past six months, the system has already contributed 72 security fixes to open-source projects, including some with up to 4.5 million lines of code, demonstrating AI's growing capability to address the mounting challenge of software security at scale. How it works: CodeMender leverages Gemini Deep Think models to create an autonomous debugging agent equipped with sophisticated validation tools. The system uses advanced program analysis including static analysis, dynamic analysis, differential testing, fuzzing, and SMT solvers (mathematical problem-solving tools)...
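Of the validation techniques the article names, differential testing is the simplest to sketch. The code below is illustrative only, not DeepMind's tooling: a candidate security patch is accepted only if it behaves identically to the original function across many generated inputs, so a fix cannot silently change program behavior.

```python
import random

def differential_test(original, patched, gen_input, trials=1000):
    """Accept a patch only if it matches the original on all trial inputs."""
    for _ in range(trials):
        x = gen_input()
        if original(x) != patched(x):
            return False, x  # behavioral regression, with a counterexample
    return True, None

# Toy stand-ins for pre- and post-patch code paths: a correct security
# patch must preserve observable behavior on valid inputs.
def original(s):
    return s.upper()

def patched(s):
    return s.upper()

ok, counterexample = differential_test(
    original, patched, lambda: "x" * random.randint(0, 20)
)
print(ok)
```

In a real pipeline this check runs alongside the other tools the article lists (static and dynamic analysis, fuzzing, SMT solvers), each catching a different class of bad patch before a human reviews the fix.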
Oct 3, 2025
Only 46% can spot AI-generated phishing emails, according to survey
A global survey of 18,000 employed adults found that only 46% could correctly identify AI-generated phishing emails, while 54% either believed they were authentic human-written messages or were unsure. The findings reveal a critical vulnerability in cybersecurity awareness as artificial intelligence makes phishing attacks increasingly sophisticated and harder to detect across all age groups. What you should know: The inability to distinguish AI-generated threats spans all generations, with no significant differences in detection rates between age groups.
• Gen Z correctly identified AI phishing attempts 45% of the time, millennials 47%, and both Gen X and baby boomers 46%.
• When shown...
Oct 3, 2025
Business travelers on blast: Employees use AI chatbots to create fake expense receipts
Employees are increasingly using AI chatbots to create fake expense receipts for fraudulent reimbursements, exploiting easily accessible tools like ChatGPT to generate authentic-looking restaurant, hotel, and transportation bills. This emerging form of workplace fraud is becoming harder to detect as AI-generated receipts become more sophisticated, forcing some companies to revert to paper-based systems while others invest in new AI-powered detection tools. The scope of the problem: A recent PYMNTS study found that 68% of organizations encountered at least one fraud attempt through their accounts payable services, including fake employee receipt submissions. The practice involves using free online chatbots to create...
Oct 2, 2025
Long Beach offers free 90-minute AI workshops with cybersecurity focus
Long Beach is launching a comprehensive artificial intelligence education initiative this month, offering residents hands-on training in AI tools while addressing growing concerns about digital privacy and security. The program represents a notable example of how municipalities are responding to the rapid adoption of AI technologies in everyday life. The City of Long Beach's Department of Technology and Innovation (TID) will host five free community workshops throughout October, coinciding with Digital Inclusion Week. These sessions aim to demystify artificial intelligence for everyday users while teaching essential cybersecurity practices—a combination that reflects the dual challenge facing communities as AI becomes more...
Oct 2, 2025
EU’s landmark AI Act forces companies to rethink cybersecurity fundamentals
The European Union's Artificial Intelligence Act represents the world's most comprehensive AI regulation, fundamentally reshaping how organizations must approach AI security and compliance. As the latest provisions took effect on August 2nd, companies operating in or selling to EU markets face unprecedented requirements for AI system governance, particularly for applications classified as "high-risk." This groundbreaking legislation establishes the first mandatory framework for AI safety and ethics, but compliance demands more than checking regulatory boxes. Organizations must now embed security considerations throughout their AI development lifecycle, creating new operational challenges and opportunities across the technology landscape. How the Act rewrites cybersecurity...
Oct 2, 2025
Microsoft study reveals AI can design toxins that bypass biosecurity screening
Microsoft researchers have discovered that artificial intelligence can design toxins that evade biosecurity screening systems used to prevent the misuse of DNA sequences. The team, led by Microsoft's chief scientist Eric Horvitz, successfully used generative AI to bypass protections designed to stop people from purchasing genetic sequences that could create deadly toxins or pathogens, revealing what they call a "zero day" vulnerability in current biosafety measures. What you should know: Microsoft conducted a "red-teaming" exercise to test whether AI could help bioterrorists manufacture harmful proteins by circumventing existing safeguards. The researchers used several generative protein models, including Microsoft's own EvoDiff,...
Oct 2, 2025
Oof, Neon app breach exposes user recordings and data in major privacy failure
Neon, the app that pays users to share audio recordings for AI training, promises to return despite suffering a massive security breach that exposed users' phone numbers, call recordings, and transcripts to anyone who accessed the platform. The breach has raised serious legal concerns about consent violations and potential criminal liability for users who secretly recorded conversations without permission. What you should know: The security vulnerability was so severe that it allowed complete access to all user data with no authentication required. TechCrunch discovered that anyone could access phone numbers, call recordings, and transcripts of any user through the security...
Sep 30, 2025
AI creates frontier careers in security, health and science even as it eliminates traditional jobs
The chief executive of Anthropic, Claude's creator, recently warned that artificial intelligence could automate nearly half of today's work tasks within five years. Meanwhile, J.P. Morgan analysts have raised concerns about a potential "jobless recovery" driven by AI's impact on white-collar positions. These predictions paint a sobering picture of widespread job displacement across industries. However, focusing solely on job losses misses a crucial part of the story. While AI eliminates certain roles, it simultaneously creates entirely new categories of work—what could be called "frontier careers" of the AI era. These emerging fields represent areas where AI advancement generates fresh business...
Sep 30, 2025
Florida man faces 9 felony counts for using AI to create child pornography
A 39-year-old Florida man has been arrested for allegedly using artificial intelligence to create child pornography, marking a concerning development in how emerging technologies can be exploited for illegal purposes. The case highlights the growing challenge law enforcement faces as AI tools become more accessible and sophisticated, enabling new forms of digital exploitation that can destroy evidence and complicate investigations. What happened: The Marion County Sheriff's Office arrested Lucius Martin after receiving reports that he possessed child sexual abuse material on his phone, including AI-altered images of two juvenile victims. A witness discovered original photos from a social media application...
Sep 30, 2025
Study by background check platform finds AI hiring fraud costs companies $50K+ annually
A new study by Checkr, a background check platform, reveals that AI-powered fraud is rapidly outpacing employers' ability to detect deceptive hiring practices, with candidates increasingly using artificial intelligence to fake identities, qualifications, and even interviews. The research shows that nearly two-thirds of managers believe job seekers are now better at AI-enabled deception than companies are at spotting it, creating significant financial risks for organizations. The scope of the problem: Only 19% of surveyed managers expressed confidence that their hiring processes could catch fraudulent applicants, highlighting a dangerous detection gap. 59% of managers suspected candidates of using AI to misrepresent...
Sep 30, 2025
Google Drive adds AI ransomware detection to stop attacks before they spread
Google Drive is adding AI-powered ransomware detection to its desktop application, using a specialized model trained on millions of real-world ransomware samples to identify malicious file modifications before they spread. The feature automatically pauses file syncing when threats are detected, sends alerts to users, and enables file restoration, addressing the growing ransomware threat that saw 5,289 attacks worldwide in 2024—a 15% increase from the previous year. How it works: Google's AI model continuously monitors file changes on Windows and macOS systems to detect signs of ransomware activity like mass encryption or corruption attempts. When the system detects suspicious activity, it...
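One classic signal such a detector can watch for is a burst of files being rewritten to near-random content, since encrypted data has very high byte entropy. The sketch below is a single illustrative heuristic, not Google's model, which is trained on real ransomware samples and uses far richer features.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; well-encrypted data approaches the maximum of 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag file contents whose entropy suggests encryption or corruption."""
    return shannon_entropy(data) > threshold

plain = b"quarterly report draft " * 200        # ordinary text: low entropy
random_like = bytes(range(256)) * 16            # maximal-entropy stand-in
print(looks_encrypted(plain), looks_encrypted(random_like))
```

Entropy alone produces false positives (compressed archives and media files are also high-entropy), which is why a production detector correlates it with behavioral signals like the *rate* of modifications across many files before pausing sync.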
Sep 29, 2025
Cybercriminals use fake copyright notices to swap crypto wallet addresses
Cybercriminals are exploiting copyright fears to distribute malware through fake legal takedown notices, according to new research from Cofense Intelligence, a cybersecurity firm. The Vietnamese threat actor "Lone None" has been sending multilingual copyright violation messages that appear to come from legitimate law firms, but actually deliver malware when victims click on supposed "resolution" links. Why this matters: This campaign represents a sophisticated evolution in social engineering tactics, leveraging people's fear of copyright violations to bypass traditional security measures. Attackers are using AI tools and machine translation to create convincing takedown notices in multiple languages, expanding their global reach. Instead...