
May 2, 2025

AI guardian Anubis thwarts massive DDoS attack on websites

A DDoS attack targeting the ScummVM project demonstrated how specialized tools like Anubis can effectively protect web applications from distributed threats without resorting to IP blocking or geo-restrictions. This case study highlights the evolving sophistication of modern web security approaches that can intelligently filter malicious traffic while maintaining accessibility for legitimate users—a critical capability as bot attacks become increasingly difficult to distinguish from human visitors. The attack pattern: A monitoring system first detected unusual server behavior that gradually escalated into a complete website outage. Initial warnings showed increased load on the MariaDB server, which appeared temporarily but didn't immediately impact...

May 1, 2025

Massachusetts CISO uses legal background to bolster cybersecurity governance

Massachusetts' cybersecurity leader combines legal expertise with innovative approaches to protect state systems from evolving threats. As AI-powered attacks increase in sophistication, the state has implemented collaborative governance structures spanning branches of government and extending to municipalities. This comprehensive strategy demonstrates how public sector cybersecurity is evolving to address both internal risks from employee use of unapproved AI tools and external threats from increasingly accessible attack technologies. The legal advantage: Massachusetts CISO Anthony O'Neill leverages his attorney background to strengthen the state's cybersecurity posture through enhanced research capabilities and regulatory understanding. His legal training enables deeper analysis of data classification...

May 1, 2025

Remote hiring becomes gateway for North Korea’s state-sponsored infiltration

North Korea's sophisticated digital infiltration scheme has evolved from placing individual IT workers in Western companies to a complex operation leveraging AI tools and fake identities. The scheme, which generates millions for the North Korean government, now involves sophisticated identity theft, AI-generated personas, and local facilitators who manage physical logistics—creating unprecedented national security and economic risks as these operatives gain access to sensitive corporate systems while posing as remote tech workers. The big picture: North Korean operatives are systematically infiltrating Western companies through remote work positions, using stolen identities and increasingly sophisticated AI tools to create convincing fake personas. Simon...

May 1, 2025

AI to bolster US critical infrastructure against cyber threats

DARPA's AI Cyber Challenge seeks to revolutionize cybersecurity by harnessing artificial intelligence to find and fix vulnerabilities in complex software systems. The two-year competition, co-sponsored by tech giants Anthropic, Google, Microsoft, and OpenAI, represents a crucial effort to defend critical infrastructure against increasingly sophisticated cyber threats. By automating vulnerability detection in systems with millions of lines of code, the challenge addresses a task beyond human capacity while potentially transforming how we secure vital systems from hospitals to utility networks. The big picture: DARPA has created an immersive experience at the RSAC 2025 Conference to illustrate how AI-powered security tools could...

Apr 30, 2025

NVIDIA data scientist Benika Hall turns fantasy sports into fraud detection

Benika Hall's journey from fantasy sports enthusiast to senior data scientist at NVIDIA showcases how passion projects can evolve into professional expertise with real-world impact. Her story illustrates the increasing integration of AI in financial services and highlights how diverse backgrounds contribute to innovation in tech. Hall's work in graph neural networks and knowledge graphs represents cutting-edge applications of AI that are transforming fraud detection and information retrieval in the financial industry. The big picture: Hall's career path demonstrates how specialized knowledge in AI can transfer across industries, from sports analytics to banking and technology. While pursuing graduate degrees in...

Apr 29, 2025

The rise of deepfake job candidates

The job market faces a new threat as AI-generated applicants compete with human job seekers, creating significant security risks and additional hurdles in an already challenging employment landscape. Cybersecurity experts have identified sophisticated scammers using AI to create fake identities complete with generated headshots, résumés, and websites tailored to specific job openings—sometimes successfully securing positions where they can steal trade secrets or install malware. The big picture: AI-powered job application scams represent a growing cybersecurity threat targeting companies through their hiring processes. Scammers are using artificial intelligence to create convincing fake applicants with custom-tailored résumés and identities designed to match...

Apr 25, 2025

Microsoft enhances Windows search with AI for Copilot Plus PCs

Microsoft has finally rolled out its Recall feature to all Copilot Plus PCs after a 10-month delay focused on addressing security concerns. The controversial feature, which captures screenshots of user activity to create a searchable visual history, arrives alongside AI-powered Windows search improvements and a new Click to Do feature similar to Google's Circle to Search. These developments mark a significant advancement in how users interact with Windows systems but continue to navigate the balance between innovative functionality and privacy considerations. The big picture: Microsoft's Recall officially launches today as an opt-in feature after multiple delays and substantial security overhauls...

Apr 25, 2025

Job search AI deepfake detection: 5 tips for hiring managers

The expanding threat of AI deepfakes has now infiltrated the hiring process, with sophisticated technology enabling bad actors to impersonate job candidates in video interviews. These fraudulent applicants seek to gain employment at companies—particularly tech firms with valuable intellectual property and remote positions—to access sensitive systems, steal data, or install malware. With cybersecurity researchers warning that deepfakes can be created in just over an hour and predictions that one in four job candidates will be fake by 2028, organizations must develop comprehensive strategies to identify these increasingly convincing impostors. 1. Request actions that challenge AI limitations When interviewing remote candidates,...

Apr 25, 2025

RSAC 2025 celebrates the cybersecurity event’s 34th year

The RSAC Conference is celebrating its 34th year as the world's largest cybersecurity gathering, now evolved into the broader RSA Community with year-round activities and memberships. The 2025 event, attracting over 41,000 attendees, will heavily focus on artificial intelligence's dual role as both a powerful security tool and a potential vulnerability source. This intersection of AI and cybersecurity represents a critical frontier where industry leaders are working to establish guardrails and protections while harnessing AI's capabilities. The big picture: The conference will explore the complex relationship between AI systems and cybersecurity through numerous specialized sessions. Experts will tackle questions about...

Apr 25, 2025

The AI startup that is reshaping cybersecurity with an unconventional approach

Torq is disrupting the cybersecurity industry by blending cutting-edge automation technology with bold branding—a stark contrast to the sector's traditionally conservative approach. This security automation company is gaining attention not only for its reported 300% year-over-year growth and enterprise clients like Uber and PepsiCo, but also for its deliberate effort to stand out in an industry often characterized by technical sameness and subdued marketing. The big picture: Following Google's $23 billion acquisition of Wiz, industry experts and investors are closely watching Torq as a potential next breakout success in cybersecurity automation. The company is positioning itself at the intersection of...

Apr 24, 2025

AI safeguards crumble with single prompt across major LLMs

A simple, universal prompt injection technique has compromised virtually every major LLM's safety guardrails, challenging longstanding industry claims about model alignment and security. HiddenLayer's newly discovered "Policy Puppetry" method uses system-style commands to trick AI models into producing harmful content, working successfully across different model architectures, vendors, and training approaches. This revelation exposes critical vulnerabilities in how LLMs interpret instructions and raises urgent questions about the effectiveness of current AI safety mechanisms. The big picture: Researchers at HiddenLayer have discovered a universal prompt injection technique that can bypass security guardrails in nearly every major large language model, regardless of vendor...

Apr 24, 2025

DeepSeek AI poses national security and privacy risks, Congress warns

US congressional findings reveal that DeepSeek, ostensibly just another AI chatbot, represents a significant national security concern due to extensive data collection and ties to China. The special committee's recent report documents alarming privacy violations, security vulnerabilities, and potential intelligence gathering capabilities that extend far beyond typical AI applications, raising urgent questions about international AI regulation and data security. The big picture: A US Congress special committee has labeled Chinese AI tool DeepSeek a "profound threat" to national security after discovering privacy violations, tracking tools, and connections to the Chinese military. The findings come amid growing international scrutiny of DeepSeek,...

Apr 23, 2025

AI-powered authentication framework launched by Descope

Descope's new Agentic Identity Hub addresses a critical infrastructure gap facing enterprises as they integrate AI agents into business workflows. The platform provides secure authentication and authorization between autonomous AI systems and enterprise applications, tackling an emerging challenge many organizations only discover after implementation begins. As machine identities already outnumber human identities by up to 45-to-1 in enterprise environments with projected growth of 150% in the coming year, this infrastructure solution becomes increasingly vital for maintaining security while enabling new AI agent capabilities. The big picture: Descope has launched a specialized identity platform that solves authentication challenges between enterprise applications...
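The article doesn't show Descope's actual API. As a generic illustration of the machine-to-machine identity pattern it describes — a service issuing short-lived credentials to an AI agent, and another service verifying them — here is a minimal HMAC-signed token sketch; the secret, claim names, and TTL are all assumptions, not Descope's implementation:

```python
import base64
import hashlib
import hmac
import json
import time

# Illustrative shared secret; real systems use managed, rotated keys.
SECRET = b"shared-secret"

def issue_token(agent_id: str, ttl: int = 300) -> str:
    """Issue a short-lived, HMAC-signed token identifying a machine agent."""
    payload = json.dumps({"sub": agent_id, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str):
    """Return the token's claims if the signature is valid and unexpired, else None."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None
```

Production systems would use standard OAuth or JWT libraries rather than hand-rolled signing; the point of the sketch is only that every agent carries its own verifiable, expiring identity.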

Apr 23, 2025

AI hallucination bug spreads malware through “slopsquatting”

AI code hallucinations are creating a new cybersecurity threat as criminals exploit the package names coding assistants invent. Research has identified over 205,000 hallucinated package names generated by AI models, particularly smaller open-source ones like CodeLlama and Mistral. These fictional software components give attackers an opening to publish malware under matching names, so that malicious code is installed whenever programmers act on these non-existent package suggestions from their AI assistants. The big picture: AI-generated code hallucinations have evolved into a sophisticated form of supply chain attack called "slopsquatting," where cybercriminals study AI hallucinations and create malware using the same names. When AI models hallucinate non-existent software packages and...
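A minimal mitigation is to vet AI-suggested package names against a curated allowlist before installing anything, rather than piping suggestions straight into an installer. The sketch below is illustrative only — the allowlist and package names are made up, not drawn from the research:

```python
# Pre-install guard against slopsquatting: AI-suggested package names are
# checked against a vetted allowlist; anything unknown is flagged for human
# review instead of being installed. The allowlist here is a toy example.
KNOWN_GOOD = {"requests", "numpy", "pandas", "flask"}

def vet_packages(suggested, allowlist=KNOWN_GOOD):
    """Split AI-suggested package names into (safe, suspect) lists."""
    safe = [p for p in suggested if p.lower() in allowlist]
    suspect = [p for p in suggested if p.lower() not in allowlist]
    return safe, suspect

# "reqeusts-pro" (note the typo-style name) would be flagged, not installed.
safe, suspect = vet_packages(["requests", "reqeusts-pro", "numpy"])
```

In practice teams would back the allowlist with an internal package mirror or registry policy, but the principle is the same: never install a dependency solely because an AI assistant named it.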

Apr 23, 2025

Visa boosts security with AI-powered data retrieval

Visa's enterprise-level deployment of generative AI showcases how financial giants can implement cutting-edge technology while maintaining strict security protocols. By building a protected, multi-model AI environment behind its firewall, Visa has accelerated critical business processes while addressing the natural tensions between innovation, security, and regulatory compliance in the financial sector. The big picture: Visa has developed a secure generative AI system that operates within its company firewall, allowing employees to use AI tools while protecting sensitive financial data. The company's "Secure ChatGPT" runs internally on Microsoft Azure, featuring controlled input and output screening to prevent data leakage. This implementation addresses...
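The article doesn't detail Visa's screening logic. A toy version of input screening might redact obvious sensitive patterns before a prompt leaves the internal network; the patterns and labels below are illustrative assumptions, not Visa's implementation:

```python
import re

# Illustrative input-screening filter: redact card-number-like and email-like
# strings from a prompt before it is sent to an external or shared LLM.
PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt
```

Real deployments pair input screening like this with output screening and audit logging, as the article says Visa's "Secure ChatGPT" does, so that sensitive data can neither leave in a prompt nor return in a response.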

Apr 22, 2025

AI challenges traditional human verification methods, CAPTCHA proving insufficient

The race to build robust digital identity verification systems is intensifying as traditional CAPTCHAs become increasingly ineffective against advanced AI systems. New solutions like Tools for Humanity's Orb and Human.org's digital passports represent emerging approaches to the fundamental internet challenge of proving human identity, though both face significant adoption hurdles before they can replace current verification methods. The big picture: As AI becomes more sophisticated, traditional text-based CAPTCHAs are failing to effectively distinguish between humans and machines online. Once a simple matter of retyping distorted letters, proving human identity online has become increasingly complex and frustrating for users. New verification...

Apr 21, 2025

AI powers DOGE’s new approach to government efficiency while attracting skepticism

The Trump administration is leveraging artificial intelligence through its Department of Government Efficiency (DOGE) to scrutinize federal agencies, raising significant concerns about data privacy and security. Reports indicate DOGE staff are utilizing AI tools to analyze operations, monitor communications, and identify potential budget cuts to support Elon Musk's ambitious goal of trimming $1 trillion from federal spending—all while the administration pursues an "AI-first strategy" that aims to reduce regulatory requirements on AI developers. The big picture: DOGE operatives have reportedly gained unprecedented access to government databases and are downloading information to unauthorized servers while pushing for extensive data consolidation. Why...

Apr 18, 2025

AI voice cloning risks exposed by Consumer Reports, Descript more secure than ElevenLabs

Voice cloning technology has rapidly advanced to a concerning level of realism, requiring only seconds of audio to create convincing replicas of someone's voice. While this technology enables legitimate applications like audiobooks and marketing, it simultaneously creates serious vulnerabilities for fraud and scams. A new Consumer Reports investigation reveals alarming gaps in safeguards across leading voice cloning platforms, highlighting the urgent need for stronger protection mechanisms to prevent malicious exploitation of this increasingly accessible technology. The big picture: Consumer Reports evaluated six major voice cloning tools and found most lack adequate technical safeguards to prevent unauthorized voice cloning. Only two...

Apr 17, 2025

Google AI blocked 3X more advertising fraud in 2024

Google's increased use of AI models to combat fraudulent advertising has achieved unprecedented results, with suspended accounts tripling and deepfake scam ads plummeting by 90% in 2024. This application of large language models (LLMs) represents one of the most broadly beneficial implementations of AI technology to date, showing how advanced models can be deployed to protect users from digital threats while maintaining advertising ecosystems. The big picture: Google deployed over 50 enhanced LLMs to enforce its advertising policies in 2024, with AI now handling 97% of ad enforcement actions. These models can make determinations with less data than previous systems,...

Apr 15, 2025

Kernel.org adds proof-of-work barriers to block AI crawlers despite open-source values

Kernel.org joins the growing trend of implementing proof-of-work systems to combat AI crawler bots, highlighting the increasing tension between open-source resources and AI data collection practices. This defensive measure represents a significant shift for the Linux kernel community, which has traditionally prioritized open access, suggesting that AI crawling has reached a disruptive threshold that outweighs the philosophical preference for unrestricted access. The big picture: Kernel.org is implementing proof-of-work proxies on its code repositories and mailing lists to protect against AI crawler bots. The system will be deployed on lore.kernel.org and git.kernel.org within approximately a week. This technical countermeasure requires visiting...
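The article doesn't spell out kernel.org's exact scheme, but hash-based proof-of-work generally follows the pattern sketched below: the client must find a nonce whose hash over the server's challenge has a required number of leading zeros — cheap for one human visitor's browser, expensive for a crawler hitting thousands of pages. Function names and the difficulty parameter are illustrative assumptions:

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int = 4) -> int:
    """Client side: brute-force a nonce whose SHA-256(challenge + nonce)
    hex digest starts with `difficulty` zeros."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Server side: a single hash suffices to check the submitted nonce."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: solving takes on the order of 16^difficulty hash attempts, while verification is one hash, so the server can impose per-request cost on crawlers at negligible cost to itself.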

Apr 14, 2025

AI security tools finally shift from gimmicks to useful automation, says analyst

Generative AI in cybersecurity is moving beyond basic chatbots and content creation toward more meaningful applications that actually solve security professionals' pain points. Speaking recently at The-C2, a conference in London, Forrester analyst Allie Mellen highlighted how the initial wave of AI security tools often missed the mark, while newer AI agent technologies are beginning to deliver tangible value through task automation and simplified workflows. This evolution comes amid growing concerns about supply chain resilience and the persistent importance of basic security hygiene. The big picture: After two years of generative AI in security tools, the industry is finally shifting...

Apr 13, 2025

Kong upgrades AI gateway with hallucination prevention and multilingual privacy tools

Kong Inc. is expanding its API gateway capabilities to tackle the growing security and governance challenges of AI systems. The company's updated Kong AI Gateway introduces new features to prevent hallucinations in large language models and protect personal information across multiple languages and AI providers. These enhancements reflect how critical API management has become in creating secure, reliable connections between AI models and the data resources they depend on. The big picture: Kong's gateway technology now provides specialized AI security and management features designed to regulate how AI applications connect to various data resources and services. The updated Kong AI...

Apr 11, 2025

Edge AI market set to grow 18x to $182B by 2032 in industrial settings

Edge AI is emerging as a transformative force in industrial applications where real-time decision making is critical, operating far from the spotlight of consumer AI applications. This technology—processing data locally rather than in the cloud—is revolutionizing high-stakes environments from oil rigs to mining operations where latency isn't just inconvenient but potentially dangerous. With the edge AI market projected to grow from $10.11 billion in 2023 to nearly $182 billion by 2032, this lesser-known branch of artificial intelligence is quietly addressing challenges in environments where connectivity, speed, and reliability are non-negotiable requirements. The big picture: Edge AI processes data locally at...

Apr 11, 2025

Anthropic finds AI models gaining college-kid-level cybersecurity and bioweapon skills

Anthropic's frontier AI red team reveals concerning advances in cybersecurity and biological capabilities, highlighting how AI models are rapidly acquiring skills that could pose national security risks. These early warning signs emerge from a year-long assessment across four model releases, providing crucial insights into both current limitations and future threats as AI continues to develop potentially dangerous dual-use capabilities. The big picture: Anthropic's assessment finds that while frontier AI models don't yet pose substantial national security risks, they're displaying alarming progress in dual-use capabilities that warrant close monitoring. Current models are approaching undergraduate-level skills in cybersecurity and demonstrate expert-level knowledge...
