Feb 13, 2025

AI-powered tech in California higher education raises concerns for faculty, students

The California State University (CSU) system recently announced plans to become the nation's first AI-powered public university system through partnerships with major tech companies including Alphabet, Nvidia, and OpenAI. This initiative, backed by the governor's office, aims to integrate AI technology across all 23 CSU campuses, despite facing significant budget cuts and staff layoffs. The core initiative: The CSU system plans to implement AI technologies across its campuses for training, teaching, and learning purposes, though specific implementation details remain unclear. A new AI Workforce Acceleration Board, composed exclusively of technology corporation officers, will oversee the creation of an AI-skilled graduate...

Feb 12, 2025

Law firm brings the gavel down on AI usage after widespread staff adoption

Generative AI tools like ChatGPT and DeepSeek have seen rapid adoption in professional settings, raising concerns about data security and proper usage protocols. Hill Dickinson, a major international law firm with over 1,000 UK employees, has recently implemented restrictions on AI tool access after detecting extensive usage among its staff. Key developments: Hill Dickinson's internal monitoring revealed substantial AI tool usage, with over 32,000 hits to ChatGPT and 3,000 hits to DeepSeek within a seven-day period in early 2025. The firm detected more than 50,000 hits to Grammarly, a writing assistance tool. Much of the detected usage was found to...

Feb 12, 2025

Chrome may deploy AI for detecting and replacing leaked passwords

The rise of data breaches has made password security increasingly critical for internet users, with browsers taking a more active role in protecting credentials. Google Chrome is developing a new feature that automatically helps users replace compromised passwords, building on its existing security capabilities. Key development: Google Chrome is testing an "Automated Password Change" feature that will not only detect compromised passwords but also generate and implement secure replacements automatically. The feature, discovered in early Chrome builds, will trigger when users log into sites with passwords found in known data breaches. Google's Password Manager will encrypt and store the newly...
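
Google hasn't detailed how the new feature's breach check works under the hood; its existing Password Checkup service uses a privacy-preserving hashing protocol so plaintext passwords never leave the device. As a rough sketch of the general technique, here is the simpler k-anonymity range query popularized by Have I Been Pwned, in which only the first five characters of a password's SHA-1 hash are ever sent to the server:

```python
import hashlib
import urllib.request

def password_breach_count(password: str) -> int:
    """Check a password against the Have I Been Pwned corpus using
    k-anonymity: only the first 5 hex chars of the SHA-1 hash leave
    the machine; matching suffixes are compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # times this password appears in known breaches
    return 0

if __name__ == "__main__":
    # A famously breached password; prints a large count.
    print(password_breach_count("password123"))
```

Because the match is made locally against the returned suffixes, the server never learns which password was actually checked.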

Feb 11, 2025

NY bans DeepSeek AI from government devices

The recent rise of DeepSeek, a China-based AI company, has sparked significant security concerns among U.S. government officials. New York state has taken decisive action against the AI application, joining a growing movement to restrict Chinese-developed AI tools on government devices. Key development: New York Governor Kathy Hochul has implemented a ban on DeepSeek across state government devices and networks, citing potential surveillance and censorship risks. The ban specifically prohibits state employees from downloading the application on ITS-managed devices and networks. Officials expressed particular concern about DeepSeek's potential to harvest user data and steal technology secrets. The move aligns with...

Feb 11, 2025

AI-generated fake security reports frustrate, overwhelm open-source projects

The rise of artificial intelligence has created new challenges for open-source software development, with project maintainers increasingly struggling against a flood of AI-generated security reports and code contributions. A Google survey reveals that while 75% of programmers use AI, nearly 40% have little to no trust in these tools, highlighting growing concerns in the developer community. Current landscape: AI-powered attacks are undermining open-source projects through fake security reports, non-functional patches, and spam contributions. Linux kernel maintainer Greg Kroah-Hartman notes that Common Vulnerabilities and Exposures (CVEs) are being abused by security developers padding their resumes. The National Vulnerability Database (NVD), which...

Feb 11, 2025

Google developing AI to detect user age on YouTube in effort to fend off predators, limit exposure to inappropriate content

YouTube is implementing machine learning to verify user ages, addressing concerns about child safety and content access on the platform. This new system, announced as part of YouTube's 2025 initiatives, will analyze user behavior patterns to determine whether viewers are children or adults, regardless of the age they claim to be. The current challenge: YouTube faces ongoing issues with users misrepresenting their age, either to access restricted content or to influence the platform's algorithm, while also dealing with concerns about child predators and inappropriate content exposure. The platform has previously encountered scandals involving its algorithm pushing questionable material to users...

Feb 9, 2025

It’s time to build apps and security protocols for a new type of user: Autonomous agents

The rise of AI agents like ChatGPT Operator and coding tools such as Devin and Lovable is creating a need for businesses to design secure and efficient experiences specifically for autonomous agents interacting with their applications. The new agent paradigm: AI agents are increasingly acting on behalf of users to navigate interfaces, make requests, and execute tasks, requiring a fundamental shift in how applications handle authentication and authorization. Applications must provide secure methods for agents to authenticate and act on users' behalf. Users need transparent control over agent permissions and the ability to revoke access. Service providers require robust systems...
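
The article doesn't prescribe a mechanism, but the requirements it lists (agent authentication, scoped delegation, user-revocable access) map naturally onto OAuth-style delegated tokens, where the agent holds a limited credential rather than the user's password. A minimal sketch of that pattern, with hypothetical names rather than any particular vendor's API:

```python
import secrets
from dataclasses import dataclass

@dataclass
class AgentGrant:
    """A delegated credential an application issues to an AI agent."""
    token: str
    user_id: str
    scopes: frozenset[str]  # what the agent may do on the user's behalf
    revoked: bool = False

class AgentAuthStore:
    def __init__(self) -> None:
        self._grants: dict[str, AgentGrant] = {}

    def issue(self, user_id: str, scopes: set[str]) -> str:
        """User explicitly delegates a limited set of actions to an agent."""
        token = secrets.token_urlsafe(32)
        self._grants[token] = AgentGrant(token, user_id, frozenset(scopes))
        return token

    def authorize(self, token: str, scope: str) -> str:
        """Called on every agent request; enforces scope and revocation."""
        grant = self._grants.get(token)
        if grant is None or grant.revoked:
            raise PermissionError("unknown or revoked agent token")
        if scope not in grant.scopes:
            raise PermissionError(f"agent not granted scope {scope!r}")
        return grant.user_id

    def revoke(self, token: str) -> None:
        """User withdraws the agent's access at any time."""
        if token in self._grants:
            self._grants[token].revoked = True

# Usage: the user grants read-only access, the agent acts, the user revokes.
store = AgentAuthStore()
tok = store.issue("alice", {"orders:read"})
assert store.authorize(tok, "orders:read") == "alice"
store.revoke(tok)
```

The key design point is that the agent never sees the user's credentials, and every capability it has can be inspected and withdrawn independently.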

Feb 9, 2025

DeepSeek security vulnerabilities offer glimpse of true problems lurking in the agentic age

Chinese AI company DeepSeek's R1 model has sparked concerns about cybersecurity vulnerabilities, particularly given its open-source nature and potential risks when deployed in corporate environments. The fundamental issue: DeepSeek's R1 model, while praised for its advanced capabilities and cost-effectiveness, has raised significant security concerns because it ships with fewer built-in protections against misuse than rival models. Security firm Palo Alto Networks identified three specific vulnerabilities that make R1 susceptible to "jailbreaking" attacks. The model's mobile app has gained widespread popularity, reaching top rankings in the Apple App Store. The open-source nature of R1 means anyone can download and run it locally on a consumer...

Feb 8, 2025

Elon Musk’s DOGE team builds AI chatbot to scrutinize government spending

Elon Musk's Department of Government Efficiency (DOGE) is developing an AI chatbot for the US General Services Administration to analyze federal spending and contracts, raising significant concerns among IT and cybersecurity professionals. The core initiative: DOGE is creating a custom AI chatbot for the GSA, which oversees federal office buildings and IT infrastructure, to analyze government contracts and procurement data. The project aims to provide insights into federal spending patterns and allocation. Thomas Shedd, head of Technology Transformation Services, characterized the effort as a continuation of existing attempts to track government spending. The initiative is part of a broader "AI-first...

Feb 8, 2025

Microsoft Edge to start blocking scareware with new AI feature

Microsoft has introduced an AI-powered Scareware Blocker for its Edge browser, designed to protect Windows PC users from emerging tech support scams. What's new: Microsoft Edge's latest security feature uses machine learning to identify and block scareware attacks, representing a significant advancement in browser-based security protection. The feature is currently in preview mode and can be activated through Edge's Privacy settings. When activated, the blocker automatically exits full-screen mode and displays warning messages about suspicious sites. Users can report suspicious sites and share screenshots to help protect others. The machine learning model operates locally on users' devices without sending data...

Feb 7, 2025

Google Photos adds crucial AI safeguard to enhance user privacy

Google Photos is implementing invisible digital watermarks using DeepMind's SynthID technology to identify AI-modified images, particularly those edited with its Reimagine tool. Key innovation: Google's SynthID technology embeds invisible watermarks into images edited with the Reimagine AI tool, making it possible to detect AI-generated modifications while preserving image quality. The feature works in conjunction with Google Photos' Magic Editor and Reimagine tools, currently available on Pixel 9 series devices. Users can verify AI modifications through the "About this image" information, which displays an "AI info" section. Circle to Search functionality allows users to examine suspicious photos for AI-generated elements. Technical...
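
SynthID itself is proprietary and engineered to survive cropping, compression, and filtering, so the sketch below is only a toy illustration of the underlying idea of imperceptible watermarking: hiding a verifiable signal in image data without visibly changing it. A naive least-significant-bit scheme, which real-world watermarks improve on considerably:

```python
def embed_watermark(pixels: list[int], message: bytes) -> list[int]:
    """Hide `message` in the least significant bits of 8-bit pixel values.
    Changing only the LSB shifts each pixel by at most 1/255, which is
    imperceptible -- the core idea behind invisible watermarking."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(pixels: list[int], length: int) -> bytes:
    """Recover `length` bytes previously embedded by embed_watermark."""
    data = bytearray()
    for byte_idx in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[byte_idx * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

pixels = list(range(256)) * 10  # stand-in for grayscale image data
marked = embed_watermark(pixels, b"AI-edited")
assert extract_watermark(marked, 9) == b"AI-edited"
```

Unlike this LSB toy, which a single re-encode would destroy, SynthID embeds its signal in a learned, transformation-robust way.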

Feb 7, 2025

AI monitors risks to undersea infrastructure in new Windward tool

Maritime AI company Windward has launched a new AI-powered solution designed to protect critical undersea infrastructure through advanced monitoring and threat detection capabilities. The innovation at hand: Windward's Critical Maritime Infrastructure Protection solution combines AI-driven behavioral detection with cable mapping and predictive analytics to safeguard subsea cables, pipelines, and offshore rigs. The system provides real-time alerts for suspicious maritime activity near critical infrastructure. Integration with Dataminr enhances threat detection capabilities through analysis of publicly available data. The solution includes MAI Expert, a virtual subject matter expert that delivers in-depth risk analysis. Current threat landscape: Recent incidents of cable damage in...

Feb 7, 2025

Musk’s DOGE team is allegedly feeding sensitive government data into AI systems

Staff members of Elon Musk's Department of Government Efficiency (DOGE) are using artificial intelligence tools to analyze sensitive Education Department data, raising concerns about data security and proper AI implementation in government operations. Operational overview: Young staffers at DOGE are leveraging AI software through Microsoft Azure to examine Education Department programs and spending patterns. The initiative involves analyzing sensitive data, including grant manager information and internal financial records. DOGE employees are accessing personal information of millions of federal student loan recipients. The exact nature and capabilities of the AI tool being used remain unclear. Official response: Education Department leadership...

Feb 6, 2025

AI safety research gets $40M offering from Open Philanthropy

Open Philanthropy has announced a $40 million grant initiative for technical AI safety research, with potential for additional funding based on application quality. Program scope and structure: The initiative spans 21 research areas across five main categories, focusing on critical aspects of AI safety and alignment. The research areas include adversarial machine learning, model transparency, theoretical studies, and alternative approaches to mitigating AI risks. Applications are being accepted through April 15, 2025, beginning with a 300-word expression of interest. The program is structured to accommodate various funding needs, from basic research expenses to establishing new research organizations. Key research priorities:...

Feb 6, 2025

Researchers find embedded code in DeepSeek linking it to Chinese state telco

DeepSeek, a Chinese AI company, has embedded code in its website that could potentially transmit user login data to China Mobile, a state-owned telecom company banned from US operations. Key findings: Security researchers discovered concerning code within DeepSeek's web login interface that creates a possible data pipeline to China Mobile. The code, first identified by Feroot Security and later verified by independent experts, appears integrated into the account creation and authentication system. While testing in North America showed no active data transfers, researchers cannot definitively rule out data transmission for users in other regions. The investigation focused solely on DeepSeek's...
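
Feroot's analysis involved deobfuscating the page's JavaScript, which goes well beyond this, but a first-pass audit of which third-party hosts a login page pulls resources from can be scripted in a few lines. A hedged sketch (the URL is a placeholder, and dynamically loaded scripts would not be caught by this static scan):

```python
import re
import urllib.request
from urllib.parse import urlparse

def third_party_hosts(page_url: str) -> set[str]:
    """Fetch a page and list external hosts referenced by src/href attributes."""
    with urllib.request.urlopen(page_url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    own_host = urlparse(page_url).netloc
    hosts = set()
    for match in re.finditer(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(match.group(1)).netloc
        if host and host != own_host:
            hosts.add(host)
    return hosts

# Flag any host resolving to unexpected infrastructure for manual review.
for host in sorted(third_party_hosts("https://example.com")):
    print(host)
```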

Feb 5, 2025

AI grandma Daisy battles scammers with surprising results

Two months ago, British telecommunications provider O2 announced Daisy, an AI-powered chatbot designed to waste scammers' time. O2 is now beginning to share the results of its chatbot in action. The innovation: Daisy specifically targets phone scammers by keeping them engaged in pointless conversations. The bot presents herself as an elderly grandmother and expertly deploys tactics like searching for glasses, discussing recipes, and reminiscing about the past. Conversations can last up to 40 minutes, effectively preventing scammers from targeting actual potential victims during this time. The system was trained on real scam call data, enabling it to...

Feb 5, 2025

India bans ChatGPT and DeepSeek for finance ministry staff

India's finance ministry has issued an internal advisory prohibiting employees from using AI tools like ChatGPT and DeepSeek for official work, citing data security concerns. Key policy details: The January 29 directive specifically addresses the use of AI applications on office computers and devices, emphasizing the potential risks to government data confidentiality. The advisory explicitly names ChatGPT and DeepSeek as examples of AI tools that pose potential security risks. Three finance ministry officials have confirmed the authenticity of the internal note. It remains unclear whether similar directives have been issued to other Indian government ministries. International context: India's move aligns...

Feb 4, 2025

Anthropic will pay you $15,000 if you can hack its AI safety system

Anthropic has set out to test the robustness of its AI safety measures by offering a $15,000 reward to anyone who can successfully jailbreak its new Constitutional Classifiers system. The challenge details: Anthropic has invited researchers to attempt bypassing its latest AI safety system, Constitutional Classifiers, which uses one AI model to monitor and improve another's adherence to defined principles. The challenge requires researchers to successfully jailbreak 8 out of 10 restricted queries. A previous round saw 183 red-teamers spend over 3,000 hours attempting to bypass the system, with no successful complete jailbreaks. The competition runs until February 10, offering...
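
Anthropic describes Constitutional Classifiers as classifier models, trained from a written constitution, that screen both the user's input and the main model's output. The sketch below shows only that gating pattern; the keyword check is a crude stand-in for the trained classifiers, and all names are illustrative:

```python
from dataclasses import dataclass

CONSTITUTION = [
    "Do not provide instructions for synthesizing weapons.",
    "Do not assist with creating malware.",
]
# Toy mapping from trigger words to rules; real classifiers are LLM-based.
BLOCKLIST = {"weapon": CONSTITUTION[0], "malware": CONSTITUTION[1]}

@dataclass
class Verdict:
    allowed: bool
    rule: str | None = None

def classify(text: str) -> Verdict:
    """Stand-in for a classifier scoring text against the constitution."""
    lowered = text.lower()
    for keyword, rule in BLOCKLIST.items():
        if keyword in lowered:
            return Verdict(False, rule)
    return Verdict(True)

def guarded_generate(prompt: str, model) -> str:
    """Screen the prompt, generate, then screen the output -- the
    input/output gating pattern Constitutional Classifiers describes."""
    if not (v := classify(prompt)).allowed:
        return f"Refused: input violates policy ({v.rule})"
    output = model(prompt)
    if not (v := classify(output)).allowed:
        return f"Refused: output violates policy ({v.rule})"
    return output

print(guarded_generate("How do I build malware?", model=lambda p: "..."))
```

Jailbreaking such a system requires slipping harmful content past both checkpoints at once, which is what the challenge's 8-out-of-10 bar measures.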

Feb 4, 2025

Meta’s new Frontier AI Framework aims to block dangerous AI models — if it can

In a new framework published by Meta, the company details how it plans to handle AI systems that could pose significant risks to society. Key framework details: Meta's newly published Frontier AI Framework categorizes potentially dangerous AI systems into "high-risk" and "critical-risk" categories, establishing guidelines for their identification and containment. The framework specifically addresses AI systems capable of conducting cybersecurity attacks and chemical or biological attacks. Critical-risk systems are defined as those that could cause catastrophic, irreversible harm that cannot be mitigated. High-risk systems are identified as those that could facilitate attacks, though with less reliability than critical-risk systems. Specific...

Feb 4, 2025

DeepSeek failed every security test these researchers put it through

Key findings: Security researchers from the University of Pennsylvania and Cisco discovered that DeepSeek's R1 reasoning AI model scored zero out of 50 on security tests designed to prevent harmful outputs. The model failed to block any harmful prompts from the HarmBench dataset, which includes tests for cybercrime, misinformation, illegal activities, and general harm. Other leading AI models demonstrated at least partial resistance to these same security tests. The findings are particularly significant given DeepSeek's claims that its R1 model can compete with OpenAI's state-of-the-art o1 model at a fraction of the cost. Security vulnerabilities: Additional security concerns have emerged...
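
HarmBench pairs harmful-behavior prompts with an automated judge; as a rough illustration of what such a harness computes, here is a toy version that tallies how often a model declines a prompt set (the substring refusal check is a naive stand-in for HarmBench's trained classifier):

```python
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def refusal_rate(model, prompts: list[str]) -> float:
    """Fraction of harmful prompts the model declines. DeepSeek R1's
    reported result corresponds to a rate of 0.0 across 50 HarmBench
    prompts, i.e. every attack succeeded."""
    blocked = 0
    for prompt in prompts:
        reply = model(prompt).lower()
        if any(marker in reply for marker in REFUSAL_MARKERS):
            blocked += 1
    return blocked / len(prompts)

# Toy model that refuses everything, for demonstration.
always_refuses = lambda p: "I can't help with that."
print(refusal_rate(always_refuses, ["test prompt"] * 50))  # 1.0
```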

Feb 3, 2025

Dutch regulator to investigate DeepSeek over privacy concerns

A European privacy regulator announced plans to investigate Chinese AI firm DeepSeek's data collection practices, marking another challenge for the company in the European Union. Key development: The Netherlands' Data Protection Authority (AP) has launched an investigation into DeepSeek's data collection practices while warning Dutch citizens about using the company's software. AP Chairman Aleid Wolfsen expressed serious concerns about DeepSeek's privacy policies and its handling of personal information. The watchdog emphasized that European citizens' personal data can only be stored abroad under specific conditions that DeepSeek must follow. European regulatory landscape: Multiple European nations are taking action regarding DeepSeek's data...

Feb 3, 2025

NASA blocks China’s DeepSeek AI over security concerns

NASA has banned employees from using China's DeepSeek AI technology and blocked access to the platform, citing security and privacy concerns related to the company's servers operating outside the United States. Policy implementation details: NASA's chief artificial intelligence officer issued a memo outlining the new restrictions to all agency personnel. NASA employees are prohibited from sharing or uploading agency data to DeepSeek products or services. Access to DeepSeek is blocked on NASA-managed devices and network connections. The Security Operations Center has implemented technical measures to enforce these restrictions. Market impact and competitive context: DeepSeek's rapid rise in popularity has created...

Feb 3, 2025

METR publishes cybersecurity assessment of leading AI models from Anthropic and OpenAI

METR (Model Evaluation and Threat Research) has completed preliminary evaluations of two advanced AI models: Anthropic's Claude 3.5 Sonnet (October 2024 release) and OpenAI's pre-deployment checkpoint of o1, finding no immediate evidence of dangerous capabilities in either system. Key findings from autonomous risk evaluation: The evaluation consisted of 77 tasks designed to assess the models' capabilities in areas like cyberattacks, AI R&D, and autonomous replication. Claude 3.5 Sonnet performed at a level comparable to what human testers could achieve in about 1 hour. The baseline o1 agent initially showed lower performance but improved to match the 2-hour human baseline...

Feb 3, 2025

Millions of people have downloaded DeepSeek — why deleting it may be next

The Chinese AI chatbot DeepSeek briefly claimed the top spot as the most downloaded free app, prompting swift security concerns and governmental actions. National security implications: The rapid rise of DeepSeek has triggered immediate responses from multiple government agencies and cybersecurity experts due to Chinese data sharing laws. NASA, the U.S. Navy, Texas state government, Taiwan, and Italy have implemented bans on the application. Cybersecurity researchers have identified vulnerabilities in the app that could lead to data breaches. The app's data collection capabilities exceed typical search engine tracking, posing heightened privacy risks. Privacy concerns: DeepSeek's privacy policy offers minimal protection...
