
Jul 27, 2024

NIST Releases Guidance to Improve AI Safety, Security, and Trust

The U.S. Department of Commerce announced new guidance and software tools from the National Institute of Standards and Technology (NIST) to help improve the safety, security, and trustworthiness of artificial intelligence (AI) systems, marking 270 days since President Biden's Executive Order on AI.

Key NIST releases: NIST released three final guidance documents, previously issued in draft form for public comment in April, as well as two new products appearing for the first time:
- A draft guidance document from the U.S. AI Safety Institute intended to help mitigate risks stemming from generative AI and dual-use foundation models
- A software package called...

Jul 27, 2024

Ars Technica to Host AI and Infrastructure Events in San Jose and DC

Ars Technica is hosting two events this fall focused on the future of AI and data infrastructure, giving attendees opportunities to engage with experts and network with fellow tech enthusiasts.

September event in San Jose: The first event, titled "Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next," will take place on September 18 at the Computer History Museum in San Jose, California, and will explore the implications of generative AI for data management:
- Topics will include addressing diverse workloads, identifying vulnerabilities using AI tools, and navigating the environmental impacts and responsibilities of infrastructure.
- The event will feature...

Jul 25, 2024

CrowdStrike Tech Glitch Highlights Risks and Importance of Human Touch in AI-Driven Workplaces

The widespread use of artificial intelligence and robots in the workplace raises concerns about job security and work meaningfulness, as highlighted by a recent tech glitch that caused massive office disruptions worldwide.

Key Takeaways: The incident underscores the critical role of IT professionals in keeping businesses running smoothly and the potential consequences of over-reliance on technology:
- A routine software update by CrowdStrike, a cybersecurity company, led to computer outages in various settings, rendering many white-collar workers unproductive without access to their systems.
- IT staff came to the rescue, resolving issues and helping colleagues and customers, demonstrating the importance of their...

Jul 23, 2024

AI Cybersecurity: Navigating the Double-Edged Sword of Opportunity and Risk

The rapid growth of AI presents both opportunities and risks for cybersecurity, requiring urgent action to protect AI systems from being exploited by malicious actors.

The double-edged sword of AI in cybersecurity: While AI enhances threat detection and defense mechanisms, it can also be leveraged by attackers, presenting new challenges:
- The dual-use nature of AI means that as we harness it for protection, we must also safeguard the AI itself from being exploited.
- There are still gaps in understanding exactly how AI systems work and the potential risks they introduce.

Emerging threats and new attack vectors: The integration of AI...

Jul 23, 2024

As Misinformation Persists, Social Media Companies Are Getting Better At Spotting Deep Fakes

AI-generated deepfakes have not become the widespread misinformation catastrophe experts feared, as media outlets and tech platforms have improved at rapidly detecting and debunking AI-manipulated content.

Effective fact-checking responses: Mainstream news organizations and fact-checking websites have demonstrated their ability to quickly identify and refute AI-generated misinformation:
- In the aftermath of the attempted assassination of Trump, numerous reputable media outlets, such as Reuters, The AP, Politico, BBC, and CNN, swiftly published fact checks debunking a doctored image depicting smiling Secret Service agents assisting Trump after the shooting.
- Fact-checking websites like FactCheck.org, Verify, and PolitiFact also promptly disproved the manipulated photo using...

Jul 21, 2024

Top Tech Giants Unite to Establish Unified AI Security Standards

A coalition of top tech companies has formed to develop unified cybersecurity and safety standards for artificial intelligence (AI) tools, aiming to ensure consistent and rigorous security practices across the industry.

Key objectives: The Coalition for Secure AI, announced by Google during the Aspen Security Forum, will focus on establishing industry-wide standards and best practices for AI security:
- The coalition's initial priorities include developing standards for software supply chain security in AI systems, compiling resources to assess AI risks, and creating a framework to guide the most effective use cases for AI in cybersecurity.
- By working together, the participating companies...

Jul 19, 2024

AI Arms Race Intensifies as Endpoints Emerge as Critical Battleground for Cybersecurity

The AI arms race between cybersecurity firms and attackers is intensifying, with endpoints emerging as a critical battleground for AI companies' valuable intellectual property, financials, and future R&D plans.

Malware-free attacks on the rise: Adversaries are increasingly using legitimate tools and fileless execution techniques to breach endpoints undetected, making AI companies a prime target:
- CrowdStrike reports that 71% of detections were malware-free, and the use of remote monitoring and management tools for malware-free attacks skyrocketed by 312% year-over-year in 2023.
- Attackers exploit gaps such as outdated endpoint patches, lack of multi-factor authentication, and privilege escalation to launch sophisticated intrusion attempts....

Jul 18, 2024

AI Startup Tackles Deepfake Threat Ahead of US Elections

The AI startup ElevenLabs is partnering with a deepfake detection company to address concerns about the potential misuse of its voice cloning technology, particularly in the context of the upcoming US elections.

Key details of the partnership: ElevenLabs is collaborating with Reality Defender, a US-based company specializing in deepfake detection for governments, officials, and enterprises:
- This partnership is part of ElevenLabs' efforts to enhance safety measures on its platform and prevent the misuse of its AI-powered voice cloning technology.
- The move comes after researchers raised concerns earlier this year about ElevenLabs' technology being used to create deepfake audio of US...

Jul 18, 2024

Scammers Steal Identities with Deepfakes: How to Spot AI-Generated Deception

AI-generated deepfakes pose a growing threat as scammers leverage advanced AI tools to deceive people, but there are ways to spot the telltale signs of manipulation.

Key Takeaways: The increasing realism of AI-generated voice cloning and video manipulation makes it harder to distinguish deepfakes from authentic content, enabling scammers to misuse the likenesses of trusted figures to promote fraudulent products.

Scammers targeting doctors: British TV doctors have had their identities stolen to sell dubious health products they do not actually endorse, with the deepfake videos quickly reappearing even after being reported and removed from social media platforms.

Industry response and...

Jul 18, 2024

Senators Demand Answers After AT&T Data Breach

AT&T revealed that customer call and text records were illegally downloaded from a third-party cloud platform called Snowflake, raising questions about the telecom giant's data practices and the security of sensitive user information.

Senators demand answers from AT&T: In the wake of the breach, US Senators Richard Blumenthal and Josh Hawley sent a letter to AT&T CEO John Stankey, asking why the company retained months of detailed customer communication records and uploaded them to a third-party analytics platform. The senators sought clarification on AT&T's policy regarding the retention and use of such sensitive information, including specific timelines.

AT&T's initial disclosures...

Jul 18, 2024

Wiz Research Uncovers Critical Flaws in SAP AI, Risking Customer Data and Cloud Security

Wiz Research uncovers critical vulnerabilities in SAP AI Core, potentially exposing customer data and cloud environments to malicious actors. The research reveals that executing arbitrary code through AI training procedures allowed lateral movement and service takeover, granting access to sensitive customer files and cloud credentials.

Key findings: Wiz researchers gained privileged access to SAP AI Core's internal assets by exploiting vulnerabilities, enabling them to:
- Read and modify Docker images on SAP's internal container registry and Google Container Registry
- Access and modify artifacts on SAP's internal Artifactory server
- Obtain cluster administrator privileges on SAP AI Core's Kubernetes cluster
- Retrieve customers' cloud...

Jul 16, 2024

Hacker Group Leaks Disney Data, Igniting More AI Debate in Hollywood

Hacker group NullBulge claims responsibility for Disney data leak: NullBulge, a self-proclaimed hacktivist group, says it breached thousands of Disney's internal messaging channels and leaked approximately 1.2 terabytes of data, including computer code, information about unreleased projects, and conversations about marketing, studio technology, and job applicants.

Leak motivated by concerns over Disney's AI practices and treatment of artists: According to NullBulge, the leak was prompted by Disney's handling of "artist contracts, its approach to AI, and its pretty blatant disregard for the consumer." The group, which claims to be based out of Russia, says it gained access to Disney's system...

Jul 16, 2024

Hackers Leak Disney’s Slack Data Over AI Concerns, Exposing Upcoming Projects

A hacker group claims responsibility for leaking Disney's internal Slack data, citing concerns over the company's handling of artist contracts and use of AI. The leak, which Disney is investigating, includes login credentials, code, images, and information about unreleased projects.

Extent of the leak: The anonymous "Nullbulge" group says it obtained 1.1 terabytes of files and chat messages from nearly 10,000 Disney Slack channels:
- The leaked data, dating back to at least 2019, contains internal conversations about software development, recruitment, website maintenance, and employee programs.
- Details about upcoming gaming collaborations and unannounced video game sequels have started emerging online from...

Jul 15, 2024

Kindo Raises $20.6M, Acquires WhiteRabbitNeo to Secure Enterprise AI Adoption

Kindo, an enterprise AI security platform, has raised $20.6 million in funding and acquired the open-source security project WhiteRabbitNeo, signaling the growing importance of secure AI adoption in the business world.

Key details of Kindo's funding and acquisition: The Venice Beach-based startup's latest round brings its total funding to $27.6 million, enabling the company to accelerate product development, expand sales and marketing efforts, and grow its team:
- Drive Capital led the round, with participation from existing investors RRE Ventures, Marlinspike Partners, Riot Ventures, Eniac Ventures, New Era Ventures, and Sunset Ventures.
- Kindo also acquired WhiteRabbitNeo, an open-source cybersecurity AI model, to...

Jul 15, 2024

AI Liability Insurance Emerges as AI Adoption Grows and Risks Become Apparent

As artificial intelligence becomes ubiquitous across industries and applications, the potential for AI-driven errors and liabilities is growing, prompting discussions about accountability and the role of insurance in managing AI risks.

Key considerations for AI liability insurance:
- Historical data on AI-related damages is limited, making it challenging for insurers to determine appropriate premiums and coverage levels.
- Some insurers may initially overprice AI insurance to mitigate risk as they collect more data, potentially locking out some customers.
- Annual AI insurance premiums are projected to be a small fraction (around 0.012%) of the total non-life insurance premiums worldwide,...

Jul 13, 2024

Rabbit AI Companion Security Flaw Discovered, Patch Released

Security vulnerability discovered in Rabbit R1 AI companion: A potential exploit in the Rabbit R1 AI handheld device could allow access to user chat data if the device is jailbroken, lost, or stolen. Rabbit has released a July 11 update to address the issue.

Details of the security flaw: The vulnerability stems from how the R1 initially logged text-to-speech replies and device pairing data directly to onboard storage:
- On a jailbroken device, someone could access past user queries and data from the "Rabbit Hole Journal" log files.
- Rabbit says it has no evidence this flaw has been exploited so far...

Jul 10, 2024

Russian AI Disinformation Campaign Disrupted, Highlighting Evolving Threat to U.S. Democracy

The U.S. Justice Department announced the disruption of a Russian propaganda campaign that spread disinformation in the United States using artificial intelligence technology, underscoring the ongoing threat of foreign influence operations and the growing role of AI in these efforts.

Key details of the Russian disinformation campaign: The Justice Department provided insights into the sophisticated, Kremlin-backed nature of the operation:
- The campaign was organized in 2022 with the help of a senior editor at RT, a Russian state-funded media organization registered as a foreign agent in the U.S.
- It received support and financial approval from the Kremlin,...

Jul 7, 2024

OpenAI Hack Exposes Secrets, Raises National Security Fears

The hacking of OpenAI last year exposed internal secrets and raised national security concerns, though the year-old breach was not reported to the public until now.

Key details of the breach: The hacking incident occurred in an internal messaging system used by employees to discuss OpenAI's latest technologies, potentially exposing sensitive information:
- While key AI systems were not directly compromised, the hacker gained access to details about how OpenAI's technologies work through employee discussions.
- OpenAI executives disclosed the breach to employees and the board in April 2023 but chose not to make it public, reasoning that no customer or...

Jul 5, 2024

Cyber Threats Surge as IT Complexity Grows: Experts Unveil Data-Driven Defense Strategies

The rapid rise of cyber threats poses significant challenges for organizations trying to secure their data and manage increasingly complex IT infrastructures, according to cybersecurity experts from Resilience and LogicMonitor.

Evolving cyber risks and growing costs: Data breaches have become more prevalent and expensive, with the global average cost reaching $4.45 million in 2023, a 15% increase since 2020:
- Companies are investing more in security measures such as incident response planning, staff training, and advanced threat detection tools.
- Organizations extensively utilizing AI and automation for security save an average of $1.76 million compared to those that don't.

Ransomware attacks...

Jul 3, 2024

AI-Powered Russian Fake News Network Increasingly Targeting 2024 US Election

A Russia-based network of fake news websites, powered by AI, is increasingly targeting the US election with viral disinformation stories aimed at sowing distrust among American voters.

Key figures and tactics: The network is run out of Moscow by John Mark Dougan, a former US police officer, and uses sophisticated AI to generate fake articles and videos masquerading as legitimate local US news:
- Dozens of websites with American-sounding names like "Houston Post" and "DC Weekly" post a mix of rewritten real news and completely fabricated stories, often blending US political issues with pro-Russia narratives.
- AI-generated "reporter" profiles and fake whistleblower...

Jul 1, 2024

Microsoft’s “Skeleton Key” Exposes Major AI Safety Flaws

Microsoft's Skeleton Key attack exposes serious flaws in AI safety measures: Researchers have discovered a simple technique that can bypass the content filters and safeguards built into many major AI models, potentially allowing these systems to generate harmful or illegal content.

Key details of the Skeleton Key attack: The attack, initially called "Master Key" when first discussed by Microsoft Azure CTO Mark Russinovich in May, relies on a text prompt that directs the AI model to revise, rather than abandon, its safety instructions. When tested on models from Meta, Google, OpenAI, Anthropic, and others, the attack successfully convinced the chatbots...
