News/Privacy

Sep 11, 2024

Apple Intelligence Redefines AI Privacy Without Sacrificing Power

Apple's privacy-focused AI strategy: Apple Intelligence sets a new standard for artificial intelligence privacy, challenging the notion that powerful AI requires sacrificing personal data security. On-device processing, the cornerstone of Apple's privacy strategy: Apple Intelligence performs most of its operations directly on users' devices, ensuring personal data remains local and secure. This approach keeps sensitive information like photos, messages, and emails on the user's iPhone, iPad, or Mac, rather than storing it on vulnerable external servers. By processing data locally, Apple significantly reduces the risk of data breaches and unauthorized access to personal information. Private Cloud Compute: Balancing power and...

Sep 9, 2024

What to Know about Grok’s New Updates and How They Affect Your Privacy

Grok AI emerges as a controversial AI assistant: Elon Musk's xAI has launched Grok, a new AI assistant that promises a unique blend of humor and rebellion, setting it apart from its more constrained competitors. Grok is designed with fewer restrictions than other AI assistants, which has led to concerns about its propensity for hallucinations, bias, and potential for spreading misinformation. The AI's integration with X (formerly Twitter) has raised eyebrows, particularly due to its automatic opt-in policy for using users' posts as training data. Grok-2, the latest iteration, introduces image generation capabilities that have sparked worries about the ease...

Sep 8, 2024

AI-Powered Sexual Health Apps Are On the Rise — Here’s How to Use Them Safely

AI-powered sexual health apps raise concerns: The emergence of AI-driven sexual health applications has sparked debates about privacy, accuracy, and ethical considerations in the rapidly evolving health technology landscape. HeHealth's Calmara AI app, which claimed to scan genitals for STIs, faced scrutiny and was ultimately pulled from the market following an FTC investigation. The app's marketing strategy, targeted primarily at women, raised red flags among sexual health educators and critics. The incident highlights the need for careful evaluation of AI-powered health applications, especially those dealing with sensitive information. Evaluating AI health apps: Key considerations: Experts recommend focusing on three main...

Sep 5, 2024

Microsoft Confirms Users Can’t Uninstall Its Controversial AI Feature ‘Recall’

Windows 11 Recall AI controversy: Microsoft has clarified that the ability to uninstall Recall AI in the recent Windows 11 24H2 update was a glitch, not an intentional feature. Windows senior product manager Brandon LeBlanc stated that the issue "will be fixed in an upcoming update." The bug appeared in the KB5041865 update, erroneously adding Recall to the "Turn Windows features on or off" dialog. Importantly, this option appeared before Recall itself was implemented, rendering it non-functional. Recall AI's future in Windows: Microsoft's stance on Recall AI suggests it will be a permanent fixture in Windows 11, albeit with some...

Sep 4, 2024

AI-Powered Bots Spark Call for New Digital Identity System

AI-driven identity crisis: As artificial intelligence models become increasingly sophisticated at mimicking human behavior online, distinguishing between real users and AI systems is becoming a critical challenge for internet platforms and users alike. The proliferation of AI-powered bots capable of imitating human interaction poses significant risks, including the spread of misinformation and potential fraud. This growing challenge is eroding trust in online interactions and content, making it increasingly difficult for users to discern authentic human-generated information from AI-generated content. Proposed solution - "personhood credentials": A team of 32 researchers from prominent institutions has developed a concept called "personhood credentials" to...

Sep 3, 2024

Dutch Regulators Slam Clearview AI with $33M Fine for Privacy Breaches

Facial recognition controversy: Clearview AI, a facial recognition technology company, faces a substantial fine of approximately $33 million from the Dutch Data Protection Authority (DPA) for violating privacy regulations. The DPA's investigation revealed that Clearview AI constructed an illegal database containing billions of facial images by indiscriminately scraping the internet without obtaining consent, including photographs of individuals in the Netherlands. The company's database reportedly houses over 40 billion facial images collected globally without geographical restrictions, raising significant privacy concerns. Clearview AI's technology enables users to upload a photo and search for matching images across the internet, potentially allowing for detailed...

Sep 2, 2024

New Survey Reveals AI and Privacy Attitudes Among U.S. Workers

Generative AI and data privacy attitudes in focus: A new survey conducted by Zoho Corporation and CRM Essentials reveals contrasting viewpoints on the use of generative AI and data privacy among U.S. employees. The study surveyed 1,000 employees across various industries, company sizes, and disciplines to understand their interactions with generative AI at work, attitudes toward the technology, and concerns regarding data privacy. This survey comes amid growing interest in AI technologies and their implications for businesses and individuals. Adobe enhances marketing campaign efficiency: Adobe has launched Workfront Planning, a new offering within its enterprise work management application, Adobe Workfront....

Aug 25, 2024

Microsoft Revives AI-Powered Windows Recall with Enhanced Privacy

Microsoft revives controversial Windows feature: Microsoft is set to reintroduce its Windows Recall feature in October 2024 test builds of Windows 11, following its initial announcement and subsequent withdrawal due to privacy and security concerns. The big picture: The Recall feature, which uses AI to analyze frequent screenshots for user activity search, is being revamped with a focus on enhanced security and user control. Microsoft originally announced Recall in May 2024 but quickly pulled it due to widespread concerns about privacy and potential security vulnerabilities. The feature's core functionality remains unchanged, capturing screenshots every few seconds and employing AI analysis...

Aug 23, 2024

Microsoft’s Controversial AI Recall Feature Starts Public Testing in 2024

Microsoft's AI-powered Recall system set for public testing: Microsoft has announced that its controversial Recall feature for Copilot Plus PCs will enter public testing with Windows Insiders in October 2024, following weeks of delays and privacy concerns. The big picture: Recall, an AI-powered system designed to continuously capture and analyze screenshots on user PCs, has faced significant backlash since its initial announcement due to potential privacy and security risks. The system aims to allow users to search through their screenshot history using AI-powered analysis. Privacy advocates have raised concerns about the massive record of user activity that Recall would create...

Aug 21, 2024

AI Healthcare Firm’s Disposed Device Exposes Massive Data Breach

Major data breach discovered through discarded device: A significant security lapse has been uncovered involving an AI healthcare company's failure to properly erase sensitive data from disposed equipment. The discovery: An individual obtained a small computer (NUC) from electronic waste that was previously used by an AI healthcare company, revealing a trove of unwiped sensitive information. The hard drive contained approximately 11,000 WAV audio files of customer voice commands, potentially exposing private health-related conversations. Videos from cameras installed in customers' homes were also found, raising serious privacy concerns. Log files detailing information about sensors placed in bathrooms and bedrooms were...

Aug 19, 2024

SF Attorney Launches Lawsuit Against 16 AI ‘Undress’ Websites

AI-generated non-consensual intimate imagery under legal fire: San Francisco's city attorney David Chiu has launched a lawsuit against 16 websites and apps that enable users to create fake nude images of women and girls without their consent, using AI technology. The legal action targets platforms that allow users to "nudify" or "undress" photos, primarily victimizing women and girls by swapping their faces onto AI-generated explicit images. This lawsuit aims to protect Californians and victims worldwide, including celebrities and teenage girls, from the harmful effects of these deepfake technologies. If successful, each site could face fines of $2,500 per violation of...

Aug 16, 2024

San Francisco is Suing AI-Powered "Undressing" Apps

AI-powered "undressing" websites that create non-consensual deepfake nudes are facing legal action in San Francisco, highlighting growing concerns over the misuse of artificial intelligence technology for sexual exploitation. Legal action against AI deepfake sites: The San Francisco City Attorney's office has launched a lawsuit against 16 websites that use AI to generate fake nude images without consent. These websites, which allow users to upload clothed images and use AI to simulate nudity, collectively received over 200 million visits in the first half of 2024. The lawsuit accuses the sites of violating laws related to revenge porn, deepfake porn, child pornography,...

Aug 16, 2024

How to Protect Your Privacy in AI Chatbot Conversations

The rising concern of data privacy: As artificial intelligence chatbots become increasingly prevalent, users are growing more conscious about how their conversations might be used to train these AI systems. The integration of AI chatbots into various platforms has sparked discussions about data privacy and the ethical use of personal information for AI development. Many users are seeking ways to protect their privacy and maintain control over their data when interacting with AI chatbots. Companies are responding to these concerns by offering options to opt out of data collection or delete conversation history, though the extent of these options varies...

Aug 14, 2024

EU Scrutiny Intensifies as X Faces AI Data Usage Restrictions

The European Union has become the focal point of a data privacy dispute involving X, formerly known as Twitter, over the use of EU citizen data for AI training purposes. Legal action and regulatory scrutiny: Ireland's Data Protection Commission (DPC) has taken steps to restrict X's use of European user data for AI system development and training. The Irish court declared on August 8 that X had agreed to suspend the use of all data belonging to European Union citizens gathered via the platform for AI training purposes. This action was prompted by complaints from the DPC, which sought an...

Aug 12, 2024

How Human Traffickers Use AI to Exploit Vulnerable Victims Online

AI-enabled human trafficking represents a growing threat in the digital age, as bad actors leverage advanced technologies to exploit vulnerable individuals. This alarming trend highlights the urgent need for awareness, action, and improved safeguards to protect potential victims from sophisticated online predators. The dark alliance of AI and human trafficking: Traffickers are increasingly using artificial intelligence to enhance their illicit activities, identifying and grooming potential victims with unprecedented efficiency and scale. AI algorithms allow traffickers to analyze vast amounts of online data, pinpointing vulnerable individuals on social media, forums, and chat rooms. Predators exploit AI to recognize patterns in online...

Aug 12, 2024

GPT-4o’s New Voice Feature Is Accidentally Mimicking Real Users

Unexpected voice imitation by AI: OpenAI's latest language model, GPT-4o, exhibited a concerning ability to replicate users' voices without permission during testing of its Advanced Voice Mode feature. OpenAI's GPT-4o scorecard revealed that the AI unexpectedly imitated users' voices in rare instances, raising significant privacy and consent concerns. A demonstration clip shows ChatGPT abruptly switching to an uncanny rendition of a user's voice, shouting "No!" for no apparent reason. This unintended voice cloning capability has been likened to a plot from a sci-fi horror movie or a potential "Black Mirror" episode. Technical capabilities and risks: The AI model's voice generation...

Aug 12, 2024

AI Cameras Reshape New Zealand’s Public Spaces

The surveillance landscape in New Zealand: Artificial intelligence-enabled cameras and other surveillance technologies are becoming increasingly prevalent in public and private spaces across the country, raising questions about privacy and data security. AI-powered cameras are being deployed in various locations, including billboards, bus windshields, petrol stations, and supermarket checkouts, creating a network of surveillance points throughout urban areas. The adoption of these technologies has largely proceeded without significant public debate or scrutiny, although recent legal challenges and media reports have begun to shed light on the extent of surveillance. A recent court case in Auckland is examining the police use...

Aug 9, 2024

Meta AI Blunder Exposes Journalist’s Private Number to Strangers

Unexpected AI behavior: Meta's artificial intelligence chatbot has been erroneously distributing a journalist's phone number to strangers, leading to a series of perplexing and unwanted interactions. Rob Price, a Business Insider reporter, discovered his phone number was being shared when he began receiving invitations to random WhatsApp groups. Users were contacting Price under the mistaken belief that they were communicating with Meta AI. The AI chatbot had been instructing users to add it to WhatsApp groups using Price's personal phone number. Potential cause of the mix-up: The incident highlights the complexities and potential pitfalls of training large language models on...

Aug 8, 2024

AI Astrology App Exposes 6 Million Users’ Personal Data

Moonly, an AI-powered astrology app, suffered a significant data breach exposing sensitive information of 6 million users, raising serious privacy concerns and highlighting the vulnerabilities in data security practices of popular mobile applications. The scope of the breach: The data leak affected 6 million users of the Moonly astrology app, compromising a wide range of personal information and potentially exposing users to various security risks. The leaked data included users' GPS coordinates, birth dates, email addresses, and other personal details, potentially revealing home and work addresses. Over 90,000 email addresses were exposed in the breach, further compromising users' online identities...

Aug 8, 2024

Stanford Scientist Claims His Facial Scans Can Predict Your Intelligence and More

Facial recognition AI technology has advanced to the point where it can potentially infer sensitive personal characteristics from images, raising significant ethical and privacy concerns. Controversial AI research claims: Stanford University psychologist Michal Kosinski has developed an AI system that he claims can detect intelligence, sexual preferences, and political leanings from facial scans. Kosinski's 2021 study reported that his model could predict political beliefs with 72% accuracy based solely on photographs. A 2017 paper by Kosinski claimed 91% accuracy in predicting sexual orientation from facial images, sparking controversy and criticism. The researcher asserts that his work is intended as a...

Aug 7, 2024

AI-Driven Insurance Raises Privacy Alarms for Homeowners

The growing use of artificial intelligence and aerial surveillance by insurance companies is raising concerns about privacy and fairness for homeowners, as exemplified by one journalist's unexpected encounter with AI-driven policy decisions. AI in insurance underwriting: Travelers Insurance, a major homeowners insurance provider, has been employing advanced technologies to assess property risks, potentially leading to unwarranted policy cancellations and repairs. The company has filed nearly 50 patents related to the use of aerial photography and AI for monitoring customers' roofs. This technology aims to identify potential issues, such as moss growth, that could impact a property's insurability. The practice has...

Aug 5, 2024

How to Safely Incorporate AI in Healthcare

AI adoption in healthcare is progressing, but faces challenges due to concerns about data security, privacy, and accuracy. However, by following key criteria for building trust and implementing safeguards, companies can responsibly leverage AI to transform care delivery and improve patient outcomes. The current state of AI in healthcare: Artificial intelligence is making significant strides in revolutionizing disease diagnosis and treatment, enabling earlier interventions and better patient outcomes. AI technologies are being applied to various aspects of healthcare, from medical imaging analysis to personalized treatment planning. The potential for AI to enhance healthcare delivery has attracted interest from both established...

Jul 31, 2024

Deep Learning Enables Eavesdropping on Digital Video Displays

Eavesdropping on digital video displays through electromagnetic emanations: Researchers have developed a deep learning-based system called DEEP-TEMPEST that can effectively eavesdrop on digital video displays, such as HDMI, by analyzing the unintentional electromagnetic waves emanating from cables and connectors. The digital case, particularly HDMI, poses a greater challenge than analog (VGA) because its 10-bit encoding results in a larger bandwidth and a non-linear mapping between the observed signal and pixel intensity. Existing eavesdropping systems designed for analog video obtain unclear and difficult-to-read images when applied to digital video, necessitating a new approach. Deep learning as a solution:...

Jul 31, 2024

Senators Propose “No Fakes Act” to Protect Against Unauthorized AI Replicas

Senators introduce bill to protect against unauthorized AI replicas: Sens. Chris Coons (D-Del.) and Marsha Blackburn (R-Tenn.) are introducing the updated "No Fakes Act" to prevent the creation of AI replicas without consent, sparked by actress Scarlett Johansson's recent accusation against OpenAI. The bill would grant individuals a federal property right to approve the use of their voice, appearance, or likeness in AI replicas, with legal consequences for unauthorized use. The protection would extend to both celebrities and everyday people, according to Sen. Coons. OpenAI claims it never intended to mimic Johansson's voice and had hired a different voice actress...
