News/Regulation

Aug 2, 2024

California AI Safety Bill Receives Widespread Criticism from AI Community

A new bill authored by Sen. Scott Wiener is making its way through the California Legislature with the intent of preventing AI from causing catastrophic harm. The proposed legislation, Senate Bill 1047, would require developers to conduct safety testing before public deployment, a requirement that is drawing strong opposition from various stakeholders in the AI community. Key provisions of the bill: SB 1047 seeks to balance fostering AI innovation with managing associated risks: AI developers would be required to conduct safety testing on advanced AI models before training or releasing them to the public. The state attorney general would have...

Aug 1, 2024

Biden Administration Calls for Increased Monitoring of Open-Weight AI Models

The Biden administration is calling for increased monitoring of open-weight AI models to assess potential risks and inform future regulations, while acknowledging the current lack of capacity to effectively respond to these risks. Key focus areas for monitoring open-weight models: The National Telecommunications and Information Administration (NTIA) suggests focusing on three main areas to address the risks posed by open-weight AI models: collecting evidence on the models' capabilities to monitor specific risks; evaluating and comparing indicators of those risks; and adopting policies that target the identified risks. Defining open-weight models and their unique challenges: Open-weight models are foundation models...

Jul 31, 2024

Senators Propose “No Fakes Act” to Protect Against Unauthorized AI Replicas

Senators introduce bill to protect against unauthorized AI replicas: Sens. Chris Coons (D-Del.) and Marsha Blackburn (R-Tenn.) are introducing the updated "No Fakes Act" to prevent the creation of AI replicas without consent, sparked by actress Scarlett Johansson's recent accusation against OpenAI. The bill would grant individuals a federal property right to approve the use of their voice, appearance, or likeness in AI replicas, with legal consequences for unauthorized use. The protection would extend to both celebrities and everyday people, according to Sen. Coons. OpenAI claims it never intended to mimic Johansson's voice and had hired a different voice actress...

Jul 30, 2024

Bipartisan Bill Aims to Carve Out AI Deepfakes from Section 230

A bipartisan bill aims to combat AI deepfakes by amending Section 230 protections for tech platforms that fail to address the issue, potentially signaling a new approach to regulating online harms. Key provisions of the Intimate Privacy Protection Act: The proposed legislation, introduced by Reps. Jake Auchincloss (D-MA) and Ashley Hinson (R-IA), targets cyberstalking, intimate privacy violations, and digital forgeries: The bill would amend Section 230 of the Communications Act of 1934, removing legal immunity for platforms that fail to combat these harms. It establishes a "duty of care" for platforms, requiring them to have a reasonable process for addressing...

Jul 30, 2024

Microsoft Urges Congress to Regulate AI Deepfakes

The race to regulate AI-generated deepfakes heats up as Microsoft urges Congress to take action against the potential threats posed by this rapidly advancing technology, which could have far-reaching implications for politics, privacy, and public trust. Microsoft's call to action: In a recent blog post, Microsoft vice chair and president Brad Smith stressed the urgent need for policymakers to address the risks associated with AI-generated deepfakes: Smith emphasized that existing laws must evolve to combat deepfake fraud, as the technology can be used by cybercriminals to steal from everyday Americans. Microsoft is advocating for a comprehensive "deepfake fraud statute" that...

Jul 29, 2024

Musk Defends Deepfake of VP Harris as Parody

Elon Musk defended sharing a deepfake video of VP Kamala Harris, arguing it's a protected parody despite Gov. Newsom's vow to crack down on misleading political content. Musk shares deepfake video: Tesla CEO Elon Musk shared an AI-generated video of presidential candidate Kamala Harris, which features a synthetic voice mocking her campaign with statements she never actually made. The video was created by a YouTube user known as Mr. Reagan and was labeled as a parody when originally shared on Twitter/X. Musk, a prominent Trump supporter, re-shared the video without any disclaimer about its fake nature. Gov. Newsom condemns video,...

Jul 28, 2024

The Latest in the SAG-AFTRA Strike Against Game Publishers

SAG-AFTRA has called for a strike against major game publishers over concerns about the use of AI in games, particularly regarding voice acting and motion capture performances. Key sticking point: AI training using motion capture data. While the game companies have offered some AI protections for voice performers, the union is demanding that motion capture and stunt performers also have the right to informed consent and fair compensation for the use of their performances in AI training. The strike affects over 160,000 SAG-AFTRA members working on games produced by Disney, Electronic Arts, Activision Blizzard, Take-Two, WB Games, and others. However,...

Jul 25, 2024

Senate Passes Landmark Bill to Combat Nonconsensual Deepfake Porn

The DEFIANCE Act, a bipartisan bill to provide legal recourse to victims of non-consensual deepfake pornography, has unanimously passed the Senate and now heads to the House. Key legislative details: The DEFIANCE Act amends the Violence Against Women Act to allow victims to sue producers, distributors, or recipients of deepfake porn if they knew or recklessly disregarded the lack of consent: The bill provides a civil cause of action for both adults and minors, and would become the first federal law to do so if passed by the House. Recent amendments clarify the definition of "digital forgery," update available damages, and add...

Jul 24, 2024

Senate Passes DEFIANCE Act, Enabling Victims to Sue Deepfake Creators for Damages

The U.S. Senate passed the DEFIANCE Act, a bill that allows victims of nonconsensual intimate AI-generated images, or "deepfakes," to sue the creators for damages, marking a significant step in addressing the growing problem of AI-enabled sexual exploitation. Key provisions of the DEFIANCE Act: The bill enables victims of sexually explicit deepfakes to seek civil remedies against those who created or possessed the images with the intent to distribute them: Identifiable victims can receive up to $150,000 in damages, which can be increased to $250,000 if the incident is connected to sexual assault, stalking, harassment, or if it directly caused...

Jul 24, 2024

ACLU Argues New Laws Regulating Deepfakes Infringe on Free Speech

The ACLU is fighting to protect free speech rights related to AI-generated content, arguing that some of the new laws regulating deepfakes and other AI outputs conflict with the First Amendment. This stance is leading to an uncomfortable reckoning for the movement to control AI. Key takeaways: AI itself has no rights, but people using AI to communicate have First Amendment protections. The ACLU contends that citizens have a constitutional right to use AI to spread untruths, just as they do with other forms of speech. Restricting who can listen to AI-generated speech would also infringe on the "right to...

Jul 23, 2024

FTC Probes 8 Companies for AI Surveillance Pricing Practices

The FTC is investigating the use of AI-powered surveillance pricing, which could exploit consumers' personal data to charge them higher prices. Key aspects of surveillance pricing: Surveillance pricing, also known as dynamic, personalized, or optimized pricing, involves offering individual consumers different prices for the same products based on factors like the device they're shopping on, location, demographic information, credit history, and browsing/shopping history. Companies across various sectors are considering implementing or have already implemented surveillance pricing models. FTC's inquiry into surveillance pricing practices: The Federal Trade Commission has ordered eight companies that offer AI surveillance pricing products and services to...

Jul 22, 2024

AI Companies Promised the White House Self-Regulation, but Transparency Is Still Lacking

The White House's voluntary AI commitments have brought better red-teaming practices and watermarks, but no meaningful transparency or accountability. One year ago, seven leading AI companies committed to a set of eight voluntary guidelines on developing AI safely and responsibly. Their progress so far shows some positive changes, but critics argue much more work is needed. Key takeaways: The commitments have led to increased testing for risks, information sharing on safety best practices, and research into mitigating societal harms from AI: Companies are conducting more red-teaming exercises to probe AI models for flaws and working with external experts to assess...

Jul 22, 2024

Supreme Court Ruling Has Big Implications for AI Regulation

The Supreme Court's recent decision in Loper Bright Enterprises v. Raimondo has significantly weakened federal agencies' authority to regulate various sectors, including AI, leading to uncertainty about the future of AI regulation in the U.S. Agency expertise vs. judicial oversight: The court's decision shifts the power to interpret ambiguous laws from federal agencies to the judiciary, potentially undermining the ability of specialized agencies to effectively regulate AI: Agencies like the FTC, EEOC, and FDA have expertise in AI regulation within their respective domains, while the judicial branch lacks such specialized knowledge. The majority opinion argues that courts, not agencies, have...

Jul 19, 2024

California AI Bill Sparks Debate and Industry Pushback

California's landmark AI safety bill sparks debate and industry pushback. Key points and reactions: The introduction of California's SB 1047, which requires safety testing and shutdown capabilities for large AI models, has generated strong reactions and debates: The bill passed the state senate with bipartisan support (32-1) and has 77% public approval in California according to polls, but has faced fierce opposition from the tech industry, particularly in Silicon Valley. Tech heavyweights like Andreessen Horowitz and Y Combinator have publicly condemned the bill, arguing it will stifle innovation and push companies out of California. However, the bill's author Sen. Scott...

Jul 18, 2024

Meta’s EU Restrictions Highlight Regulatory Challenges Amid Rapid Advancements

Meta's decision to restrict EU access to its multimodal AI model highlights the challenges of navigating differing regulatory environments as AI technologies advance rapidly. Key developments: Meta has announced it will not release its upcoming multimodal AI model, capable of handling video, audio, images, and text, in the European Union due to regulatory uncertainties: The decision comes shortly after the EU finalized compliance deadlines for AI companies under its strict new AI Act, which requires compliance by August 2026 on issues like copyright, transparency, and certain AI use cases. Meta's move follows similar decisions by Apple to potentially exclude the...

Jul 18, 2024

Trump’s AI Executive Order Draft Aims To Boost Military Tech and Cut Regulations

Trump allies draft sweeping AI executive order aimed at boosting military technology and reducing regulations, signaling a potential shift in AI policy if Trump returns to the White House in 2025. Key elements of the draft order: The plan, titled "Make America First in AI," outlines a series of "Manhattan Projects" to advance military AI capabilities and calls for an immediate review of what it terms "unnecessary and burdensome regulations" on AI development: The approach contrasts with the Biden administration's executive order from last October, which imposed new safety testing requirements on advanced AI systems. The proposed order suggests creating...

Jul 17, 2024

Trump’s AI Executive Order Will Prioritize Military AI and Reducing Regulations

Trump allies draft sweeping AI executive order to prioritize military AI and reduce regulations if Trump returns to power in 2025. Key Takeaways: The draft order, titled "Make America First in AI," outlines a significant shift in AI policy under a potential second Trump administration: It calls for "Manhattan Projects" to advance military AI capabilities and an immediate review of "unnecessary and burdensome regulations" on AI development. This approach contrasts with the Biden administration's executive order from last October, which imposed new safety testing requirements on advanced AI systems. The proposed order suggests creating "industry-led" agencies to evaluate AI models...

Jul 17, 2024

FCC Proposes AI Disclosure Rule for Robocalls, Aiming to Protect Consumers

The FCC chair has proposed a new rule requiring robocalls to disclose the use of artificial intelligence (AI), aiming to protect consumers and enable informed decisions regarding these automated calls. Key elements of the proposed rule: Callers would need to disclose their intent to use AI-generated calls when obtaining consumers' prior express consent. On each call, callers would be required to disclose to consumers when they are receiving an AI-generated call. The rule would provide a definition for AI-generated calls as the FCC seeks to establish guardrails around the use of this emerging technology in robocalls. Rationale behind the proposal: FCC...

Jul 16, 2024

Microsoft’s AI Hiring Spree Sparks UK Antitrust Probe

The UK's Competition and Markets Authority (CMA) has formally launched an investigation into Microsoft's hiring of executives from AI startup Inflection to determine if the move could undermine competition in the UK market: In March, Microsoft hired two Inflection AI co-founders, Mustafa Suleyman and Karén Simonyan, to lead its new Microsoft AI division, along with several other Inflection staff members. The CMA's preliminary investigation, started in April, aimed to assess whether these hires and Microsoft's partnership with French AI startup Mistral could shield the tech giant from competition. The formal "merger inquiry" will conclude by September 11, when the CMA...

Jul 15, 2024

White House Adviser: AI Promises Progress But Requires Regulation

The time for regulating AI is now, according to Biden's top tech adviser. Key takeaways: Arati Prabhakar, director of the White House's Office of Science and Technology Policy (OSTP), views AI as a pressing issue with both promising and concerning implications: As the president's chief science and tech adviser, Prabhakar is helping guide the White House's approach to AI safety and regulation, including Biden's executive order from last fall. While excited about AI's potential to accelerate progress in areas like health, climate, and public missions, Prabhakar stresses the need to manage AI's risks in order to harness its benefits. She...

Jul 14, 2024

Senators Introduce COPIED Act to Protect Content Creators from AI Exploitation

A bipartisan group of senators has introduced the COPIED Act, which aims to protect content creators against unauthorized use of their work to train AI models or generate AI content. Key provisions of the COPIED Act: The bill would require standards for watermarking AI-generated content with provenance information, make it illegal to tamper with this information, and allow individuals and authorities to sue for violations: The National Institute of Standards and Technology (NIST) would be tasked with creating guidelines and standards for adding watermark-like details about the origin of AI content. Removing, disabling, or tampering with this "content provenance" information...

Jul 12, 2024

Bipartisan Bill Aims to Protect Creators’ Rights in the Age of Generative AI

The bipartisan COPIED Act aims to protect journalists and artists from having their work used by AI models without consent. Introducing standards for authenticating AI-generated content: The COPIED Act directs the National Institute of Standards and Technology (NIST) to establish guidelines for proving the origin of content and detecting synthetic content: This includes methods like watermarking to authenticate the source of creative works. NIST will also develop security measures to prevent tampering with these authentication markers. Requiring transparency and consent for AI use of creative content: Under the bill, AI tools used for generating journalistic or creative content must allow...

Jul 12, 2024

EU’s AI Act Compliance Deadlines Set, Ushering in New Era of AI Regulation

The EU's landmark AI Act sets compliance deadlines for tech companies, beginning the countdown to a new era of AI regulation. The sweeping set of rules aims to protect citizens' rights and ensure transparency in the development and use of AI systems. Key compliance deadlines: The AI Act will come into effect on August 1st, 2024, with several compliance deadlines tied to this date: By February 2nd, 2025, companies must comply with bans on AI applications that pose an "unacceptable risk," such as biometric categorization, emotion recognition in sensitive settings, social scoring systems, and certain predictive policing tools. By May...

Jul 11, 2024

Microsoft, Apple Retreat from OpenAI Board Amid Regulatory Scrutiny and Legal Battles

Microsoft and Apple pull back from OpenAI board roles as regulatory scrutiny intensifies: Microsoft and Apple have decided to forgo board observer roles at OpenAI amid growing regulatory scrutiny of Big Tech's influence over leading AI startups. Microsoft, OpenAI's biggest backer and most important partner, confirmed it will give up its non-voting observer seat on OpenAI's board of directors. Apple, which recently struck a deal to integrate ChatGPT into its products, has also withdrawn plans to take a similar observer role. The decisions come as governments in the U.S. and Europe are taking a closer look at the power dynamics between...
