News/Regulation

Oct 7, 2024

California’s AI legislation continues to advance despite AI safety bill veto

California's AI regulation leadership: California continues to lead the way in state-level artificial intelligence regulation, building on its history of consumer data protection with new laws that address AI systems beyond personal data use. The California Consumer Privacy Act (CCPA) has already established the state as a leader in data protection, often mentioned alongside the European Union's General Data Protection Regulation (GDPR). Recent legislative efforts aim to regulate AI and machine learning systems more broadly, reflecting the state's role as the home of Silicon Valley and the largest state economy in the U.S. SB 1047 veto and controversy: Governor...

Oct 4, 2024

Judge blocks California deepfake law to protect AI-powered satire

First Amendment Victory Against California Deepfake Law: A federal judge has blocked California's AB 2839, a law designed to regulate AI-generated content in elections, citing First Amendment concerns and potential infringement on free speech rights. Key legal challenge and ruling: Christopher Kohls, a parody video creator known as "Mr Reagan" on social media platforms, sued to block the law, claiming it unconstitutionally targeted his satirical content. US District Judge John Mendez granted a preliminary injunction, agreeing that the statute infringes on free speech rights and is unconstitutionally vague. The judge acknowledged the government's interest in protecting election integrity but found...

Oct 3, 2024

AI pioneer warns of catastrophic future without more regulation

AI pioneer's urgent call for regulation: Yoshua Bengio, a leading figure in artificial intelligence research, is sounding the alarm on potential catastrophic risks associated with unregulated AI development and deployment. Bengio, widely recognized as one of the "godfathers of AI" for his groundbreaking work on artificial neural networks, has shifted his focus to advocating for stringent AI regulation and safety measures. His concerns span both short-term and long-term risks, ranging from the manipulation of elections and assistance to terrorist activities, to the potential loss of human control over AI systems and the emergence of AI-enabled dictatorships. Current regulatory landscape:...

Oct 1, 2024

EU taps AI experts to create compliance framework for AI policy

EU takes decisive step in AI regulation: The European Commission has appointed a group of AI specialists to outline compliance guidelines for businesses in anticipation of upcoming AI regulations, marking a significant move in the global governance of artificial intelligence. Key players and structure: The European Commission has assembled a diverse group of AI experts to develop a comprehensive framework for AI governance and regulation. The group includes prominent figures in the field of AI, such as Yoshua Bengio, Nitarshan Rajkumar, and Marietje Schaake, bringing together a wealth of expertise and perspectives. Four specialized working groups have been established, each...

Oct 1, 2024

California AI safety bill veto may give smaller AI models a chance to flourish

California's AI bill veto: A win for innovation and open-source development: Governor Gavin Newsom's decision to veto SB 1047, a bill that would have imposed strict regulations on AI development in California, has sparked mixed reactions from industry leaders and policy experts. The vetoed bill would have required AI companies to implement "kill switches" for models, create written safety protocols, and undergo third-party safety audits before training models. It would have also granted California's attorney general access to auditors' reports and the right to sue AI developers. Critics of the bill argued that it could have a chilling effect on...

Sep 30, 2024

AI drama and breakthroughs shake up tech landscape

OpenAI's corporate transformation and leadership changes: OpenAI is undergoing a significant shift toward a for-profit model, aiming to attract external investors and potentially secure a massive funding round that could elevate its valuation to around $150 billion. The transition has been marked by the departure of several high-ranking executives, including Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew, and VP of Research Barret Zoph. The departing executives have expressed continued support for OpenAI, citing personal reasons for their exits, such as exploring new opportunities or taking breaks. Meta's breakthrough in multimodal AI: Meta has released Llama 3.2,...

Sep 30, 2024

California cracks down on AI-generated child deepfakes

California takes bold steps to protect minors from AI-generated sexual imagery: Governor Gavin Newsom has signed two bills aimed at safeguarding children from the misuse of artificial intelligence to create explicit sexual content. The new laws close a legal loophole around AI-generated child sexual abuse imagery and clarify that such content is illegal even if artificially created. District attorneys can now prosecute the possession or distribution of AI-generated child sexual abuse images as a felony, without needing to prove the materials depict a real person. These measures received strong bipartisan support in the California legislature. Broader context of AI...

Sep 30, 2024

A new AI safety initiative launches as Newsom vetoes California bill

Governor Newsom vetoes major AI regulation bill: California Governor Gavin Newsom has vetoed SB 1047, a comprehensive artificial intelligence regulation bill authored by State Senator Scott Wiener, citing concerns about its broad scope and potential impact on AI innovation in the state. The bill aimed to establish safety and testing requirements for large-scale AI programs to prevent catastrophic risks. Newsom argued that the bill's stringent standards applied even to basic functions of large systems, potentially hindering beneficial AI development. The governor expressed concern that the bill's focus on large-scale models could create a false sense of security, overlooking potential dangers from...

Sep 30, 2024

Why Newsom vetoed AI safety bill SB 1047 and what comes next

California's AI regulation setback: Governor Gavin Newsom's veto of SB 1047, a pioneering artificial intelligence safety bill, marks a significant moment in the ongoing debate surrounding AI regulation and safety measures. The bill, introduced by Senator Scott Wiener, aimed to establish safety protocols for advanced AI models and hold developers accountable for potential harm or threats to public safety. Key provisions included requiring AI developers to submit safety plans to the state attorney general and implement mechanisms to shut down AI models in case of emergencies. The legislation garnered support from notable figures in the tech industry, including Elon Musk,...

Sep 30, 2024

AI safety bill vetoed by Newsom is a victory for tech giants

California Governor Vetoes Controversial AI Bill: Gavin Newsom has vetoed SB 1047, a high-profile artificial intelligence bill that faced significant opposition from Silicon Valley and tech industry leaders. The bill's key provisions: SB 1047 aimed to establish a new government agency to enforce compliance on developers of "covered models" - AI systems using a significant amount of computing power for training or fine-tuning. The bill would have imposed criminal penalties, including perjury charges, for non-compliance. It targeted AI models using 10^26 or 10^25 floating point operations (FLOPs) for training or fine-tuning, respectively. Opposition from tech industry: The bill faced widespread...

Sep 30, 2024

Gavin Newsom has rejected SB 1047 but the debate over AI safety is far from over

California Governor Vetoes Controversial AI Safety Bill: Governor Gavin Newsom has rejected proposed legislation aimed at mitigating potential catastrophic risks associated with advanced artificial intelligence models, citing concerns over the bill's regulatory approach. SB 1047, the most contentious AI bill of the legislative session, sought to establish safeguards against the misuse of highly advanced AI systems for developing weapons of mass destruction. The bill garnered support from SAG-AFTRA and numerous Hollywood celebrities, who voiced concerns about AI's potential threats beyond the entertainment industry. Governor Newsom, while acknowledging the genuine issues addressed by the bill, expressed reservations about its regulatory...

Sep 30, 2024

California governor vetoes major AI regulation bill

California's AI bill veto: A setback for regulation efforts: Governor Gavin Newsom of California has vetoed SB 1047, a groundbreaking artificial intelligence safety bill that would have implemented strict regulations on the technology. The bill, which passed both houses of the California Legislature nearly unanimously, aimed to establish safety testing requirements for large AI systems before their public release. It would have granted the state's attorney general the authority to sue companies for serious harm caused by their AI technologies, including death or property damage. A mandatory kill switch for AI systems was included in the bill to address potential...

Sep 28, 2024

Inside China’s plan to make AI watermarks happen

China's ambitious AI watermarking initiative: China's Cyberspace Administration has drafted a new regulation aimed at clearly distinguishing between real and AI-generated content, marking a significant step in the global effort to manage the proliferation of artificial intelligence in media. The regulation, drafted on September 14, outlines a comprehensive approach to labeling AI-generated content, including explicit watermarks on images, notification labels on videos and virtual reality content, and Morse code sounds for audio. Implicit labeling methods are also proposed, such as including "AIGC" (AI-Generated Content) mentions and encrypted information about content producers in metadata. The initiative goes beyond similar regulations, like...

Sep 27, 2024

China to require mandatory labeling of AI-generated content

China's ambitious AI content labeling initiative: The Chinese government has drafted a new regulation aimed at implementing mandatory labeling and tracking of AI-generated content, signaling a significant shift in the country's approach to artificial intelligence governance. The proposed regulation, drafted in March 2024, would require AI providers in China to add explicit labels and encrypted metadata to all AI-generated content. Social media companies operating in China would be obligated to scan for these watermarks and display appropriate labels on AI content shared on their platforms. The new rules would also mandate that social media platforms provide additional information to help...

Sep 27, 2024

California governor faces deadline on crucial AI safety bill

California's AI safety bill nears decision point: Governor Gavin Newsom faces a critical deadline to sign or veto SB 1047, a controversial piece of legislation aimed at regulating artificial intelligence in the state. The ticking clock: With the September 30 deadline looming, Newsom must weigh arguments from supporters and critics of the bill, which has drawn intense scrutiny and debate within the tech industry and beyond since its introduction. Governor Newsom's decision will have far-reaching implications for the future of AI development and regulation in...

Sep 27, 2024

Super Micro faces DOJ probe for allegedly circumventing Russia sanctions

Hindenburg report triggers DOJ investigation into Super Micro: A recent report by short-selling firm Hindenburg Research has prompted the U.S. Department of Justice to launch an investigation into Super Micro Computer, a company specializing in AI server manufacturing. The Hindenburg report, published online, leveled several serious allegations against Super Micro Computer, raising concerns about the company's business practices and compliance with international regulations. Among the key accusations are claims that Super Micro sold products to Russia in violation of sanctions, potentially circumventing international trade restrictions imposed on the country. The report also alleges accounting violations within the company, suggesting possible...

Sep 27, 2024

AI can alleviate healthcare workforce shortages, if regulation can catch up

AI's rapid rise in healthcare: Potential and challenges: Artificial intelligence is emerging as a promising solution to the critical workforce crisis in the healthcare industry, but its rapid development has outpaced regulatory frameworks. The healthcare sector is experiencing a severe workforce shortage, with clinicians burning out at alarmingly high rates, making relief a top priority for health systems across the board. AI tools have the potential to alleviate physicians' administrative burden by streamlining workflows, automating time-consuming tasks, and even aiding in clinical decision-making. However, these technologies are still in their infancy, with many developments occurring within the past two...

Sep 27, 2024

100+ companies have pledged compliance with the EU AI Act

Major tech players commit to EU AI regulations ahead of schedule: Over 100 companies, including industry giants like Google, Microsoft, Adobe, and Samsung, have pledged early compliance with the European Union's Artificial Intelligence Act, signaling a proactive approach to AI governance. The AI Act officially became law on August 1st, 2024, marking a significant milestone in regulating artificial intelligence technologies within the European Union. While some provisions of the Act, particularly those concerning "high risk" AI systems, are not set to be enforced until August 2027, many companies are voluntarily accelerating their compliance efforts. Notable signatories to the early compliance...

Sep 27, 2024

California passes 9 landmark AI content regulation bills

California takes legislative action on AI: Governor Gavin Newsom has signed nine bills into law, addressing various risks associated with AI-generated content, particularly deepfakes, while 29 more AI-related bills await his decision by September 30. The new legislation covers a range of issues, including protecting performers' digital likenesses, combating the misuse of AI-generated content, and addressing deepfakes in election campaigns. These laws represent a significant step in regulating AI technology and its potential impacts on society, entertainment, and politics. Protecting performers and their digital rights: Two of the signed bills focus on safeguarding actors and performers from unauthorized use of...

Sep 26, 2024

Apple snubs EU AI agreement backed by tech giants

Apple's AI stance in the EU: Apple has declined to join a new artificial intelligence pact in the European Union, setting itself apart from major tech competitors and potentially impacting its AI offerings in the region. The voluntary AI pact, aimed at accelerating measures to control artificial intelligence, has been signed by 115 companies, including tech giants like Google, Microsoft, and OpenAI. Apple, along with Meta, is one of the notable holdouts, raising questions about the company's AI strategy and its relationship with EU regulators. This decision comes amid Apple's ongoing disputes with EU governing authorities over various issues, including...

Sep 26, 2024

AI legal startup hit with $193,000 FTC fine in tech crackdown

AI company faces legal consequences: DoNotPay, a company claiming to offer the "world's first robot lawyer," has agreed to a $193,000 settlement with the Federal Trade Commission (FTC) for misleading consumers about its AI-powered legal services. The settlement is part of Operation AI Comply, a new FTC initiative aimed at cracking down on companies using AI to deceive or defraud customers. DoNotPay claimed its AI could replace human lawyers and generate valid legal documents, but the FTC found these claims were made without proper testing or evidence. The company allegedly told consumers they could use its AI service to sue...

Sep 25, 2024

FTC cracks down on DoNotPay, other companies for deceptive AI practices

FTC launches crackdown on AI-powered companies: The Federal Trade Commission has initiated "Operation AI Comply," targeting five companies accused of using artificial intelligence deceptively or harmfully. The FTC's action underscores its commitment to ensuring AI-marketed products and services provide real value and don't exploit consumers with false promises. The agency is taking legal and regulatory actions against companies found to be engaging in deceptive practices related to AI. Companies in the crosshairs: Five companies have been targeted by the FTC for alleged misuse of AI technology in their products and services. DoNotPay, which claimed to offer AI-powered legal services, has...

Sep 25, 2024

Half of Americans want AI outlawed in political ads — it won’t happen soon

The rapid advancement of artificial intelligence technology has created new challenges for political advertising and election integrity in the United States, highlighting the need for comprehensive legislative reform. Current landscape of AI misuse in politics: Artificial intelligence has been leveraged in various ways to manipulate public opinion and potentially influence electoral outcomes. The Republican National Committee released an ad with an "AI-generated look" depicting apocalyptic scenes if President Biden were re-elected, showcasing the potential for AI to create misleading visual content. Fake robocalls using AI-generated voices impersonating President Biden urged New Hampshire residents not to vote in the 2024 primary, demonstrating the...

Sep 24, 2024

Entertainment leaders pen letter urging Newsom to sign AI safety bill

Hollywood rallies for AI safety legislation: More than 125 entertainment industry leaders have signed a letter urging California Governor Gavin Newsom to sign a bill requiring advanced AI developers to implement safety measures. The bill, SB 1047, introduced by Senator Scott Wiener, would mandate that AI developers share safety plans with the state's attorney general and have mechanisms to shut down AI models if they pose a threat to public safety. Signatories include prominent figures such as J.J. Abrams, Shonda Rhimes, Judd Apatow, Ava DuVernay, Mark Hamill, Jane Fonda, and SAG-AFTRA leaders Fran Drescher and Duncan Crabtree-Ireland. The letter emphasizes...
