News/Regulation

Sep 11, 2024

SB 1047: Will California Determine the Course of the Entire AI Industry?

California's AI regulation push: California's SB 1047 bill, aimed at regulating advanced AI models, has passed the state legislature and now awaits Governor Gavin Newsom's decision, potentially setting a new standard for AI regulation in the US. The bill, introduced by state Senator Scott Wiener, seeks to implement strict safety measures for powerful AI models, including thorough testing and safety certifications. If signed into law, SB 1047 would apply to AI models operating in California's market, potentially impacting the industry far beyond state borders. Industry reactions and competitive landscape: The proposed legislation has sparked intense debate within the tech industry,...

Sep 11, 2024

AI Governance and the Evolving Landscape of Consumer Values

AI governance emerges as a critical focus: As artificial intelligence continues to advance rapidly, the need for comprehensive governance frameworks becomes increasingly important to ensure responsible and ethical development and deployment of AI technologies. The concept of AI governance builds upon established principles of data governance, which have been crucial in addressing privacy concerns and data ownership issues in the big data era. AI governance aims to provide oversight and guidelines for AI products and services, similar to how data governance has been instrumental in managing data-related challenges. Principle-based approach gains traction: Experts advocate for a more flexible and agile...

Sep 10, 2024

UK Report Uncovers AI Risks and Calls for Global Cooperation

The UK's Department for Science, Innovation, and Technology has released an interim report on advanced AI safety, highlighting current capabilities, potential risks, and mitigation strategies while emphasizing the need for global cooperation in addressing AI challenges. Report overview and significance: The International Scientific Report on the Safety of Advanced AI – Interim Report provides a comprehensive examination of the current state and future potential of artificial intelligence systems, with a focus on safety and risk assessment. The report delves into the capabilities of current AI systems, evaluates general-purpose AI, and explores potential risks associated with advanced AI technologies. It emphasizes...

Sep 10, 2024

US Proposes Mandatory Reporting for Advanced AI Developers

New AI reporting requirements proposed by US Commerce Department: The Bureau of Industry and Security (BIS) plans to introduce mandatory reporting for developers of advanced AI models and cloud computing providers, aiming to bolster national security and defense. The proposed rules would require companies to report on development activities, cybersecurity measures, and results from red-teaming tests. These tests assess risks such as AI systems aiding cyberattacks or enabling non-experts to create chemical, biological, radiological, or nuclear weapons. Commerce Secretary Gina M. Raimondo emphasized the importance of keeping pace with AI technology developments for national security purposes. Global context of AI...

Sep 9, 2024

The Complex Web Comprising The U.S. Government’s Approach to AI

AI governance landscape in the US federal government: The United States has a complex network of federal agencies and departments involved in various aspects of artificial intelligence policy, research, and regulation. The Department of Commerce plays a crucial role through its sub-agencies, including the National Institute of Standards and Technology (NIST), which develops AI standards and frameworks, and the Bureau of Industry and Security (BIS), which regulates the export of AI technologies. The US Patent and Trademark Office (USPTO), also under the Department of Commerce, handles AI-related patents, reflecting the growing importance of AI in intellectual property. Department of Energy's...

Sep 9, 2024

What to Know about Grok’s New Updates and How They Affect Your Privacy

Grok AI emerges as a controversial AI assistant: Elon Musk's xAI has launched Grok, a new AI assistant that promises a unique blend of humor and rebellion, setting it apart from its more constrained competitors. Grok is designed with fewer restrictions than other AI assistants, which has led to concerns about its propensity for hallucinations, bias, and potential for spreading misinformation. The AI's integration with X (formerly Twitter) has raised eyebrows, particularly due to its automatic opt-in policy for using users' posts as training data. Grok-2, the latest iteration, introduces image generation capabilities that have sparked worries about the ease...

Sep 8, 2024

AI Lobbyists Flood Washington as Tech Policy Debates Heat Up

AI industry ramps up lobbying efforts: The artificial intelligence sector has significantly increased its presence in Washington, aiming to shape potential government regulations and policies. The number of organizations lobbying on AI issues grew by over 190% from 2022 to 2023, reaching 460, with a slight increase to 462 in 2024, according to Open Secrets. Major players involved in lobbying efforts include the Chamber of Commerce, Business Roundtable, Microsoft, Intuit, and Amazon. The primary objective of these lobbyists is to convince lawmakers that fears surrounding AI are exaggerated and that the United States does not require stringent regulations similar to...

Sep 6, 2024

What the Latest International AI Treaty Means for US Tech Giants

International AI treaty signed, impact on US tech industry unclear: The United States, European Union, and United Kingdom have signed the first legally binding international AI treaty, known as the AI Convention, developed by the Council of Europe. The treaty aims to address risks posed by artificial intelligence while promoting responsible innovation. It focuses on protecting human rights, democracy, and the rule of law in relation to AI systems. Key principles include protecting human dignity, individual autonomy, equality, non-discrimination, privacy, and personal data protection. Limited impact on US tech companies: Experts suggest that the treaty's effect on US tech companies...

Sep 6, 2024

15 Nations Sign First Legally Binding AI Treaty

Groundbreaking AI treaty signed: The United States, United Kingdom, and European Union have taken a significant step towards regulating artificial intelligence by signing the first "legally binding" AI treaty. The Framework Convention on Artificial Intelligence aims to ensure AI systems align with human rights, democratic principles, and the rule of law. Key principles outlined in the treaty include protecting user data, respecting legal frameworks, and maintaining transparency in AI practices. Signatories are required to implement or maintain appropriate legislative, administrative, or other measures to reflect the framework's guidelines. Expanding international cooperation: The treaty's reach extends beyond major global powers, with...

Sep 5, 2024

AI Deepfake Laws Are Struggling to Keep Pace with Technology

The deepfake dilemma: States across the U.S. are taking the lead in addressing the growing threat of nonconsensual deepfake pornography, as federal legislation lags behind the rapid pace of technological advancement. A total of 39 states have introduced laws targeting nonconsensual deepfakes, with 23 successfully passing legislation, 4 bills pending, and 9 proposals struck down. The urgency to act is driven by the widespread impact of deepfake technology, affecting both celebrities and ordinary individuals. An estimated 90% of deepfake videos are pornographic in nature, with the majority being nonconsensual content featuring women. Federal efforts and local initiatives: While federal legislation...

Sep 5, 2024

How to Regulate Generative AI to Benefit the Healthcare Industry

The rise of generative AI in medicine: Generative AI's emergence in healthcare poses unique regulatory challenges for the Food and Drug Administration (FDA) and global regulators, requiring a novel approach distinct from traditional drug and device regulation. The FDA's usual process of reviewing new drugs and devices for safety and efficacy before market entry is not suitable for generative AI applications in healthcare. Regulators need to conceptualize large language models (LLMs) as novel forms of intelligence, necessitating an approach more akin to how clinicians are regulated. This new regulatory framework is crucial for maximizing the clinical benefits of generative AI...

Sep 5, 2024

How Powerful Must AI Be To Be Dangerous? Regulators Did The Math To Find Out

AI regulation embraces mathematical metrics: Governments are turning to computational power measurements to identify potentially dangerous AI systems that require oversight. The U.S. government and California are using a threshold of 10^26 floating-point operations (flops) of total training compute to determine which AI models need reporting or regulation. This equates to 100 septillion calculations, a level of computing power that some lawmakers and AI safety advocates believe could enable AI to create weapons of mass destruction or conduct catastrophic cyberattacks. California's proposed legislation adds an additional criterion, requiring regulated AI models to also cost at least $100 million to build....
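To make the threshold concrete, the check regulators describe can be sketched as a simple comparison of estimated training compute against 10^26 flops. The sketch below uses the common 6 × parameters × tokens approximation for total training compute; that approximation and the example model sizes are illustrative assumptions, not figures from the article.

```python
# Minimal sketch of the 10^26-flop reporting threshold described above.
# Assumption: total training compute is approximated as 6 flops per
# parameter per training token (a widely used rule of thumb, not part
# of the regulation itself).

THRESHOLD_FLOPS = 1e26  # 100 septillion floating-point operations


def training_flops(parameters: float, tokens: float) -> float:
    """Rough estimate of total training compute in flops."""
    return 6 * parameters * tokens


def needs_reporting(parameters: float, tokens: float) -> bool:
    """True if the estimated training run crosses the threshold."""
    return training_flops(parameters, tokens) >= THRESHOLD_FLOPS


# Hypothetical examples: a 1-trillion-parameter model trained on
# 20 trillion tokens, versus a 1-billion-parameter model on 1 trillion.
print(needs_reporting(1e12, 20e12))  # crosses the threshold
print(needs_reporting(1e9, 1e12))    # well below it
```

Note that California's SB 1047 would pair this compute test with the additional $100 million training-cost criterion mentioned above, so a real determination would check both conditions.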

Sep 4, 2024

Deepfakes are Posing a Growing Threat to India’s Financial Sector

The rising threat of deepfakes in India's financial sector: Deepfake technology is emerging as a significant concern for India's financial services industry (FSI), blurring the lines between authentic and fabricated content and potentially undermining trust in financial systems. A 2022 incident involving a deepfake audio of a Mumbai energy company CEO caused temporary stock price fluctuations, highlighting the rapid and tangible impact of such technologies on market stability. Financial sector leaders are increasingly worried about the potential for deepfakes to impersonate business executives and spread false information, which could have far-reaching consequences for market dynamics and investor confidence. The finance...

Sep 4, 2024

California Mandates Consent for AI Deepfakes of Deceased Stars

California's AI deepfake legislation: The California state Senate has passed AB 1836, a bill requiring explicit consent from the estates of deceased performers for the creation of AI replicas in various media projects. The bill covers all forms of digital recreation using AI, including still images, voice clones, and full character portrayals in films. Producers must obtain agreement from the estate or legal representative of the deceased performer before using their AI replica. This legislation follows the recent passage of AB 2602, which focuses on consent requirements for AI replicas of living performers. Industry support and implications: SAG-AFTRA, the union...

Sep 4, 2024

The Latest News on SB 1047, California’s Attempt to Govern Artificial Intelligence

California takes bold step towards AI regulation: The California legislature has passed SB 1047, a groundbreaking bill aimed at governing artificial intelligence systems, particularly focusing on the potential risks associated with foundation AI models. Key provisions of SB 1047: The bill introduces comprehensive AI safety requirements for companies operating in California, addressing concerns about the existential risks posed by advanced AI systems. Companies must implement precautions before training sophisticated foundation models, including the ability to quickly shut down the model if necessary. The legislation mandates protection against "unsafe post-training modifications" to AI models. A testing procedure must be established to...

Sep 4, 2024

Why Google’s $172M Deal With California May Fail to Solve the Journalism Crisis

Google's deal with California: A stopgap measure for journalism support: Google has agreed to commit over $172.5 million to journalism and artificial intelligence initiatives in California, highlighting the ongoing challenges faced by the news industry in the digital age. The agreement, negotiated with California Assembly member Buffy Wicks, replaces more complex regulatory and "link tax" legislation that Google had opposed. California is also contributing $70 million to journalism initiatives as part of the deal. This arrangement may set a precedent for similar agreements between tech giants and other states. Shortcomings of the current approach: The deal between Google and California...

Sep 4, 2024

UK Regulator Approves Microsoft’s AI Talent Acquisition

Regulatory decision on tech industry merger: The UK's Competition and Markets Authority (CMA) has concluded its antitrust investigation into Microsoft's hiring of Inflection AI staff, determining that the transaction does not pose significant competitive concerns. Joel Bamford, executive director of the CMA, confirmed that while the hiring of Inflection AI employees by Microsoft qualifies as a merger under UK law, it is unlikely to substantially reduce competition in the consumer chatbot market. The CMA's decision to close the probe suggests that the regulatory body does not view this particular transaction as a threat to fair competition in the rapidly evolving...

Sep 4, 2024

California Lawmakers Approve Wave of AI Regulations to Combat Deepfakes and Worker Replacement

California Takes Bold Steps to Regulate AI: The California Legislature has approved a series of bills aimed at regulating artificial intelligence, addressing concerns ranging from deepfakes to worker protection and AI literacy. Key legislative actions: Lawmakers have passed several bills to combat deepfakes, set safety guardrails for AI models, protect workers from AI replacement, and promote AI literacy in education. The proposed legislation targets various aspects of AI, including election interference, child protection, and worker rights. These bills now await Governor Gavin Newsom's decision, with a September 30 deadline for his action. Newsom has previously expressed caution about overregulation of...

Sep 3, 2024

Dutch Regulators Slam Clearview AI with $33M Fine for Privacy Breaches

Facial recognition controversy: Clearview AI, a facial recognition technology company, faces a substantial fine of approximately $33 million from the Dutch Data Protection Authority (DPA) for violating privacy regulations. The DPA's investigation revealed that Clearview AI constructed an illegal database containing billions of facial images by indiscriminately scraping the internet without obtaining consent, including photographs of individuals in the Netherlands. The company's database reportedly houses over 40 billion facial images collected globally without geographical restrictions, raising significant privacy concerns. Clearview AI's technology enables users to upload a photo and search for matching images across the internet, potentially allowing for detailed...

Aug 28, 2024

AI Deepfakes Spark Urgent Call for FEC Election Regulations

Congressional push for AI regulation: A group of Democratic lawmakers is urging the Federal Election Commission (FEC) to strengthen regulations on AI-generated deepfakes, particularly in light of the recent controversy surrounding X's chatbot Grok. Rep. Shontel Brown (D-Ohio) and several colleagues have written to the FEC, seeking clarification on whether AI-generated deepfakes of election candidates fall under the category of "fraudulent misrepresentation." The lawmakers are backing a July 2023 petition by Public Citizen that calls for the FEC to propose rules governing the use of deceptive AI in political campaigns. This initiative comes in response to growing concerns about the...

Aug 28, 2024

AI Reshapes Political Ads as 16 States Adopt New Laws

AI's growing role in political advertising: Recent advancements in generative artificial intelligence (AI) technologies have prompted lawmakers and regulators to address potential risks in the context of political advertising, particularly as the 2024 election cycle approaches. State and federal efforts are underway to regulate the use of AI in political ads, with a focus on transparency and preventing the spread of misinformation. As of August 2024, 16 states have adopted laws governing AI-generated content in political advertising, while another 16 states have bills under consideration. The Federal Communications Commission (FCC) has proposed a new rule requiring television and radio broadcast...

Aug 27, 2024

Experts Weigh In On Challenges of Implementing AI Safety

The evolving landscape of AI safety concerns: The AI safety community has experienced significant growth and increased public attention, particularly following the release of ChatGPT in November 2022. Helen Toner, a key figure in the AI safety field, notes that the community has expanded from about 50 people in 2016 to hundreds or thousands today. The release of ChatGPT in late 2022 brought AI safety concerns to the forefront of public discourse, with experts gaining unprecedented media attention and influence. Public interest in AI safety issues has since waned, with ChatGPT becoming a routine part of digital life and initial...

Aug 27, 2024

Elon Musk Backs California AI Safety Testing Bill

Elon Musk advocates for AI regulation in California: The Tesla CEO and owner of social media platform X has expressed support for a California bill that would require tech companies and AI developers to conduct safety testing on certain AI models. Musk stated on X that he has been advocating for AI regulation for over 20 years, emphasizing the need to regulate any product or technology that poses potential risks to the public. The bill in question, SB 1047, is one of 65 AI-related bills introduced by California state lawmakers this legislative session, according to the state's legislative database. Many...

Aug 26, 2024

Tech Giants Withhold AI Products from Europe Amid Regulatory Clash

Silicon Valley's strategic response to European tech regulations: Major U.S. technology companies are withholding key artificial intelligence products from the European market, signaling a growing tension between innovation and regulatory compliance. Meta and Apple have decided not to launch certain AI products in Europe, citing the region's regulatory environment as the primary reason for their decision. This move is being interpreted as a form of protest against Europe's tech rules, which these companies may view as overly restrictive or burdensome. The strategy of withholding products from specific markets due to regulatory concerns is not unprecedented in the tech industry, suggesting...
