News/Regulation
How to Balance Innovation Incentives and Regulation in AI
The AI regulation dilemma: Balancing market innovation and regulatory oversight in the rapidly evolving field of artificial intelligence presents significant challenges for policymakers and industry leaders. A recent report from Brookings investigates how to strike this balance effectively. The report highlights the complexity of regulating new and fast-changing technologies like AI, from both theoretical and empirical perspectives. Market forces and regulation jointly shape the direction of technological development, rather than regulation alone driving innovation. There is a general tendency for innovation to be under-incentivized, and overly stringent AI regulations could potentially worsen underinvestment and stifle experimentation in the field. Market...
Sep 23, 2024
LinkedIn AI Backlash Highlights Need for EU-Like Privacy Protections
LinkedIn's AI training sparks privacy concerns: LinkedIn's decision to use user data for training its AI tools has ignited a debate about data privacy and user consent in the tech industry. The professional networking platform has begun using member data to improve its AI capabilities, a move that has drawn criticism from users concerned about privacy and transparency. This decision follows similar actions by other tech giants like Meta (Facebook, Instagram) and X (Twitter), who have also leveraged user data for AI development. Notably, LinkedIn has excluded users in the European Union, European Economic Area, and Switzerland from this data...
Sep 23, 2024
Preventing Imperialism and Monopoly In The AI Era
The dawn of AI governance: As artificial intelligence becomes a significant driver of economic growth, there's a pressing need to prevent the concentration of power in the tech sector from leading to a new form of digital dominance. The rise of AI presents both opportunities and challenges for society, employees, customers, and organizations. There are growing concerns about the potential for unchecked corporate power in the digital age, reminiscent of historical monopolistic practices. Historical parallels and modern implications: Today's tech giants, such as Google, Facebook, and Amazon, wield power in ways that echo the monopolistic practices of past corporate entities...
Sep 23, 2024
UN Issues ‘Governing AI for Humanity’ Report — Here’s What It Says
The UN's call for a paradigm shift in AI governance: The United Nations High-level Advisory Body on Artificial Intelligence has released a report titled "Governing AI for Humanity," advocating for a fundamental change in how AI is governed globally, with a focus on ethics, human rights, and global equity. The report, released in September 2024, emphasizes the need for AI governance to prioritize societal well-being over profit. It highlights the risks associated with the rapid expansion of AI, including the potential to exacerbate social inequalities and amplify disinformation. The report's recommendations are aimed at addressing the fragmentation of current AI...
Sep 23, 2024
California AI Bill Faces Crucial Decision as Governor Weighs Options
California's AI legislation at a crossroads: Governor Newsom faces a critical decision on Senate Bill 1047, also known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," which aims to establish comprehensive AI governance constraints. The bill is currently awaiting action on Governor Newsom's desk; he must act by September 30, 2024, and can either sign it, veto it, or allow it to become law without his signature. SB 1047 has garnered significant attention nationwide due to its potential to be the first of its kind in AI legislation. Key arguments for and against SB 1047: The proposed legislation...
Sep 21, 2024
Why Congress Must Act Now to Protect Elections from AI
AI's growing impact on elections: Artificial intelligence is increasingly influencing electoral processes, creating new opportunities for voter manipulation and deception. The 2024 general election is approaching, and AI is already playing a significant role in shaping political communication and voter perceptions. Bad actors can use AI to create convincing "deepfakes" - deceptive audio or visual content that portrays false or distorted realities, potentially misleading voters about candidates' actions or statements. Recent examples of AI misuse in elections: Several incidents have demonstrated the potential for AI to be used maliciously in electoral contexts. In New Hampshire's 2024 presidential primary, AI-generated robocalls...
Sep 20, 2024
The Brains and Brawn of AI Models and How to Understand Their Output
Recent insights from a talk by Devavrat Shah shed light on conceptual frameworks for understanding and regulating artificial intelligence systems. The mind and muscle of AI: Cognitive output, whether from humans or AI, can be viewed as a combination of learning capability (mind) and mechanistic automation (muscle). The 'mind' component represents the learning aspect, involving data interpretation and logical reasoning. The 'muscle' refers to the brute-force application of assessment to data, or what Shah terms 'mechanistic automation'. This conceptual framework helps in distinguishing between AI systems that simply process large amounts of data and those that demonstrate more sophisticated learning...
Sep 20, 2024
AI Could Become Dominated By a Few Multinationals, UN Warns
AI governance takes center stage: The United Nations' High-Level Advisory Body on Artificial Intelligence has released its first report, outlining seven key recommendations to address the risks and governance challenges associated with AI technology. The report, titled "Governing AI for humanity," emphasizes the need for a global dialogue on AI governance, highlighting that the European Union's AI Act is one of the few existing regulatory frameworks in this space. A primary concern raised in the report is the potential for AI technology to be controlled by a small number of multinational corporations, potentially leading to its imposition on people without...
Sep 19, 2024
Opinion: Making SB 1047 a Law is the Best Way to Improve It
The legislative landscape: California Senate Bill 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aims to establish groundbreaking regulations for AI technology developers in the state. The bill requires AI companies to integrate safeguards into their "covered models" during development and deployment phases. It empowers the California attorney general to pursue civil actions against parties failing to take "reasonable care" in preventing catastrophic harms or enabling emergency shutdowns. SB 1047 represents a significant step towards regulating the rapidly evolving AI industry, potentially setting a precedent for other states and countries. Industry opposition and concerns:...
Sep 19, 2024
Democratic Senators Urge FCC to Pass Proposal for AI Political Ads
FCC considers AI disclosure rule for political ads: The Federal Communications Commission is weighing a proposal to require broadcasters to disclose the use of artificial intelligence in political advertisements as the 2024 presidential election approaches. Democratic senators, led by Sen. Ben Ray Luján of New Mexico, have urged the FCC to adopt the proposed rules, emphasizing the urgency given the proximity of the election and the fact that some states have already begun early voting. The proposal, introduced by FCC Chairwoman Jessica Rosenworcel in July, would mandate broadcasters to inquire about AI-generated content in political ads and disclose this information...
Sep 19, 2024
Adult Industry Professionals are Demanding a Say in AI Rules
Adult industry professionals seek voice in AI regulation: A coalition of sex industry professionals and advocates has issued an open letter to EU regulators, asserting that their perspectives are being overlooked in crucial discussions on AI regulation. The group, known as Open Mind AI, includes a diverse range of stakeholders such as sex workers, erotic filmmakers, sex tech enterprises, and sex educators. They are urging the European Commission to include their voices in future negotiations that will shape AI regulations, emphasizing the importance of their unique insights. Notable participants in this initiative include Erika Lust's company and the European Sex...
Sep 19, 2024
The UN is Treating AI with the Same Urgency as Climate Change
AI governance takes center stage at UN: The United Nations has released a report proposing a global effort to monitor and govern artificial intelligence, signaling a shift towards treating AI with the same urgency as climate change. The report, produced by the UN secretary general's High Level Advisory Body on AI, recommends creating a body similar to the Intergovernmental Panel on Climate Change to gather up-to-date information on AI and its risks. A new policy dialog on AI is proposed to allow the UN's 193 members to discuss risks and agree upon actions. The report emphasizes empowering poorer nations, especially...
Sep 18, 2024
California’s New Laws Could Kill Election Deepfakes in Your Social Feeds
California takes legislative action against AI-generated election misinformation: Governor Gavin Newsom has signed three bills into law aimed at combating the spread of deepfakes and AI-generated content during elections, positioning the state as a leader in AI regulation. Key provisions of AB 2839: This bill expands existing protections against deceptive AI-generated election materials and introduces stricter timelines for enforcement. Distribution of AI-generated election content is prohibited within 120 days before an election and 60 days after. Officials and candidates are granted the right to sue in order to prevent the distribution of such materials. The bill is set to take...
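The 120-day/60-day enforcement window in AB 2839 can be made concrete with a short date check. This is an illustrative sketch only: the function name and the simplified window logic are assumptions for this example, not language from the bill.

```python
from datetime import date, timedelta

def in_restricted_window(content_date: date, election_date: date) -> bool:
    """Return True if content_date falls in the period during which AB 2839
    prohibits distribution of deceptive AI-generated election content:
    120 days before the election through 60 days after it."""
    window_start = election_date - timedelta(days=120)
    window_end = election_date + timedelta(days=60)
    return window_start <= content_date <= window_end

# Example against the Nov 5, 2024 general election
election = date(2024, 11, 5)
print(in_restricted_window(date(2024, 9, 17), election))  # True: within 120 days before
print(in_restricted_window(date(2024, 6, 1), election))   # False: before the window opens
```

A real compliance check would also need to handle the bill's other elements (what counts as "materially deceptive" content, exemptions for satire and labeled material), which this sketch deliberately omits.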
Sep 17, 2024
AI Critic Gary Marcus Warns of Silicon Valley’s Moral Decline
Generative AI's rapid rise has sparked concerns about its societal impact and the ethical implications of Silicon Valley's push for artificial general intelligence (AGI). The big picture: Gary Marcus, NYU professor emeritus and AI critic, argues that Silicon Valley's moral decline and focus on short-term gains have led to the development of flawed generative AI systems with potentially dire consequences. Marcus's new book, "Taming Silicon Valley: How We Can Ensure That AI Works for Us," highlights the immediate threats posed by current generative AI technology, including political disinformation, market manipulation, and cybersecurity risks. The author traces this shift in Silicon...
Sep 17, 2024
California Enacts Landmark AI Safeguards for Performers
California takes lead on AI regulation in entertainment: Governor Gavin Newsom has signed two bills into law, backed by SAG-AFTRA, to regulate the use of artificial intelligence-generated performances in the entertainment industry. Key provisions of the new legislation: The bills aim to protect performers' rights and consent in the rapidly evolving landscape of AI-generated content. AB 2602 requires contracts for AI performances to clearly state the intended use, preventing broad likeness rights from automatically granting permission for AI replicas. AB 1836 extends these protections to deceased performers, giving their estates the right to consent to AI replicas for 70 years...
Sep 17, 2024
Bipartisan Bill Targets AI Misinformation in Political Campaigns
AI regulation in political campaigns: Lawmakers are introducing bipartisan legislation to prohibit the use of artificial intelligence for misrepresenting political opponents in campaigns and political advertising. The bill aims to give the Federal Election Commission (FEC) the authority to regulate AI use in elections, similar to how it has regulated other forms of political misrepresentation for decades. This legislation comes as Congress has struggled to keep pace with regulating rapidly evolving AI technology, raising concerns about its potential to overwhelm voters with misinformation. Experts have expressed particular worry about the dangers posed by "deepfakes," which are AI-generated videos and memes...
Sep 16, 2024
The Benefits Generative AI Can Bring to Fraud Management and AML
Generative AI revolutionizes fraud management and anti-money laundering: The integration of generative AI (genAI) into fraud management and anti-money laundering (FRAML) initiatives is transforming the landscape of financial security, offering both new challenges and powerful solutions. Countering advanced fraud techniques: As fraudsters leverage genAI to create sophisticated fake IDs and deepfakes, financial institutions are compelled to adopt equally advanced defensive measures. Deepfake detection technologies powered by genAI are becoming increasingly sophisticated, employing methods such as spectral video analysis and behavioral biometrics. These advanced detection techniques are crucial in identifying and preventing fraud attempts that use AI-generated content to deceive traditional...
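One family of spectral techniques mentioned above can be illustrated with a toy feature: the fraction of an image's spectral energy at high frequencies, which deepfake-detection research has used because GAN-generated frames often carry atypical high-frequency artifacts. This is a sketch of the idea only, not a working detector; the function name, cutoff value, and frame handling are assumptions.

```python
import numpy as np

def high_freq_energy_ratio(frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of 2-D FFT energy falling outside the central low-frequency
    band of a grayscale frame (band half-width = cutoff * frame size)."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))  # DC moved to the center
    energy = np.abs(spectrum) ** 2
    h, w = frame.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = energy[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return float(1.0 - low / energy.sum())

# A smooth gradient concentrates energy at low frequencies; random noise does not.
rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = rng.standard_normal((64, 64))
assert high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy)
```

Production systems combine many such signals (per-frame spectra, temporal consistency, behavioral biometrics) inside trained classifiers rather than relying on a single hand-set threshold.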
Sep 16, 2024
Stephen Fry’s Latest Take on How to Live Well In the AI Era
The AI revolution and its implications: Stephen Fry, renowned author and technology commentator, offers a compelling perspective on the rapid advancement of artificial intelligence and its potential to reshape society fundamentally. Fry characterizes AI as part of a larger technological convergence, including quantum computing, genomics, and robotics, that he likens to a "tsunami" poised to dramatically alter our world. The author draws parallels between the current AI revolution and past technological shifts, highlighting humanity's historical difficulty in accurately predicting the societal impacts of new innovations. Technological progress and unforeseen consequences: The rapid evolution of AI capabilities has been driven primarily...
Sep 16, 2024
Leading Scientists Call for Protections Against Catastrophic AI Risks
AI safety concerns gain urgency: Leading AI scientists are calling for a global oversight system to address potential catastrophic risks posed by rapidly advancing artificial intelligence technology. The release of ChatGPT and similar AI services capable of generating text and images on command has demonstrated the powerful capabilities of modern AI systems. AI technology has quickly moved from the fringes of science to widespread use in smartphones, cars, and classrooms, prompting governments worldwide to grapple with regulation and utilization. A group of influential AI scientists has issued a statement warning that AI could surpass human capabilities within years, potentially leading...
Sep 14, 2024
OpenAI’s New o1 Model is Raising Ethical Concerns for its Ability to Deceive
Advancing AI capabilities while grappling with safety concerns: OpenAI's latest AI system, o1 (nicknamed Strawberry), showcases improved reasoning abilities but also raises significant safety and ethical concerns. Key features of Strawberry: The new AI system demonstrates enhanced cognitive capabilities, positioning it as a significant advancement in artificial intelligence. Strawberry is designed to "think" or "reason" before responding, allowing it to solve complex logic puzzles, excel in mathematics, and write code. The system employs "chain-of-thought reasoning," which enables researchers to observe and analyze its thinking process. OpenAI claims that these reasoning capabilities can potentially make AI safer by allowing it to...
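The idea behind observable chain-of-thought reasoning, recording intermediate steps alongside the final answer so the steps can be inspected, can be shown with a toy solver. Everything in this sketch is invented for illustration and has no connection to OpenAI's actual implementation.

```python
def solve_with_trace(a: int, b: int):
    """Multiply a * b by place-value decomposition, returning both the
    reasoning trace and the final answer so the steps can be audited."""
    trace = []
    tens, ones = divmod(b, 10)
    trace.append(f"split {b} into {tens} tens and {ones} ones")
    part1 = a * tens * 10
    trace.append(f"{a} * {tens * 10} = {part1}")
    part2 = a * ones
    trace.append(f"{a} * {ones} = {part2}")
    answer = part1 + part2
    trace.append(f"{part1} + {part2} = {answer}")
    return trace, answer

steps, result = solve_with_trace(24, 17)
# result == 408; each entry in `steps` is an inspectable intermediate step
```

The safety argument in the summary above rests on exactly this property: when intermediate steps are visible, a wrong or deceptive answer can in principle be traced to the step where it went wrong.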
Sep 14, 2024
G20 Nations Agree on AI Guidelines to Combat Disinformation
G20 leaders unite against disinformation and set AI guidelines: The Group of 20 nations have reached a landmark agreement to combat disinformation and establish guidelines for artificial intelligence development, marking a significant step in addressing global digital challenges. Key takeaways from the G20 meeting: For the first time in G20 history, the group has officially recognized the problem of disinformation and called for transparency and accountability from digital platforms. The ministers agreed to set up guidelines for developing AI, emphasizing ethical, transparent, and accountable use with human oversight. The agreement aims to ensure compliance with privacy and human rights laws...
Sep 13, 2024
OpenAI’s New o1 Model Is Already Sparking Safety Concerns
Groundbreaking AI model raises safety concerns: OpenAI's new o1-preview model, designed for enhanced reasoning capabilities, has sparked warnings from AI experts about potential risks associated with increasingly capable artificial intelligence systems. OpenAI's o1-preview model, codenamed 'Project Strawberry', is now available to ChatGPT Plus and Team subscribers and through the company's API. The model demonstrates significant improvements in problem-solving abilities across various fields, including mathematics, coding, and scientific disciplines. OpenAI also introduced o1-mini, a faster and more affordable version of the reasoning model, particularly effective for coding applications. Performance benchmarks: The new o1-preview model has shown remarkable improvements in various challenging tasks, outperforming...
Sep 13, 2024
What To Know About The EU AI Act
EU Artificial Intelligence Act sets new standards for AI regulation: The European Union has introduced comprehensive legislation aimed at ensuring safe, trustworthy, and human-centric use of AI technologies across various sectors. The EU AI Act has a broad extraterritorial reach, applying to entities operating in or supplying AI systems to the EU, regardless of their headquarters location. Different obligations are established for various actors in the AI value chain, including GPAI model providers, deployers, manufacturers, and importers. The legislation adopts a risk-based approach, with higher-risk use cases subject to more stringent requirements and enforcement. Compliance and penalties: The Act...
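The risk-based structure described above can be sketched as a simple lookup. The tier names follow the Act's broad categories (prohibited, high-risk, limited-risk, minimal-risk), but the example use cases and the mapping itself are simplified illustrations, not legal classifications.

```python
# Illustrative mapping of example AI use cases to the EU AI Act's broad
# risk tiers. Keys and obligations are simplified for this sketch.
RISK_TIERS = {
    "social_scoring": "prohibited",
    "biometric_id_in_hiring": "high-risk",
    "customer_service_chatbot": "limited-risk (transparency duties)",
    "spam_filter": "minimal-risk",
}

def obligations(use_case: str) -> str:
    """Return the illustrative risk tier for a use case, or flag it for
    assessment if it is not in the toy mapping."""
    return RISK_TIERS.get(use_case, "unclassified: requires case-by-case assessment")

print(obligations("social_scoring"))  # prohibited
```

In practice, classification under the Act depends on detailed criteria in its annexes and on who in the value chain (provider, deployer, importer) is performing which role, none of which a flat lookup captures.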
Sep 11, 2024
Yann LeCun and Geoffrey Hinton Clash Over AI Safety Bill SB 1047
AI safety debate intensifies: California's AI safety bill SB 1047 has sparked a fierce debate among AI pioneers, with Yann LeCun and Geoffrey Hinton taking opposing stances on the legislation. Yann LeCun, Meta's chief AI scientist, publicly criticized supporters of SB 1047, arguing they have a "distorted view" of AI's near-term capabilities. Geoffrey Hinton, often called the "godfather of AI," endorsed the bill by signing an open letter urging Governor Gavin Newsom to approve the legislation. The disagreement between these two influential figures highlights the deep divisions within the AI community regarding regulation and safety measures. Key provisions of SB...