News / AI Safety
15 Nations Sign First Legally Binding AI Treaty
Groundbreaking AI treaty signed: The United States, United Kingdom, and European Union have taken a significant step towards regulating artificial intelligence by signing the first "legally binding" AI treaty. The Framework Convention on Artificial Intelligence aims to ensure AI systems align with human rights, democratic principles, and the rule of law. Key principles outlined in the treaty include protecting user data, respecting legal frameworks, and maintaining transparency in AI practices. Signatories are required to implement or maintain appropriate legislative, administrative, or other measures to reflect the framework's guidelines. Expanding international cooperation: The treaty's reach extends beyond major global powers, with...
How to Regulate Generative AI to Benefit the Healthcare Industry (Sep 5, 2024)
The rise of generative AI in medicine: Generative AI's emergence in healthcare poses unique regulatory challenges for the Food and Drug Administration (FDA) and global regulators, requiring a novel approach distinct from traditional drug and device regulation. The FDA's usual process of reviewing new drugs and devices for safety and efficacy before market entry is not suitable for generative AI applications in healthcare. Regulators need to conceptualize large language models (LLMs) as novel forms of intelligence, necessitating an approach more akin to how clinicians are regulated. This new regulatory framework is crucial for maximizing the clinical benefits of generative AI...
How Powerful Must AI Be To Be Dangerous? Regulators Did The Math To Find Out (Sep 5, 2024)
AI regulation embraces mathematical metrics: Governments are turning to measurements of computing power to identify potentially dangerous AI systems that require oversight. The U.S. government and California are using a threshold of 10^26 floating-point operations (FLOPs) of training compute to determine which AI models need reporting or regulation. That figure equates to 100 septillion calculations, a scale of computation that some lawmakers and AI safety advocates believe could enable AI to create weapons of mass destruction or conduct catastrophic cyberattacks. California's proposed legislation adds a second criterion, requiring regulated AI models to also cost at least $100 million to build....
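To make the threshold concrete, here is a minimal Python sketch (an illustration, not part of the article) that estimates a training run's total compute using the widely cited rule of thumb of roughly 6 × parameters × training tokens FLOPs for dense transformers, then compares it against the 10^26 FLOP line; the model sizes below are hypothetical.

```python
# Minimal sketch: estimating whether a training run crosses the 10^26 FLOP
# reporting threshold. Uses the common 6 * N * D heuristic for dense
# transformers; the example runs are hypothetical.

THRESHOLD_FLOPS = 1e26  # threshold cited by U.S. and California regulators

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6 * N * D rule of thumb."""
    return 6 * n_parameters * n_tokens

runs = {
    "70B params, 15T tokens": (70e9, 15e12),
    "400B params, 30T tokens": (400e9, 30e12),
    "1.8T params, 100T tokens": (1.8e12, 100e12),
}

for name, (params, tokens) in runs.items():
    flops = estimated_training_flops(params, tokens)
    status = "over" if flops >= THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} threshold)")
```

Note that the regulatory threshold counts total operations used in training, not operations per second, which is why a months-long run on a large cluster can cross it.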
OpenAI Co-Founder Secures $1B for New AI Safety Venture (Sep 5, 2024)
OpenAI co-founder launches rival AI venture: Ilya Sutskever, former chief scientist at OpenAI, has secured $1 billion in funding for his new artificial intelligence company, Safe Superintelligence (SSI), aimed at developing advanced AI systems with a focus on safety. Funding details and investors: The substantial investment in SSI comes from notable venture capital firms, highlighting the growing interest in AI safety and development. Andreessen Horowitz (a16z), a prominent VC firm known for its stance against California's AI safety bill, is among the investors backing SSI. Sequoia Capital, which has also invested in OpenAI, has contributed to the funding round, demonstrating...
US Government Partners with OpenAI and Anthropic for AI Safety Testing (Sep 4, 2024)
AI safety collaboration takes center stage: OpenAI and Anthropic have entered into groundbreaking agreements with the US government, granting early access to their latest AI models for safety testing before public release. The US Artificial Intelligence Safety Institute, housed at the National Institute of Standards and Technology (NIST), announced formal agreements with both companies to conduct AI safety research, testing, and evaluation. This partnership aims to ensure that public safety assessments are not solely dependent on the companies' internal evaluations but also include collaborative research with the US government. The US AI Safety Institute will work in conjunction with its UK...
AI Disinformation Detection Tools Are Falling Short in Global South (Sep 4, 2024)
The global challenge of AI-generated content detection: Current AI detection tools are failing to effectively identify artificially generated media in many parts of the world, particularly in the Global South, raising concerns about the spread of disinformation and its impact on democratic processes. As generative AI is increasingly used for political purposes worldwide, the ability to detect AI-generated content has become crucial for maintaining the integrity of information ecosystems. Most existing detection tools are only about 85-90% accurate at identifying AI-generated content, and that accuracy drops significantly when applied to content from non-Western countries. The limitations of these tools...
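As a brief illustration (a sketch of the general idea, not the article's methodology) of why a single aggregate accuracy number can mask regional failures, the snippet below evaluates a detector separately per region; the detector and labeled samples are hypothetical stand-ins.

```python
# Minimal sketch: disaggregating a detector's accuracy by region rather than
# reporting one global number. `toy_detector` and the samples are stand-ins.
from collections import defaultdict

def evaluate_by_region(detector, samples):
    """samples: iterable of (content, is_ai_generated, region) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for content, is_ai, region in samples:
        total[region] += 1
        correct[region] += int(detector(content) == is_ai)
    return {region: correct[region] / total[region] for region in total}

def toy_detector(content):
    # Stand-in for a model trained mostly on Western/English data.
    return "synthetic-marker" in content

samples = [
    ("synthetic-marker en", True, "North America"),
    ("human text en", False, "North America"),
    ("texte humain fr", False, "West Africa"),
    ("contenu génératif fr", True, "West Africa"),  # missed: no marker seen
]
print(evaluate_by_region(toy_detector, samples))
# {'North America': 1.0, 'West Africa': 0.5}
```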
‘Artificial Integrity’ Emerges as Key to Ethical Machine Learning (Sep 4, 2024)
The rise of Artificial Integrity: Artificial Integrity is emerging as a crucial paradigm in AI development, one that emphasizes the need for AI systems to operate in alignment with human values and ethical principles. It is described as a built-in capability that ensures AI systems function not just efficiently but with integrity, respecting human values from the outset. This approach prioritizes integrity over raw intelligence, aiming to address the ethical challenges posed by rapidly advancing AI technologies, and it applies across several modes of AI operation: Marginal, AI-First, Human-First, and Fusion. Understanding Artificial Integrity: Artificial Integrity...
The Latest News on SB 1047, California’s Attempt to Govern Artificial Intelligence (Sep 4, 2024)
California takes bold step towards AI regulation: The California legislature has passed SB 1047, a groundbreaking bill aimed at governing artificial intelligence systems, particularly focusing on the potential risks associated with foundation AI models. Key provisions of SB 1047: The bill introduces comprehensive AI safety requirements for companies operating in California, addressing concerns about the existential risks posed by advanced AI systems. Companies must implement precautions before training sophisticated foundation models, including the ability to quickly shut down the model if necessary. The legislation mandates protection against "unsafe post-training modifications" to AI models. A testing procedure must be established to...
AI Safety Strategies Gain Traction as Content Surges (Sep 1, 2024)
The growing importance of responsible AI content management: As AI-generated content becomes more prevalent, creators and platform owners face increasing pressure to ensure safe and appropriate use of their technologies. The blog post discusses the challenges of managing AI-generated content and offers practical advice for creating safer digital spaces. The author shares a personal experience where their AI model produced inappropriate content, highlighting the need for proactive measures to prevent misuse. Key strategies for safer AI spaces: The blog outlines several approaches to mitigate risks associated with AI-generated content and foster responsible use. Utilizing AI classifiers to filter out harmful...
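As a minimal sketch of the classifier-filtering idea (an assumption-laden illustration: `toxicity_score` is a hypothetical stand-in for whatever moderation model or API is actually used), generated output can be gated before it reaches users:

```python
# Minimal sketch: gate AI-generated output behind a moderation classifier.
# `toxicity_score` is a hypothetical stand-in for a real moderation model/API.

BLOCK_THRESHOLD = 0.8   # reject outright
REVIEW_THRESHOLD = 0.5  # route to a human reviewer

def toxicity_score(text: str) -> float:
    """Return a 0..1 probability that `text` is harmful (plug in your model)."""
    raise NotImplementedError("connect a moderation model or API here")

def moderate(generated_text: str) -> str:
    score = toxicity_score(generated_text)
    if score >= BLOCK_THRESHOLD:
        return "blocked"
    if score >= REVIEW_THRESHOLD:
        return "needs_human_review"
    return "approved"
```

The two-threshold design keeps clearly harmful output out automatically while routing borderline cases to people instead of silently dropping them.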
ChatGPT Reaches 200M Users as AI Adoption Soars (Aug 30, 2024)
ChatGPT's explosive growth: OpenAI's ChatGPT has reached a milestone of over 200 million weekly active users, doubling its user base since November 2023. This significant growth comes amid increasing competition in the AI chatbot market from tech giants like Meta, Google, and emerging players like Anthropic. The rapid adoption of ChatGPT demonstrates the growing mainstream acceptance and integration of AI language models in various sectors. Enterprise adoption and API usage: OpenAI's products have gained substantial traction in the corporate world, with widespread implementation across Fortune 500 companies. An impressive 92% of Fortune 500 companies are now utilizing OpenAI's products, showcasing...
AI’s Ambiguous Role in Addressing Healthcare Disparities (Aug 27, 2024)
AI's dual potential in healthcare equity: Artificial intelligence is poised to transform healthcare, but its impact on reducing health disparities remains uncertain, with both promising applications and concerning risks. Current state of health disparities: Significant inequities persist in the U.S. healthcare system, particularly affecting communities of color and underserved populations. African Americans and other minority groups continue to face severe health disparities, with many reporting experiences of discrimination and mistrust in healthcare settings. Latino communities encounter obstacles in obtaining quality care, often due to language barriers and lack of insurance, leading to disproportionate rates of certain diseases. Despite targeted solutions...
Experts Weigh In On Challenges of Implementing AI Safety (Aug 27, 2024)
The evolving landscape of AI safety concerns: The AI safety community has experienced significant growth and increased public attention, particularly following the release of ChatGPT in November 2022. Helen Toner, a key figure in the AI safety field, notes that the community has expanded from about 50 people in 2016 to hundreds or thousands today. The release of ChatGPT in late 2022 brought AI safety concerns to the forefront of public discourse, with experts gaining unprecedented media attention and influence. Public interest in AI safety issues has since waned, with ChatGPT becoming a routine part of digital life and initial...
Elon Musk Backs California AI Safety Testing Bill (Aug 27, 2024)
Elon Musk advocates for AI regulation in California: The Tesla CEO and owner of social media platform X has expressed support for a California bill that would require tech companies and AI developers to conduct safety testing on certain AI models. Musk stated on X that he has been advocating for AI regulation for over 20 years, emphasizing the need to regulate any product or technology that poses potential risks to the public. The bill in question, SB 1047, is one of 65 AI-related bills introduced by California state lawmakers this legislative session, according to the state's legislative database. Many...
Tech Giants Push for Open-Source AI to Fuel Innovation (Aug 25, 2024)
AI industry leaders advocate for open-source models: Mark Zuckerberg and Daniel Ek make a compelling case for open-sourcing AI software, particularly in Europe, to prevent power concentration and foster innovation. Zuckerberg and Ek argue that open-sourcing AI models creates a level playing field and ensures power isn't concentrated among a few large players. The approach aligns with Meta's recent shift in priorities, focusing more on AI investments rather than the "metaverse." This stance marks a notable change in perception for Zuckerberg, who has faced criticism for past decisions but is now gaining support for his AI-focused strategy. The future of...
Copilot Falsely Accuses Journalist Who Is Now Suing Microsoft (Aug 25, 2024)
AI-generated defamation incident: A German journalist, Martin Bernklau, became the victim of false and defamatory statements generated by Microsoft's Copilot AI, raising concerns about the responsibility of AI companies for the content their systems produce. Bernklau, who has decades of experience reporting on criminal trials, discovered that Copilot AI had falsely accused him of various crimes, including child abuse and exploiting widows as an undertaker. The AI system mistakenly attributed crimes Bernklau had reported on to the journalist himself, conflating the reporter with the subjects of his articles. In addition to the false accusations, Copilot also disclosed personal information of...
OpenAI Exec Says California AI Safety Bill Might Hinder Progress (Aug 23, 2024)
California's AI safety bill sparks debate: OpenAI's chief strategy officer Jason Kwon has voiced opposition to California's proposed SB 1047 AI safety bill, igniting a discussion on the appropriate level of AI regulation. The bill, introduced by State Senator Scott Wiener, aims to establish safety standards for powerful AI models, including requirements for pre-deployment safety testing and whistleblower protections. Kwon argues that AI regulations should be left to the federal government rather than individual states, claiming the bill could impede progress and drive companies out of California. In response, Senator Wiener contends that the bill is reasonable and would apply...
AI Giants Clash Over California’s Proposed Safety Bill (Aug 23, 2024)
AI safety legislation in California sparks debate: Anthropic and OpenAI, two leading AI companies, have expressed differing views on California's proposed AI safety bill, SB 1047, highlighting the complex landscape of AI regulation. Anthropic's cautious support: Anthropic CEO Dario Amodei has communicated a measured stance on the California AI safety bill, acknowledging both its potential benefits and drawbacks. In a letter to Governor Gavin Newsom, Amodei stated that the benefits of SB 1047 "likely outweigh its costs," indicating a tentative support for the legislation. However, Amodei expressed concerns about potential government overreach, suggesting that the bill should maintain a "laser...
Why More Leaders Are Emphasizing ‘Prosocial AI’ to Guide Product Development (Aug 21, 2024)
The rise of Prosocial AI: Prosocial AI, an approach to artificial intelligence development that prioritizes societal well-being and ethical considerations, is gaining traction as businesses seek to align technological advancements with human values and social goals. Unlike traditional AI systems focused primarily on efficiency and profit, Prosocial AI operates on principles of fairness, transparency, and inclusivity, aiming to promote positive social behaviors and collaboration. This approach to AI development considers the well-being of individuals, communities, society, and the planet as a whole, offering a more holistic perspective on technological progress. Real-world applications: Prosocial AI is already making an impact across...
AI Safety Remains Critical Beyond Market Hype and Busts (Aug 18, 2024)
The AI hype cycle as a distraction from fundamental challenges: The current boom and potential bust in artificial intelligence companies and products are diverting attention from the critical issues surrounding AI safety and responsible development. While concerns about overblown AI hype and delayed commercial applications are growing, these short-term market fluctuations should not overshadow the long-term trajectory and implications of AI development. The core challenge remains: how to properly control and supervise increasingly powerful AI systems that could be developed in the near future. Even if the next generation of AI models fails to deliver significant improvements, AI's gradual transformation...
AI Fraud Detection Backfires, Freezing Customer’s £12,800 Transfer (Aug 18, 2024)
AI-driven fraud detection causes banking headache: The intersection of artificial intelligence and financial security has created unexpected challenges for both banks and their customers, as demonstrated by a recent incident involving Starling Bank and a UK academic. The incident: John MacInnes, an Edinburgh academic, faced significant obstacles when attempting to transfer €15,000 (about £12,800) to a long-time friend in Austria, leading to a series of escalating issues with Starling Bank. MacInnes' initial attempt to send the money to assist a friend with cashflow problems was blocked by Starling's fraud detection system. The bank's fraud team made what MacInnes described as "absurd demands" for...
California’s Controversial AI Bill Was Just Amended — Here’s What You Need to Know (Aug 18, 2024)
California's AI regulation bill undergoes significant changes: State Senator Scott Wiener's controversial SB 1047, aimed at protecting Californians from potential AI-driven catastrophes, has been amended to address concerns from the tech industry. The bill initially proposed requiring AI companies to share safety plans with the attorney general and face penalties for catastrophic events, sparking debate among lawmakers, tech companies, and industry experts. Recent amendments have altered key aspects of the bill, including the removal of a perjury penalty and changes to the legal standard for developers regarding AI model safety. Plans for a new government entity called the Frontier Model...
AI Risk Assessment Lags Behind Soaring Adoption, PwC Survey Finds (Aug 16, 2024)
Generative AI adoption is surging among organizations, but risk assessment lags behind, according to a recent PwC survey of U.S. executives. Widespread adoption of generative AI: A significant majority of organizations are embracing or planning to implement generative AI technologies, reflecting a growing trend in the business world. PwC's survey of 1,001 U.S. executives revealed that 73% of organizations are currently using or planning to use generative AI. This high adoption rate indicates a strong interest in leveraging AI capabilities to enhance business operations and drive innovation. Risk assessment gap: Despite the rapid adoption of generative AI, many organizations are...
California AI Bill Nears Vote With Key Changes to Safety Rules (Aug 16, 2024)
California's AI regulation bill undergoes significant amendments, balancing innovation with safety concerns as it nears a crucial vote by the end of August. Key changes to California's AI bill: The amended SB 1047 introduces new restrictions on artificial intelligence while addressing industry concerns about potential overregulation. Lawmakers have revised the bill to require companies to test the safety of AI models before public release, striking a balance between innovation and public protection. The California attorney general would gain the authority to sue companies if their AI systems cause serious harm, providing a legal recourse for potential AI-related damages. Regulatory duties have been shifted...
Security Vulnerabilities Found in Microsoft’s AI Healthcare Bots (Aug 15, 2024)
Critical vulnerability discovered: Cybersecurity researchers at Tenable uncovered serious security flaws in Microsoft's Azure Health Bot Service, potentially exposing sensitive patient health information to unauthorized access. The vulnerability allowed researchers to gain access to "hundreds and hundreds of resources belonging to other customers," highlighting the severity of the security breach. The flaw was identified in the data-connection component that enables bots to interact with external data sources, where researchers found they could connect using a malicious external host and obtain leaked access tokens. Azure Health Bot Service is widely used by healthcare organizations to deploy AI-powered virtual health assistants capable...
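To illustrate the class of flaw described above (a generic mitigation sketch, not Microsoft's actual fix): a service that fetches customer-supplied data-connection URLs should verify where those URLs actually resolve before connecting, since link-local destinations such as a cloud metadata endpoint are a classic route to leaked access tokens.

```python
# Minimal sketch (illustrative, not Microsoft's fix): before a bot service
# fetches a customer-configured external data source, resolve the host and
# refuse private, loopback, or link-local targets (e.g., a cloud metadata
# service), which are common server-side request forgery destinations.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_external_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        # info[4][0] is the resolved address; strip any IPv6 scope suffix.
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True

# The fetcher should also refuse to follow redirects automatically
# (e.g., requests.get(url, allow_redirects=False)), so a vetted host
# cannot bounce the request to an internal endpoint.
```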