News/AI Safety

Sep 30, 2024

AI is fueling an alarming surge in child exploitation cases

AI-generated child exploitation material: A growing crisis: The widespread availability of generative AI technology is fueling a surge in the creation and distribution of sexually explicit images and videos of children, posing significant challenges for law enforcement, schools, and society at large. Scale of the problem: The issue of AI-generated child sexual abuse material (CSAM) is rapidly expanding, affecting a substantial number of minors across the United States and globally. A report by the Center for Democracy and Technology found that 15% of high school students had heard of AI-generated sexually explicit images depicting someone associated with their school. A...

Sep 30, 2024

California governor vetoes major AI regulation bill

California's AI bill veto: A setback for regulation efforts: Governor Gavin Newsom of California has vetoed S.B. 1047, a groundbreaking artificial intelligence safety bill that would have implemented strict regulations on the technology. The bill, which passed both houses of the California Legislature by wide margins, aimed to establish safety testing requirements for large AI systems before their public release. It would have granted the state's attorney general the authority to sue companies for serious harm caused by their AI technologies, including death or property damage. A mandatory kill switch for AI systems was included in the bill to address potential...

Sep 27, 2024

Panda vs Eagle: existential risk and the need for US-China AI cooperation

A critical perspective on the US-China AI race: The recent resharing of Leopold Aschenbrenner's essay by Ivanka Trump has reignited discussions about artificial general intelligence (AGI) development and its geopolitical implications, particularly focusing on the potential race between the United States and China. The argument for an AI arms race: Aschenbrenner's essay suggests that AGI will be developed soon and advocates for the U.S. to accelerate its efforts to outpace China in this domain. The essay argues that AGI could be a game-changing technology, potentially offering a decisive military advantage comparable to nuclear weapons. Aschenbrenner frames the stakes in stark...

Sep 27, 2024

California governor faces deadline on crucial AI safety bill

California's AI safety bill nears decision point: Governor Gavin Newsom faces a critical deadline to sign or veto SB 1047, a controversial piece of legislation aimed at regulating artificial intelligence in the state. The ticking clock: With the September 30th deadline looming, Newsom must weigh the arguments from both supporters and critics of the bill, which has sparked intense debate within the tech industry and beyond since its introduction. Governor Newsom's decision will have far-reaching implications for the future of AI development and regulation in...

Sep 27, 2024

100+ companies have pledged compliance with the EU AI Act

Major tech players commit to EU AI regulations ahead of schedule: Over 100 companies, including industry giants like Google, Microsoft, Adobe, and Samsung, have pledged early compliance with the European Union's Artificial Intelligence Act, signaling a proactive approach to AI governance. The AI Act officially became law on August 1st, 2024, marking a significant milestone in regulating artificial intelligence technologies within the European Union. While some provisions of the Act, particularly those concerning "high risk" AI systems, are not set to be enforced until August 2027, many companies are voluntarily accelerating their compliance efforts. Notable signatories to the early compliance...

Sep 25, 2024

HP Finds Malware Attack Likely Built With Generative AI

AI-assisted malware attack targets French users: HP's Wolf Security researchers have uncovered a malicious email campaign likely developed with the help of generative AI, raising concerns about the evolving landscape of cybersecurity threats. In June, HP's anti-phishing system, Sure Click, flagged an unusual email attachment targeting French-speaking users. The attachment contained an HTML file that, when accessed with the correct password, revealed a ZIP archive containing AsyncRAT malware. AsyncRAT is an open-source remote access tool that can be misused to control victims' computers remotely. Unusual code characteristics raise suspicions: The malicious code found in the email attachment exhibited atypical...

Sep 25, 2024

Microsoft Unveils New AI Tool to Detect and Correct AI Errors

Microsoft introduces AI inaccuracy correction tool: Microsoft has launched a new feature called "correction" as part of its Azure AI Studio, designed to automatically detect and rewrite incorrect content in AI outputs. Key features of the correction tool: The system scans and identifies inaccuracies by comparing AI output with a customer's source material. It highlights mistakes, provides information about why they're incorrect, and rewrites the content. The process occurs before the user sees the inaccuracy, aiming to prevent the spread of misinformation. How it works: The correction feature uses small and large language models to align outputs with grounding documents...
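The compare-against-source step described above can be illustrated with a toy groundedness check. This is a minimal sketch of the general idea only — Microsoft's actual feature uses language models, and the function names and word-overlap heuristic here are invented for illustration:

```python
# Toy groundedness check: flag output sentences whose words barely
# overlap the grounding document. Illustrative heuristic only --
# real correction systems use language models, not word overlap.

def grounding_score(sentence: str, source: str) -> float:
    """Fraction of a sentence's words that also appear in the source text."""
    source_words = set(source.lower().split())
    words = [w.strip(".,") for w in sentence.lower().split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in source_words)
    return hits / len(words)

def flag_ungrounded(output: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return output sentences that fall below the overlap threshold."""
    sentences = [s.strip() for s in output.split(".") if s.strip()]
    return [s for s in sentences if grounding_score(s, source) < threshold]

source_doc = "The plan costs 10 dollars per month and includes 5 gigabytes of storage."
ai_output = "The plan costs 10 dollars per month. It includes unlimited free international calls."
print(flag_ungrounded(ai_output, source_doc))
# → ['It includes unlimited free international calls']
```

A production system would then pass each flagged sentence back to a model for rewriting against the source, which is the step the Azure feature automates.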

Sep 24, 2024

Entertainment Leaders Pen Letter Urging Newsom to Sign AI Safety Bill

Hollywood rallies for AI safety legislation: More than 125 entertainment industry leaders have signed a letter urging California Governor Gavin Newsom to sign a bill requiring advanced AI developers to implement safety measures. The bill, SB 1047, introduced by Senator Scott Wiener, would mandate that AI developers share safety plans with the state's attorney general and have mechanisms to shut down AI models if they pose a threat to public safety. Signatories include prominent figures such as J.J. Abrams, Shonda Rhimes, Judd Apatow, Ava DuVernay, Mark Hamill, Jane Fonda, and SAG-AFTRA leaders Fran Drescher and Duncan Crabtree-Ireland. The letter emphasizes...

Sep 23, 2024

AI-Driven Internet Freedom Alliance Grows to 41 Nations

Freedom Online Coalition's mission and growth: The Freedom Online Coalition, a group of democratic nations committed to safeguarding internet freedom, has expanded its membership and influence in recent years. The coalition has grown from 32 to 41 members since 2021, with recent additions including Cabo Verde, Slovenia, Colombia, and Taiwan as an observer. This growth reflects an increasing recognition among democracies of the importance of protecting digital rights and freedoms. The United States has played a key role in reinvigorating the coalition, as part of its broader commitment to defending democracy globally. Key initiatives and achievements: The coalition has undertaken...

Sep 23, 2024

Stanford HAI’s New Policy Fellow to Study AI’s Implications for Safety and Privacy

AI governance and civil liberties: Riana Pfefferkorn, a new policy fellow at the Stanford Institute for Human-Centered AI, is studying how AI governance can protect people's rights while mitigating harmful uses of the technology. Pfefferkorn's research covers a range of topics, including government approaches to encryption and digital surveillance, generative AI and online safety, and court evidence and trust. Her background blends legal expertise with a commitment to public interest, having advised startups, represented major tech companies, and clerked for a federal judge. At Stanford HAI, she will continue to bring law and policy analysis to social issues raised by...

Sep 23, 2024

UN Issues ‘Governing AI for Humanity’ Report — Here’s What It Says

The UN's call for a paradigm shift in AI governance: The United Nations High-level Advisory Body on Artificial Intelligence has released a report titled "Governing AI for Humanity," advocating for a fundamental change in how AI is governed globally, with a focus on ethics, human rights, and global equity. The report, released in September 2024, emphasizes the need for AI governance to prioritize societal well-being over profit. It highlights the risks associated with the rapid expansion of AI, including the potential to exacerbate social inequalities and amplify disinformation. The report's recommendations are aimed at addressing the fragmentation of current AI...

Sep 23, 2024

Unintended Consequences of AI Democratization: Anyone Can Be a Hacker

The rising threat of AI-powered cybercrime: Generative AI is lowering the barrier to entry for cybercriminals, enabling individuals with limited technical skills to engage in sophisticated hacking activities. The democratization of AI technology has made powerful hacking tools accessible to novices, potentially leading to increased cyber threats targeting various systems, from personal devices to critical infrastructure. AI-driven hacking tools available on the darknet can generate phishing content, malware, and other malicious software, posing significant risks to individuals and organizations alike. The proliferation of Internet-connected devices, including everyday items and essential systems like the electric grid, expands the potential attack surface...

Sep 23, 2024

UN Publishes ‘Pact of the Future’ to Tackle AI and Global Challenges

Global cooperation takes center stage: The United Nations General Assembly has approved a comprehensive "Pact of the Future" aimed at addressing pressing global challenges and fostering international cooperation. The 42-page document was endorsed at the opening of the two-day "Summit of the Future," an initiative spearheaded by U.N. Secretary-General Antonio Guterres. The pact seeks to unite nations worldwide in tackling 21st-century issues such as climate change, artificial intelligence, ongoing conflicts, inequality, and poverty. Guterres' call to action: The U.N. Secretary-General challenged world leaders to move beyond rhetoric and implement the pact's ambitious goals. Guterres emphasized the need for prioritizing dialogue,...

Sep 23, 2024

California AI Bill Faces Crucial Decision as Governor Weighs Options

California's AI legislation at a crossroads: Governor Newsom faces a critical decision on Senate Bill 1047, also known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," which aims to establish comprehensive AI governance constraints. The bill, currently awaiting action on Governor Newsom's desk, must be addressed by September 30, 2024, with options to sign it, veto it, or allow it to become law without his signature. SB 1047 has garnered significant attention nationwide due to its potential to be the first of its kind in AI legislation. Key arguments for and against SB 1047: The proposed legislation...

Sep 22, 2024

What It Really Means for Advanced AI Models to ‘Reason’

AI reasoning breakthrough: OpenAI's latest large language model, o1 (nicknamed Strawberry), represents a significant advancement in artificial intelligence capabilities, particularly in its ability to reason and "think" before providing answers. It is the first major LLM to incorporate a built-in "think, then answer" approach, moving beyond the limitations of previous models that often produced contradictory or inconsistent responses. The new model demonstrates markedly improved performance on challenging tasks across various fields, including physics, chemistry, biology, mathematics, and coding. The enhanced reasoning ability of o1 is achieved through a technique similar to chain-of-thought prompting, which encourages the model to show its...
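The chain-of-thought prompting the summary refers to can be shown with a plain example. This is a generic illustration of the prompting technique, not a description of o1's internal training, and the prompt wording is invented:

```python
# Chain-of-thought prompting: ask the model to write out intermediate
# reasoning steps before the final answer, instead of answering directly.
# Generic illustration of the technique -- not o1's internal mechanism.

def direct_prompt(question: str) -> str:
    """A plain question-answer prompt with no reasoning request."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """The same question, but with an explicit instruction to reason first."""
    return (
        f"Q: {question}\n"
        "Think step by step. Write out each intermediate step, "
        "then give the final answer on a line starting with 'Answer:'.\n"
        "A:"
    )

q = "A train travels 60 miles in 1.5 hours. What is its average speed?"
print(direct_prompt(q))
print(chain_of_thought_prompt(q))
```

With the second prompt, a model typically produces its working (60 ÷ 1.5) before the answer, which is the behavior o1 builds in by default rather than requiring it in the prompt.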

Sep 19, 2024

Opinion: Making SB 1047 a Law is the Best Way to Improve It

The legislative landscape: California Senate Bill 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aims to establish groundbreaking regulations for AI technology developers in the state. The bill requires AI companies to integrate safeguards into their "covered models" during development and deployment phases. It empowers the California attorney general to pursue civil actions against parties failing to take "reasonable care" in preventing catastrophic harms or enabling emergency shutdowns. SB 1047 represents a significant step towards regulating the rapidly evolving AI industry, potentially setting a precedent for other states and countries. Industry opposition and concerns:...

Sep 18, 2024

Global AI Safety Summit Set for San Francisco in November 2024

International collaboration on AI safety: The U.S. Department of Commerce and State Department announced the inaugural convening of the International Network of AI Safety Institutes, set to take place in San Francisco on November 20-21, 2024. The event aims to bring together technical experts on artificial intelligence from member countries' AI safety institutes or equivalent government-backed scientific offices. The primary goal is to align priority work areas for the Network and advance global collaboration and knowledge sharing on AI safety. This convening follows Secretary of Commerce Gina Raimondo's announcement of the Network's launch during the AI Seoul Summit in May....

Sep 18, 2024

AI Voice Scams Are Surging — Here’s How to Protect Yourself

AI voice-cloning scams pose growing threat: Starling Bank warns that millions could fall victim to fraudsters using artificial intelligence to replicate voices and deceive people into sending money. The UK-based online bank reports that scammers can clone a person's voice from just three seconds of audio found online, such as in social media videos. Fraudsters then use the cloned voice to impersonate the victim and contact their friends or family members, asking for money under false pretenses. Survey reveals alarming trends: A recent study conducted by Starling Bank and Mortar Research highlights the prevalence and potential impact of AI voice-cloning...

Sep 18, 2024

How to Train AI Chatbots Responsibly

The rise of AI chatbots and the need for responsible development: Generative artificial intelligence has emerged as a powerful tool with significant potential, but recent incidents have highlighted the importance of responsible AI practices in chatbot development. The legal and reputational consequences of AI mishaps, such as lawyers submitting fabricated documents and Air Canada's chatbot providing false information, have raised concerns about the technology's reliability. A 2023 Gallup/Bentley University survey revealed that only 21% of consumers trust businesses to handle AI responsibly, underscoring the need for improved practices. Instilling good manners in AI chatbots: Transparency and respect for user rights...

Sep 17, 2024

GSMA Announces ‘Responsible AI’ Roadmap for Telecoms Industry

Telecoms industry embraces responsible AI: The GSMA has launched a Responsible AI (RAI) Maturity Roadmap for the telecoms industry, aiming to promote ethical AI adoption while tapping into a $680 billion market opportunity. Key initiative details: The roadmap was developed in partnership with McKinsey and is supported by 19 major mobile network operators (MNOs). It represents the first unified, responsible AI framework for an entire industry, and it aligns with global AI regulations and ethical standards set by organizations like the OECD and UNESCO. Market potential and industry commitment: McKinsey estimates AI in telecoms could generate $680 billion over the next 15-20 years. The roadmap enables telecom...

Sep 17, 2024

AI Critic Gary Marcus Warns of Silicon Valley’s Moral Decline

Generative AI's rapid rise has sparked concerns about its societal impact and the ethical implications of Silicon Valley's push for artificial general intelligence (AGI). The big picture: Gary Marcus, NYU professor emeritus and AI critic, argues that Silicon Valley's moral decline and focus on short-term gains have led to the development of flawed generative AI systems with potentially dire consequences. Marcus's new book, "Taming Silicon Valley: How We Can Ensure That AI Works for Us," highlights the immediate threats posed by current generative AI technology, including political disinformation, market manipulation, and cybersecurity risks. The author traces this shift in Silicon...

Sep 16, 2024

The Benefits Generative AI Can Bring to Fraud Management and AML

Generative AI revolutionizes fraud management and anti-money laundering: The integration of generative AI (genAI) into fraud management and anti-money laundering (FRAML) initiatives is transforming the landscape of financial security, offering both new challenges and powerful solutions. Countering advanced fraud techniques: As fraudsters leverage genAI to create sophisticated fake IDs and deepfakes, financial institutions are compelled to adopt equally advanced defensive measures. Deepfake detection technologies powered by genAI are becoming increasingly sophisticated, employing methods such as spectral video analysis and behavioral biometrics. These advanced detection techniques are crucial in identifying and preventing fraud attempts that use AI-generated content to deceive traditional...

Sep 16, 2024

Larry Ellison Thinks Omnipresent AI Surveillance Will Ensure Good Behavior

Oracle co-founder envisions AI-powered surveillance future: Larry Ellison, during a company financial meeting, outlined a world where artificial intelligence systems would constantly monitor citizens through cameras and drones to ensure compliance with laws. The surveillance ecosystem: Ellison described a comprehensive network of AI-powered monitoring devices that would permeate everyday life, creating an environment of constant observation. AI models would analyze footage from various sources, including security cameras, police body cams, doorbell cameras, and vehicle dash cams. AI-controlled drones would replace police vehicles in high-speed pursuits, potentially reducing risks associated with traditional car chases. This extensive surveillance network aims to promote...

Sep 16, 2024

Stephen Fry’s Latest Take on How to Live Well In the AI Era

The AI revolution and its implications: Stephen Fry, renowned author and technology commentator, offers a compelling perspective on the rapid advancement of artificial intelligence and its potential to reshape society fundamentally. Fry characterizes AI as part of a larger technological convergence, including quantum computing, genomics, and robotics, that he likens to a "tsunami" poised to dramatically alter our world. The author draws parallels between the current AI revolution and past technological shifts, highlighting humanity's historical difficulty in accurately predicting the societal impacts of new innovations. Technological progress and unforeseen consequences: The rapid evolution of AI capabilities has been driven primarily...
