News/Governance
The ethics debate we should be having about AI agents
The rapid advancement of AI technology is moving beyond generative models toward AI agents that can both perform tasks and simulate human behavior, raising new ethical considerations about digital identity and interaction. Current state of AI agents: Two distinct categories of AI agents are emerging in the technological landscape, each with unique capabilities and applications.
- Tool-based agents, introduced by companies like Anthropic and Salesforce, can understand natural language commands to perform digital tasks such as form-filling and web navigation
- OpenAI is reportedly preparing to launch its own tool-based agent in January 2025
- Simulation agents, initially developed for social science research,...
Nov 28, 2024: G2 announces new software category for AI governance tools
The rapid adoption of artificial intelligence across industries has created an urgent need for specialized tools to ensure responsible and compliant AI implementation. Category introduction and significance: G2 has launched a new AI Governance Tools category to help organizations effectively manage and monitor their AI systems.
- These tools are becoming increasingly critical as companies rush to adopt AI technologies while facing growing regulatory scrutiny
- The category encompasses various solutions, including compliance management and bias detection tools
- Organizations can use these tools to optimize AI systems while maintaining regulatory compliance
Core functionalities and features: AI governance tools provide comprehensive oversight of...
Nov 28, 2024: What does AI hold for the future? Just follow this map
The development of artificial intelligence and its potential impact on humanity's future can be explored through a new interactive flowchart tool that allows users to visualize different AI development scenarios and their probabilities. Project Overview: The "Map of AI Futures" is an interactive flowchart tool designed to help users explore various scenarios regarding how artificial intelligence might develop and impact humanity.
- The tool uses a system of nodes and conditional probabilities to map out potential AI development paths and outcomes
- Users can adjust probability sliders to see how different assumptions affect the likelihood of various scenarios
- Outcomes are categorized into...
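The node-and-slider mechanism described above amounts to multiplying conditional probabilities along each path through the flowchart. A minimal sketch of that idea follows; the node names and probability values here are hypothetical illustrations, not figures taken from the actual tool.

```python
# Illustrative sketch of a conditional-probability flowchart, in the spirit of
# the "Map of AI Futures" tool. Node names and probabilities are made up.

def path_probability(edges):
    """Multiply the conditional probabilities along one path through the map."""
    p = 1.0
    for _src, _dst, prob in edges:
        p *= prob
    return p

# One hypothetical path: AGI is developed -> alignment succeeds -> good outcome
path = [
    ("start", "agi_developed", 0.8),
    ("agi_developed", "alignment_succeeds", 0.5),
    ("alignment_succeeds", "good_outcome", 0.9),
]

# Adjusting any single "slider" (edge probability) rescales the whole path
print(round(path_probability(path), 3))  # 0.8 * 0.5 * 0.9 = 0.36
```

Changing one edge's probability, as the tool's sliders do, proportionally changes every downstream scenario that passes through it.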
Nov 27, 2024: US Edu department issues new AI implementation guidance
The U.S. Department of Education has issued comprehensive guidance to help educational institutions avoid discriminatory uses of artificial intelligence, marking a significant step in protecting student civil rights in an increasingly AI-enabled education system. Key policy announcement: The Office for Civil Rights (OCR) released a 16-page document detailing scenarios where AI implementation in schools could violate federal civil rights laws.
- The guidance provides detailed examples across three major civil rights laws: Title VI, Title IX, and Section 504/Title II of the ADA
- The document aims to help schools understand and prevent discriminatory AI practices before they occur
- Catherine Lhamon, assistant...
Nov 27, 2024: Call for evaluators: EU seeks experts for AI risk assessment workshop
The European AI Office is organizing a specialized workshop to advance the evaluation of general-purpose AI models, particularly focusing on systemic risks under the EU AI Act framework. Event Overview: The European AI Office will host an online workshop on December 13, 2024, bringing together experts to discuss evaluation methodologies for general-purpose AI models and their associated systemic risks.
- The workshop aims to gather insights from leading evaluators and contribute to developing robust evaluation frameworks
- Selected participants will have the opportunity to present their approaches and share best practices
- The event is exclusively designed for specialists in AI evaluation
Key...
Nov 26, 2024: Register now: Dec. ‘Ideathon’ to discuss assignment of legal responsibility in AI era
The AI & Liability Ideathon brings together legal experts, researchers, and developers for a collaborative two-week event aimed at developing innovative solutions for artificial intelligence liability challenges. Event Overview: The Ideathon, scheduled for December 7, 2024, will culminate in a presentation evening where teams share their proposals for addressing AI liability issues.
- The event will primarily take place on the AI-Plans Discord platform
- Final proposals will be published on AI-Plans, with the top three selected through peer review
- The presentation evening is open to all, including non-participants
Participant Diversity: The event welcomes a broad range of professionals and stakeholders to...
Nov 26, 2024: What AI bias is and how to prevent it
The critical challenge: Artificial intelligence (AI) bias represents a growing concern for organizations as they seek to develop fair and effective AI systems while avoiding the perpetuation of existing societal prejudices at scale. Core context and implications: AI bias occurs when machine learning systems produce unfair or discriminatory outcomes, often reflecting historical biases present in training data or system design.
- AI governance, which involves directing and monitoring an organization's AI activities, plays a crucial role in identifying and addressing potential biases
- While AI has the potential to help identify and reduce human biases, it can paradoxically amplify these biases by...
Nov 26, 2024: Right to Repair: The growing movement demanding more transparency from AI models
The growing prevalence of artificial intelligence systems has sparked a public backlash, leading to calls for greater transparency and control over how AI technologies interact with personal data and daily life. Current landscape: Public sentiment toward artificial intelligence has shifted significantly toward skepticism and concern, particularly regarding unauthorized use of personal data.
- The New York Times initiated legal action against OpenAI and Microsoft over copyright infringement in December 2023
- Nvidia faces a class action lawsuit from authors concerning alleged unauthorized use of copyrighted materials for AI training
- Actress Scarlett Johansson confronted OpenAI over the similarity between their ChatGPT voice model...
Nov 26, 2024: Arizona adopts new AI policy framework
The state of Arizona continues to refine and expand its generative artificial intelligence (GenAI) policies as it embraces AI technology across various government functions. Policy evolution and oversight: Arizona has made significant updates to its statewide GenAI policy framework, initially established in March 2023, to address the rapidly changing technological landscape.
- The state's revised policies focus on three key areas: the role of the State Data and Analytics Office, enhanced data governance, and detailed guidelines for agency and employee responsibilities
- A newly formed AI Steering Committee, announced by Gov. Katie Hobbs, will guide future policy development and identify potential AI...
Nov 24, 2024: SB-1047, ChatGPT and the future of AI regulation
The landscape of artificial intelligence regulation and safety has become increasingly complex as governments grapple with oversight of rapidly advancing AI technologies. Legislative milestone: California's ambitious AI safety bill SB-1047, introduced in February 2024, marked a significant attempt to establish comprehensive oversight of advanced artificial intelligence systems.
- The bill sought to regulate "frontier AI models" based on specific thresholds of computing power and development costs
- After 11 rounds of amendments, the legislation reached Governor Newsom's desk before ultimately being vetoed
- The proposed regulations would have created new requirements for companies developing large AI models
Industry divide: The bill exposed deep...
Nov 22, 2024: How to balance bold, responsible and successful AI deployment
The growing adoption of generative AI (GenAI) among major organizations presents both significant opportunities and complex challenges, as leaders seek to balance rapid implementation with responsible deployment. Current landscape and adoption trends: Recent KPMG survey data reveals strong momentum in enterprise AI implementation across major organizations.
- 71% of surveyed leaders are incorporating GenAI data into decision-making processes
- 52% report that AI technology is influencing their competitive positioning
- 47% are leveraging AI to identify new revenue opportunities
- 54% of executives anticipate GenAI supporting new business models
Key challenges and concerns: Organizations must navigate significant hurdles while implementing AI technologies.
- Workforce impact...
Nov 21, 2024: IEEE unveils new standard to assess AI system trustworthiness
The IEEE Standards Association has unveiled a new unified specification for evaluating and certifying AI systems' trustworthiness, marking a significant advancement in global AI governance standards. Key framework development: The Joint Specification V1.0 represents a collaborative effort between IEEE, Positive AI, IRT SystemX, and VDE to create a comprehensive assessment system for artificial intelligence.
- The specification combines elements from IEEE CertifAIEd™, VDE VDESPEC 90012, and the Positive AI framework
- This unified approach aims to streamline AI evaluation processes worldwide while promoting innovation and competitiveness
- The framework is designed to align with the 2024 EU AI Act requirements and ethical guidelines...
Nov 21, 2024: In Trump’s shadow: Nations convene in SF to tackle global AI safety
International cooperation on artificial intelligence safety and oversight took center stage at a significant gathering in San Francisco, marking a crucial step toward establishing global standards for AI development and deployment. Key summit details: The Network of AI Safety Institutes, comprising 10 nations, convened at San Francisco's Presidio to forge common ground on AI testing and regulatory frameworks.
- Representatives from Australia, Canada, the EU, France, Japan, Kenya, Singapore, South Korea, and the UK participated in the discussions
- U.S. Commerce Secretary Gina Raimondo delivered the keynote address, emphasizing American leadership in AI safety while acknowledging both opportunities and risks
- The consortium...
Nov 20, 2024: AI scholar Gary Marcus calls for new regulatory agency to oversee AI
Recent developments in artificial intelligence have prompted growing calls for regulatory oversight, with AI researcher Gary Marcus making a compelling case for dedicated government supervision in his new book "Taming Silicon Valley." Current AI risks and challenges: Marcus identifies twelve pressing dangers associated with current AI technologies, particularly focusing on generative AI systems like ChatGPT.
- Mass-produced misinformation and deepfake scams represent immediate threats to public discourse and security
- Intellectual property theft and privacy violations pose significant risks to individuals and businesses
- Silicon Valley's practices often involve misleading the public about both AI capabilities and associated risks
Proposed regulatory framework: A...
Nov 20, 2024: HBR: AI risk management needs collective team wisdom
The rise of generative AI presents organizations with both unprecedented opportunities and significant challenges in implementing this transformative technology safely and effectively. Current risk management landscape: Most organizations have implemented basic risk mitigation strategies for generative AI through policies and critical thinking protocols.
- Companies typically rely on formal usage guidelines and individual assessment of AI outputs
- These traditional approaches, while necessary, may not be sufficient given the complex and evolving nature of AI technology
- Organizations need more robust frameworks to address AI-related challenges, including accuracy issues, hallucinations, and inherited biases
The team-based judgment framework: A third layer of risk management...
Nov 18, 2024: For AI safety to be effective, we need a much more proactive framework
The future of AI safety and governance hinges on developing proactive detection and response mechanisms, with particular focus on emerging risks like bioweapons, recursive self-improvement, and autonomous replication. Reactive vs. proactive approaches: Traditional reactive if-then planning for AI safety waits for concrete evidence of harm before implementing protective measures, which could prove dangerously inadequate for managing catastrophic risks.
- Reactive triggers typically respond to demonstrable harm, such as AI-assisted bioweapons causing damage or unauthorized AI systems causing significant real-world problems
- While reactive approaches are easier to justify to stakeholders, they may allow catastrophic damage to occur before protective measures are implemented...
Nov 18, 2024: How the Copyright Clearance Center thinks about responsible AI
The Copyright Clearance Center (CCC) is pioneering new approaches to manage copyrighted content usage in artificial intelligence systems, addressing a critical challenge at the intersection of publishing and AI technology. Key initiative: CCC has developed the industry's first collective licensing solution for AI-related content use, following 18 months of consultation with stakeholders.
- The solution modifies CCC's existing annual blanket license to incorporate AI reuse rights for internal purposes
- The new licensing framework launched in mid-July 2024, representing a significant step forward in balancing AI innovation with copyright protection
- The license operates globally, particularly benefiting multinational companies requiring worldwide content access...
Nov 18, 2024: More powerful AI models require better AI safety benchmarks
The advancement of artificial intelligence capabilities has created an urgent need to evaluate and benchmark AI safety measures to protect society from potential risks. Core assessment framework: The Centre pour la Sécurité de l'IA (CeSIA) has developed a systematic approach to evaluate AI safety benchmarks based on risk probability and severity.
- The framework multiplies the probability of risk occurrence by estimated severity to calculate expected impact
- Current benchmarking methods are rated on a 0-10 scale to determine their effectiveness in identifying risky AI systems
- This analysis helps prioritize which safety benchmarks would provide the greatest benefit to humanity
Priority risk...
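The expected-impact calculation described above reduces to a one-line formula: probability of occurrence times estimated severity. The sketch below illustrates how such a ranking works in principle; the risk names, probabilities, and severity scores are invented for illustration and are not CeSIA's actual ratings.

```python
# Expected impact = probability of the risk occurring x estimated severity,
# as in CeSIA's framework. All numbers below are illustrative, not CeSIA's.

def expected_impact(probability, severity):
    return probability * severity

# Hypothetical risks: (name, probability of occurrence, severity score)
risks = [
    ("misinformation", 0.9, 4.0),
    ("bioweapon_uplift", 0.05, 9.5),
    ("autonomous_replication", 0.02, 9.0),
]

# Rank risks by expected impact to prioritize which benchmarks matter most
ranked = sorted(risks, key=lambda r: expected_impact(r[1], r[2]), reverse=True)
for name, p, s in ranked:
    print(f"{name}: {expected_impact(p, s):.2f}")
```

Note how a high-probability, moderate-severity risk can outrank a low-probability, high-severity one under this metric, which is exactly the kind of prioritization question the framework is designed to surface.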
Nov 17, 2024: Experts react to DHS guidelines for secure AI in critical infrastructure
The U.S. Department of Homeland Security has introduced a new framework to safeguard artificial intelligence applications within critical infrastructure systems, marking a significant step in federal oversight of AI technology deployment. Framework overview: The Department of Homeland Security's initiative represents a collaborative effort to establish guidelines for secure AI implementation in critical infrastructure sectors.
- The framework emerged from extensive consultation with diverse stakeholders, including cloud service providers, AI developers, infrastructure operators, and civil society organizations
- Secretary Mayorkas established an Artificial Intelligence Safety and Security Board to guide the development of these protective measures
- The guidelines aim to create standardized practices for...
Nov 16, 2024: Trump revoking Biden’s AI executive order will cause chaos, experts predict
The incoming Trump administration's expected repeal of President Biden's AI executive order in 2025 could significantly impact both AI development and enterprise adoption of artificial intelligence technologies. The current landscape: Biden's executive order established government oversight offices and encouraged AI model developers to implement safety standards, creating a framework for responsible AI development.
- The order focused primarily on model developers while also affecting enterprise AI adoption and implementation
- Companies aligned with Trump, such as Elon Musk's xAI, may benefit from decreased regulation
- Enterprises could face challenges including fragmented regulations and reduced data transparency
Regulatory fragmentation concerns: Without federal oversight, states...
Nov 14, 2024: DHS releases AI adoption guidelines for critical infrastructure
AI integration in critical U.S. infrastructure is receiving new federal guidance as the Department of Homeland Security releases a comprehensive framework for balancing innovation with security across essential sectors. Framework overview: The Department of Homeland Security has introduced the "Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure" to guide the safe implementation of AI across vital sectors, including energy, water, and telecommunications.
- The framework addresses three core risk areas: AI-driven attacks, targeted attacks on AI systems, and design flaws
- DHS Secretary Alejandro N. Mayorkas developed the framework in collaboration with the new AI Safety and Security Board
- The...
Nov 14, 2024: AI governance market to grow 30% annually, Forrester report says
The rapid growth and adoption of artificial intelligence across industries is driving substantial investment in governance solutions to ensure responsible AI deployment and regulatory compliance. Market trajectory and scope: The global commercial AI governance software market is expected to grow significantly through 2030, with projected spending reaching $15.8 billion and representing 7% of total AI software expenditure.
- Market analysts predict a compound annual growth rate (CAGR) of 30% from 2024 to 2030
- The expansion reflects growing organizational needs to maintain AI system integrity while meeting regulatory requirements
- Current governance solutions encompass multiple areas, including model oversight, data management, privacy protection,...
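A 30% CAGR over the 2024-2030 window compounds to roughly a 4.8x market expansion. A quick check of the arithmetic behind the projection; the implied 2024 base value is derived here for illustration, not a figure quoted from the Forrester report:

```python
# Compound annual growth: value_n = value_0 * (1 + cagr) ** years.
# Forrester projects a 30% CAGR from 2024 to 2030 and $15.8B spend in 2030.

cagr = 0.30
years = 2030 - 2024  # 6 compounding periods

growth_multiple = (1 + cagr) ** years
print(f"Growth multiple over {years} years: {growth_multiple:.2f}x")

# Implied 2024 base (derived from the two figures above, not quoted anywhere)
implied_2024_base = 15.8 / growth_multiple
print(f"Implied 2024 market size: ${implied_2024_base:.1f}B")
```

The compounding is what makes a seemingly modest annual rate produce a near-fivefold expansion in six years.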
Nov 13, 2024: ServiceNow launches AI governance tools for faster enterprise deployment
The rapid adoption of enterprise AI has created an urgent need for robust governance frameworks that can help organizations safely scale their AI implementations while maintaining compliance and reliability. Governance gap addressed: ServiceNow has unveiled new enterprise AI governance capabilities for its Now platform, designed to bridge the critical gap between experimental AI deployments and full-scale production environments.
- The new Now Assist Guardian provides an additional layer of protection against AI hallucination by analyzing and validating AI-generated outputs
- A specialized Now Assist Data Kit helps organizations manage and control their AI data infrastructure
- The Now Assist Analytics tool enables comprehensive...
Nov 12, 2024: How to put AI to use for a sustainable and ethical future
The rapid adoption of artificial intelligence across organizations presents both opportunities for societal advancement and potential risks that require careful consideration and management. Current state of AI adoption: AI technology offers promising capabilities to create value for society while supporting inclusivity and accessibility needs, but organizations must carefully balance benefits against potential drawbacks.
- Many organizations are rapidly implementing AI solutions without fully considering all implications
- AI has demonstrated particular value in supporting accessibility requirements and promoting equitable outcomes
- The technology's deployment requires thoughtful consideration of both positive and negative impacts
Key challenges and concerns: The implementation of AI systems has...