NTIA Recommends Monitoring AI Risks While Supporting Open-Weight Models for Innovation

The National Telecommunications and Information Administration (NTIA) has released a report supporting the widespread availability of powerful AI models whose weights are openly published, known as open-weight models, to promote innovation and accessibility. However, the report also calls for active monitoring of potential risks and outlines steps for collecting evidence, evaluating it, and taking action if warranted.

Key recommendations: The report recommends that the U.S. government refrain from restricting the availability of open model weights for currently available systems while actively monitoring for potential risks:

  • The government should develop an ongoing program to collect evidence of risks and benefits, evaluate that evidence, and act on those evaluations, including possible restrictions on model weight availability, if warranted.
  • Specific activities include performing research into the safety of powerful AI models and their downstream uses, supporting external research, and maintaining risk-specific indicators.

Evidence evaluation and action: The report outlines steps for evaluating evidence and acting upon it if necessary:

  • The government should develop and maintain thresholds of risk-specific indicators to signal a potential change in policy, reassess benchmarks and definitions for monitoring and action, and maintain professional capabilities in various domains to support evidence evaluation.
  • If a future evaluation of evidence determines that further action is needed, the government could place restrictions on access to models or engage in other risk mitigation measures as deemed appropriate.

Balancing innovation and risk mitigation: The NTIA’s recommendations aim to balance promoting innovation and access to AI technology with positioning the U.S. government to respond quickly to risks that may arise from future models:

  • Open-weight models allow developers to build upon and adapt previous work, broadening AI tools’ availability to small companies, researchers, nonprofits, and individuals.
  • The report’s recommendations are in line with President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which directed NTIA to review the risks and benefits of large AI models with widely available weights and develop policy recommendations to maximize those benefits while mitigating the risks.

Looking ahead: As AI technology continues to advance, the NTIA’s report provides a framework for the U.S. government to actively monitor and respond to potential risks associated with powerful AI models while still encouraging innovation and accessibility:

  • The ongoing monitoring program and evidence-based approach outlined in the report will allow the government to adapt its policies as the AI landscape evolves and new risks or benefits emerge.
  • By recommending a proactive, yet measured approach to AI governance, the NTIA aims to position the U.S. as a leader in responsible AI development and deployment, balancing the need for innovation with the importance of mitigating potential risks.
Source: FACT SHEET: NTIA AI Report Calls for Monitoring, But Not Mandating Restrictions of Open AI Models (National Telecommunications and Information Administration)
