The U.S. Department of Commerce announced new guidance and software tools from the National Institute of Standards and Technology (NIST) to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems, marking 270 days since President Biden’s Executive Order on AI.

Key NIST releases: NIST finalized three guidance documents first issued in draft form for public comment in April and debuted two new products:

  • A draft guidance document from the U.S. AI Safety Institute intended to help mitigate risks stemming from generative AI and dual-use foundation models
  • A software package called Dioptra designed to measure how adversarial attacks can degrade the performance of an AI system
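To make the Dioptra item concrete, the sketch below shows, in plain PyTorch, the kind of measurement such a testbed is meant to automate: train a small classifier, then compare its accuracy on clean inputs with its accuracy on inputs perturbed by a simple FGSM-style adversarial attack. This is an illustrative toy example only; the dataset, model, and epsilon values are arbitrary, and nothing here reflects Dioptra's actual interface.

```python
# Illustrative only: this is NOT Dioptra's API. It sketches the kind of
# measurement such a testbed automates -- comparing a model's accuracy on
# clean inputs against inputs perturbed by a simple FGSM-style attack.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary classification data: clusters around +1 and -1 in 20 dimensions.
n, dim = 512, 20
x = torch.cat([torch.randn(n, dim) + 1.0, torch.randn(n, dim) - 1.0])
y = torch.cat([torch.ones(n, dtype=torch.long), torch.zeros(n, dtype=torch.long)])

model = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train a small classifier on the clean data.
for _ in range(200):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

def accuracy(inputs: torch.Tensor) -> float:
    """Fraction of examples the model classifies correctly."""
    with torch.no_grad():
        return (model(inputs).argmax(dim=1) == y).float().mean().item()

def fgsm(inputs: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Fast Gradient Sign Method: nudge each input in the direction that raises the loss."""
    adv = inputs.clone().requires_grad_(True)
    loss_fn(model(adv), y).backward()
    return (adv + epsilon * adv.grad.sign()).detach()

print(f"clean accuracy:          {accuracy(x):.3f}")
for eps in (0.1, 0.5, 1.0):
    print(f"accuracy at epsilon={eps}: {accuracy(fgsm(x, eps)):.3f}")
```

The point of the example is only the shape of the measurement: accuracy before versus after an attack, which is the kind of degradation the announcement says Dioptra is designed to quantify.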

Addressing AI risks and supporting innovation: The new guidance documents and testing platform aim to inform software creators about the unique risks posed by generative AI and help them develop mitigation strategies while supporting continued innovation in the field.

  • NIST Director Laurie E. Locascio emphasized the potentially transformational benefits of generative AI but also highlighted the significantly different risks it poses compared to traditional software.

Preventing misuse of dual-use AI foundation models: The AI Safety Institute’s draft guidelines outline voluntary best practices for foundation model developers to protect their models from being misused to cause deliberate harm:

  • The guidance offers seven key approaches for mitigating misuse risks, along with recommendations for implementing them transparently.
  • Practices aim to prevent models from enabling harmful activities like biological weapons development, offensive cyber operations, and generation of abusive or nonconsensual content.
  • NIST is accepting public comments on the draft through September 9, 2024.

Additional finalized guidance documents: NIST also released final versions of the three documents initially shared in draft form in April.

Broader implications: The release of these NIST guidance documents and tools represents a significant step in the U.S. government’s efforts to proactively address the risks and challenges posed by rapidly advancing AI technologies, particularly generative AI and powerful foundation models. By providing voluntary best practices, risk management frameworks, and testing capabilities, NIST aims to equip AI developers, users, and evaluators with resources to help mitigate potential harms and steer the technology in a safe, secure, and trustworthy direction.

However, some of the guidance is still in draft or early-release form and will require ongoing public and stakeholder input to refine and put into practice effectively. The complex, fast-moving nature of AI development will necessitate agile, adaptive approaches to risk management and governance. Broad participation and collaboration across sectors and international borders will be critical to developing impactful standards and practices. As the technology matures and is adopted in more domains, striking the right balance between mitigating risks and enabling beneficial innovation will remain a key challenge.

Source: Department of Commerce Announces New Guidance, Tools 270 Days Following President Biden’s Executive Order on AI
