The U.S. Department of Commerce announced new guidance and software tools from the National Institute of Standards and Technology (NIST) to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems, marking 270 days since President Biden’s Executive Order on AI.

Key NIST releases: NIST finalized three guidance documents that had been released in draft form for public comment in April, and debuted two products appearing for the first time:

  • A draft guidance document from the U.S. AI Safety Institute intended to help mitigate risks stemming from generative AI and dual-use foundation models
  • A software package called Dioptra designed to measure how adversarial attacks can degrade the performance of an AI system
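The article does not show Dioptra's interfaces, but the kind of measurement it performs can be illustrated with a minimal sketch: a fast-gradient-sign (FGSM-style) perturbation applied to a toy logistic-regression model, showing how accuracy degrades under adversarial attack. The data, model, and attack parameters below are illustrative assumptions, not Dioptra code.

```python
import numpy as np

# Illustrative sketch only -- NOT Dioptra's API. It demonstrates the kind of
# question Dioptra is built to answer: how much does an adversarial
# perturbation degrade a model's performance?

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters in 2D, labeled 0 and 1.
n = 200
X = np.vstack([rng.normal(-1, 1, (n, 2)), rng.normal(1, 1, (n, 2))])
y = np.array([0] * n + [1] * n)

# Train a logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def accuracy(inputs):
    p = 1 / (1 + np.exp(-(inputs @ w + b)))
    return np.mean((p > 0.5) == y)

# FGSM-style attack: nudge each input in the direction that increases the
# loss. For logistic loss, the input gradient is (p - y) * w.
eps = 0.5
p = 1 / (1 + np.exp(-(X @ w + b)))
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

clean_acc = accuracy(X)
adv_acc = accuracy(X_adv)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The gap between the two accuracy numbers is the degradation metric; a platform like Dioptra automates this kind of comparison across many models and attack types.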

Addressing AI risks and supporting innovation: The new guidance documents and testing platform aim to inform software creators about the unique risks posed by generative AI and help them develop mitigation strategies while supporting continued innovation in the field.

  • NIST Director Laurie E. Locascio emphasized the potentially transformational benefits of generative AI but also highlighted the significantly different risks it poses compared to traditional software.

Preventing misuse of dual-use AI foundation models: The AI Safety Institute’s draft guidelines outline voluntary best practices for foundation model developers to guard against their models being misused to cause deliberate harm:

  • The guidance offers seven key approaches and recommendations for mitigating misuse risks and enabling transparency in their implementation.
  • Practices aim to prevent models from enabling harmful activities like biological weapons development, offensive cyber operations, and generation of abusive or nonconsensual content.
  • NIST is accepting public comments on the draft through September 9, 2024.

Additional finalized guidance documents: NIST also released final versions of three documents initially shared in draft form in April, covering generative AI risk management, secure software development practices for generative AI and dual-use foundation models, and a plan for global engagement on AI standards.

Broader implications: The release of these NIST guidance documents and tools represents a significant step in the U.S. government’s efforts to proactively address the risks and challenges posed by rapidly advancing AI technologies, particularly generative AI and powerful foundation models. By providing voluntary best practices, risk management frameworks, and testing capabilities, NIST aims to equip AI developers, users, and evaluators with resources to help mitigate potential harms and steer the technology in a safe, secure, and trustworthy direction.

However, the guidance is still in draft or early release form and will require ongoing public and stakeholder input to refine and put into practice effectively. The complex, fast-moving nature of AI development will necessitate agile, adaptive approaches to risk management and governance. Broad participation and collaboration across sectors and international borders will be critical to develop impactful standards and practices. As the technology continues to mature and be adopted in more domains, striking the right balance between mitigating risks and enabling beneficial innovation will be a key challenge.

Department of Commerce Announces New Guidance, Tools 270 Days Following President Biden’s Executive Order on AI
