
The U.S. Department of Commerce announced new guidance and software tools from the National Institute of Standards and Technology (NIST) to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems, marking 270 days since President Biden’s Executive Order on AI.

Key NIST releases: NIST finalized three guidance documents first circulated in draft form for public comment in April, and debuted two new products:

  • A draft guidance document from the U.S. AI Safety Institute intended to help mitigate risks stemming from generative AI and dual-use foundation models
  • A software package called Dioptra designed to measure how adversarial attacks can degrade the performance of an AI system
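
To make the second item concrete: Dioptra's stated purpose is to measure how much an adversarial attack degrades a model's performance. The sketch below illustrates that kind of measurement on a toy linear classifier using an FGSM-style perturbation; it does not use Dioptra's actual API (which is not described here), and all names in it are illustrative.

```python
# Hedged sketch: not Dioptra's API. This shows the kind of comparison the
# tool automates -- a model's accuracy on clean inputs vs. on inputs
# perturbed by a Fast Gradient Sign Method (FGSM)-style attack.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: the label is the sign of the first feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

# A trivially "trained" linear model that classifies by the first feature.
w = np.array([1.0, 0.0])

def predict(X):
    return (X @ w > 0).astype(int)

def fgsm(X, y, eps=0.5):
    # For a linear score, the input gradient is just w; step each point
    # against its correct class to degrade accuracy.
    sign = np.where(y == 1, -1.0, 1.0)[:, None]
    return X + eps * sign * np.sign(w)

clean_acc = (predict(X) == y).mean()
adv_acc = (predict(fgsm(X, y)) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The gap between the two accuracy numbers is the degradation metric: a robust model keeps it small, while a brittle one collapses under even small perturbations.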

Addressing AI risks and supporting innovation: The new guidance documents and testing platform aim to inform software creators about the unique risks posed by generative AI and help them develop mitigation strategies while supporting continued innovation in the field.

  • NIST Director Laurie E. Locascio emphasized the potentially transformational benefits of generative AI but also highlighted the significantly different risks it poses compared to traditional software.

Preventing misuse of dual-use AI foundation models: The AI Safety Institute’s draft guidelines outline voluntary best practices for foundation model developers to guard their models against misuse that could cause deliberate harm:

  • The guidance offers seven key approaches for mitigating misuse risks, along with recommendations for making their implementation transparent.
  • Practices aim to prevent models from enabling harmful activities like biological weapons development, offensive cyber operations, and generation of abusive or nonconsensual content.
  • NIST is accepting public comments on the draft through September 9, 2024.

Additional finalized guidance documents: NIST also released final versions of three documents initially shared in draft form in April.

Broader implications: The release of these NIST guidance documents and tools represents a significant step in the U.S. government’s efforts to proactively address the risks and challenges posed by rapidly advancing AI technologies, particularly generative AI and powerful foundation models. By providing voluntary best practices, risk management frameworks, and testing capabilities, NIST aims to equip AI developers, users, and evaluators with resources to help mitigate potential harms and steer the technology in a safe, secure, and trustworthy direction.

However, the guidance is still in draft or early release form and will require ongoing public and stakeholder input to refine and put into practice effectively. The complex, fast-moving nature of AI development will necessitate agile, adaptive approaches to risk management and governance. Broad participation and collaboration across sectors and international borders will be critical to develop impactful standards and practices. As the technology continues to mature and be adopted in more domains, striking the right balance between mitigating risks and enabling beneficial innovation will be a key challenge.
