
How San José Is Using the NIST AI RMF to Build Trustworthy AI


As artificial intelligence (AI) becomes increasingly embedded in government operations, cities across the U.S. face a critical challenge: ensuring these systems remain fair, safe, transparent, and trustworthy. The City of San José, California, one of the country’s leading technology hubs, has emerged as an early model for responsible public-sector AI. San José is one of the first municipalities to formally evaluate its AI programs using the NIST AI Risk Management Framework (AI RMF). Through a collaboration with the National Institute of Standards and Technology, the city applied the AI RMF to assess its AI governance maturity, identify risks, and strengthen safeguards across all AI-related activities.

This NIST AI RMF case study reveals not only what San José is doing well, but also where public-sector organizations must continue improving to deploy trustworthy, risk-aware AI systems.

 

Why San José Turned to the NIST AI RMF

San José’s Office of Civic Innovation and Technology manages an expanding portfolio of AI-powered tools across multiple city departments. These systems support diverse applications, from predictive analytics for public safety to workflow automation and constituent service platforms provided by third-party vendors. As AI became increasingly embedded in day-to-day operations, city leaders recognized the risks of deploying AI without a unified, organization-wide governance framework.

To promote responsible AI use and maintain public trust, the city adopted the NIST AI Risk Management Framework (AI RMF), a voluntary national framework designed to help organizations develop, deploy, and oversee trustworthy AI systems. By applying the AI RMF’s four core functions (Map, Measure, Manage, and Govern), San José systematically assessed where AI was in use, how risks were being managed, and which safeguards ensured accountability and fairness.
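
As a rough illustration of what the Map function can look like in practice, the sketch below models a single entry in an AI system inventory, tagged with the kind of information each core function asks about. The field names and the example system are assumptions made for illustration; they are not San José’s actual inventory schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical inventory record for one AI system, organized around the
# AI RMF core functions. Field names are illustrative assumptions, not an
# official schema used by San José.
@dataclass
class AISystemRecord:
    name: str                        # system or vendor product name
    department: str                  # owning city department
    vendor: Optional[str]            # None if built in-house
    purpose: str                     # Map: intended use and deployment context
    affected_groups: list            # Map: who the system impacts
    metrics: list = field(default_factory=list)      # Measure: tracked risk metrics
    mitigations: list = field(default_factory=list)  # Manage: active safeguards
    accountable_owner: str = ""      # Govern: named person responsible

# Example entry (hypothetical, for illustration only).
example = AISystemRecord(
    name="Constituent service chatbot",
    department="Office of Civic Innovation and Technology",
    vendor="ExampleVendor Inc.",
    purpose="Answer routine resident questions about city services",
    affected_groups=["residents", "311 call center staff"],
    metrics=["monthly response-accuracy audit", "complaint rate"],
    mitigations=["human escalation path", "quarterly vendor review"],
    accountable_owner="Digital Services program manager",
)
print(example.name, "-", example.accountable_owner)
```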

 

San José’s Self-Assessment Using the NIST AI Risk Management Framework

To evaluate its AI programs, San José assembled a cross-departmental working group and conducted a comprehensive self-assessment using the NIST AI Risk Management Framework (AI RMF). The assessment rated maturity across 72 subcategories, spanning technical, organizational, and governance dimensions.

Key AI RMF Maturity Scores (out of 4.0):

The city’s highest score was in “Map,” reflecting a strong awareness of where and how AI systems are being used across departments. The lowest score, “Measure,” highlighted gaps in formal risk-tracking processes, largely because most AI systems in San José are procured from third-party vendors rather than built in-house. This underscores the importance of robust metrics and oversight in achieving fully trustworthy AI.
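
To make the scoring mechanics concrete, here is a minimal sketch of how subcategory ratings on a 0–4 scale could be rolled up into function-level maturity scores. The ratings below are placeholder values for illustration only; they are not the city’s published results.

```python
from collections import defaultdict
from statistics import mean

# Placeholder subcategory ratings on a 0-4 scale, keyed by (function, subcategory).
# Values are illustrative only and do not reflect San José's actual scores.
ratings = {
    ("Map", "MAP 1.1"): 3.0,
    ("Map", "MAP 2.3"): 3.5,
    ("Measure", "MEASURE 2.5"): 1.5,
    ("Measure", "MEASURE 3.1"): 2.0,
    ("Manage", "MANAGE 1.2"): 2.5,
    ("Govern", "GOVERN 1.1"): 2.5,
}

# Roll subcategory ratings up to an average maturity score per function.
by_function = defaultdict(list)
for (function, _subcategory), score in ratings.items():
    by_function[function].append(score)

for function, scores in by_function.items():
    print(f"{function}: {mean(scores):.2f} / 4.0")
```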

 

 

Identified Gaps and Opportunities for Growth Using the NIST AI Risk Management Framework

San José’s AI RMF self-assessment revealed several organizational and technical gaps that could limit the city’s ability to use AI responsibly. While awareness of AI systems was high, the city lacked formal structures and tools to ensure consistent oversight, transparency, and accountability across departments.

Key Challenges Identified:

These insights motivated city leaders to implement meaningful changes, including:

By addressing these gaps, San José is building a more accountable, transparent, and citizen-focused approach to AI governance, setting a benchmark for other municipalities aiming to implement the NIST AI Risk Management Framework effectively.

 

Documenting and Reviewing Findings with the NIST AI Risk Management Framework

To conduct its self-assessment, the City of San José reviewed all 72 subcategories of the NIST AI Risk Management Framework (AI RMF), evaluating how existing policies, practices, and systems aligned with the framework’s principles for trustworthy AI.

Rather than relying on a formal worksheet or external tool, the city adopted a structured, collaborative approach, bringing together cross-functional stakeholders to examine each area through real-world use cases. This hands-on process highlighted key strengths, including strong system awareness, and uncovered opportunities for improvement in AI governance, accountability, and risk measurement practices.
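
One lightweight way to capture this kind of subcategory-by-subcategory review is a simple findings record. The structure below is a hypothetical sketch of what such a record could hold; the city has not published the exact format it used.

```python
from dataclasses import dataclass

# Hypothetical findings record for one AI RMF subcategory review.
# These fields are an assumption about what a minimal review log could
# capture, not the city's actual worksheet.
@dataclass
class SubcategoryFinding:
    subcategory: str        # e.g., "MEASURE 2.5"
    current_practice: str   # what the organization does today
    evidence: str           # use case or document examined during review
    maturity: float         # self-assessed rating on a 0-4 scale
    gap: str                # what is missing or inconsistent
    action: str             # planned improvement and its owner

finding = SubcategoryFinding(
    subcategory="MEASURE 2.5",
    current_practice="Vendor AI systems reviewed informally at contract renewal",
    evidence="Third-party chatbot contract and usage reports",
    maturity=1.5,
    gap="No recurring, documented risk metrics for vendor-supplied AI",
    action="Define quarterly vendor reporting requirements (procurement lead)",
)
print(finding.subcategory, finding.maturity)
```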

 

What Other Organizations Can Learn from San José’s NIST AI Risk Management Framework Experience

San José’s case study serves as a practical model for how both public-sector and private organizations can leverage the NIST AI Risk Management Framework (AI RMF) to assess and improve their AI programs.

Key Lessons Include:

  1. Start with what you have: Organizations don’t need to be developing cutting-edge AI to benefit from the AI RMF. Even evaluating existing vendor systems can provide valuable insights into AI risks and governance practices.
  2. Focus on governance gaps: Many of the most significant risks arise from unclear ownership, inconsistent procurement policies, and lack of oversight, rather than purely technical issues.
  3. Use the framework as a maturity tool: The NIST AI RMF is not just for system design; it is also effective for strategic planning, benchmarking, and building internal AI capacity.

By following these lessons, organizations can take a structured, risk-aware approach to AI governance, improving accountability, transparency, and public trust.
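
Building on the second lesson above, the sketch below shows a hypothetical procurement-stage gate that withholds approval of a vendor AI purchase until ownership, documentation, risk review, and monitoring questions are answered. The required items and names are assumptions for illustration, not a published policy.

```python
# Hypothetical pre-procurement gate for a vendor AI system. The required
# items below are illustrative assumptions, not an official city checklist.
REQUIRED_ITEMS = [
    "accountable_owner",     # named person responsible for the system
    "vendor_documentation",  # model or system documentation from the vendor
    "risk_review",           # completed internal risk assessment
    "monitoring_plan",       # how performance and harms will be tracked
]

def procurement_gate(submission: dict) -> tuple:
    """Return (approved, missing_items) for a proposed AI purchase."""
    missing = [item for item in REQUIRED_ITEMS if not submission.get(item)]
    return (len(missing) == 0, missing)

approved, missing = procurement_gate({
    "accountable_owner": "Digital Services program manager",
    "vendor_documentation": "Vendor model card v2",
    "risk_review": "",  # blank: risk review not yet completed
    "monitoring_plan": "Quarterly accuracy and complaint-rate reporting",
})
print(approved, missing)  # -> False ['risk_review']
```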

 

Final Thoughts: A Blueprint for AI Accountability with the NIST AI Risk Management Framework

San José’s AI RMF self-assessment wasn’t about showcasing perfection; it was about acknowledging uncertainty, identifying risks, and creating a structured path toward greater accountability. This approach represents the foundation of responsible AI governance: proactively assessing where safeguards are needed and identifying areas for improvement before problems arise.

This case study highlights that trustworthy AI is not just a technical objective; it is a strategic commitment to transparency, fairness, and public trust. The NIST AI Risk Management Framework (AI RMF) provides a practical, scalable roadmap for turning that commitment into action. Whether for city governments, federal agencies, or private enterprises, the framework guides organizations in managing AI risks across varied use cases, maturity levels, and organizational structures.

Importantly, implementing the AI RMF does not require building AI systems from scratch. It is equally applicable for organizations relying on third-party vendors or off-the-shelf solutions, as long as robust governance structures are in place to monitor, measure, and manage AI systems responsibly.

Need help implementing the AI RMF or assessing your AI risk posture?
Contact RSI Security today to receive expert guidance on AI governance frameworks, compliance strategies, and risk mitigation, tailored to your organization’s unique needs.


Download Our NIST Datasheet

