The NIST AI RMF "MEASURE" function is where governance gets technical.
But not "write code" technical. NO. I'm talking about "ask the right questions" technical.

MEASURE = assessing and benchmarking AI risks.

It covers 4 categories:

→ Risk Measurement: How do we quantify AI risk?
→ Validation: Is the model performing as expected?
→ Testing & Evaluation: Have we tested for bias, security, and robustness?
→ Documentation: Can we explain our testing methodology?

If you're a Governance, Risk & Compliance (GRC) professional, you already know how to measure risk. You've built risk heat maps, scored likelihood and impact, and tracked KRIs.

The difference with AI? You need to ask data science teams questions like:

"What metrics are you using to measure model accuracy?"
"Have you tested for disparate impact across protected classes?"
"What's your false positive/false negative rate, and is it acceptable?"
"How do you monitor for model drift in production?"

These aren't technical questions. They're governance questions applied to AI.

Where are you in your AI Governance journey?

See you in the comment section.
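P.S. Two of the metrics behind those questions can be sketched in a few lines. This is a hypothetical illustration with made-up numbers, not a NIST-prescribed calculation; the 0.8 threshold mentioned is the common "four-fifths rule" convention from employment-selection guidance, not an AI RMF requirement.

```python
# Hypothetical confusion-matrix counts and selection rates for illustration only.

def fp_fn_rates(tp, fp, tn, fn):
    """False positive rate = FP / (FP + TN); false negative rate = FN / (FN + TP)."""
    return fp / (fp + tn), fn / (fn + tp)

def disparate_impact_ratio(rate_protected, rate_reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    return rate_protected / rate_reference

fpr, fnr = fp_fn_rates(tp=80, fp=10, tn=90, fn=20)
print(f"FPR={fpr:.2f}, FNR={fnr:.2f}")  # FPR=0.10, FNR=0.20

di = disparate_impact_ratio(rate_protected=0.30, rate_reference=0.50)
print(f"Disparate impact ratio={di:.2f}")  # 0.60 -- below the 0.8 four-fifths rule
```

The governance question isn't how to compute these numbers; it's whether a 10% false positive rate or a 0.60 ratio is acceptable for this use case. That judgment call is yours.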