The NIST AI RMF "MEASURE" function is where governance gets technical.
Not "write code" technical. "Ask the right questions" technical.
MEASURE = assessing and benchmarking AI risks.
It covers 4 categories:
→ Risk Measurement: How do we quantify AI risk?
→ Validation: Is the model performing as expected?
→ Testing & Evaluation: Have we tested for bias, security, and robustness?
→ Documentation: Can we explain our testing methodology?
If you are a Governance, Risk & Compliance (GRC) professional, you already know how to measure risk. You've built risk heat maps, scored likelihood/impact, and tracked KRIs.
The difference with AI?
You need to ask data science teams questions like:
"What metrics are you using to measure model accuracy?"
"Have you tested for disparate impact across protected classes?"
"What's your false positive/false negative rate, and is that acceptable?"
"How do you monitor for model drift in production?"
These aren't technical questions. They're governance questions applied to AI.
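Two of those questions boil down to simple arithmetic you can sanity-check yourself. Here is a minimal sketch, using invented toy labels, predictions, and group assignments purely for illustration (the "four-fifths rule" threshold shown is a common screening heuristic for disparate impact, not a legal determination):

```python
# Hypothetical toy data for a binary classifier
# (1 = flagged, 0 = not flagged), plus a protected-class group label.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
group  = ["A", "A", "B", "B", "A", "B", "A", "B", "A", "B"]

# False positive / false negative rates
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
fpr = fp / y_true.count(0)  # how often we wrongly flag a negative
fnr = fn / y_true.count(1)  # how often we miss a true positive

# Disparate impact ratio: compare selection (flag) rates across groups.
# A common screening heuristic is the four-fifths rule: ratio >= 0.8.
rate = {}
for g in ("A", "B"):
    flagged = sum(p for p, grp in zip(y_pred, group) if grp == g)
    rate[g] = flagged / group.count(g)
di_ratio = min(rate.values()) / max(rate.values())

print(f"FPR={fpr:.2f}  FNR={fnr:.2f}  disparate impact ratio={di_ratio:.2f}")
```

A GRC reviewer doesn't need to write this; they need to know these numbers exist, ask for them, and judge whether the rates and ratios are acceptable for the use case.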
Where are you in your AI Governance journey? See you in the comment section.
François B. Arthanas