Connecting poor AI governance to real-world outcomes is key to adding value with your AI governance work. Consider the following questions:
- “What’s the most dangerous employee AI behavior you cannot currently see or control?”
- “Have you had an incident, near miss, or executive concern tied to AI tool usage?”
- “What budget or policy action has already been triggered by this?”
These questions make CISOs and other stakeholders perk up, because now we have concrete real-world outcomes we can assess for likelihood and impact.
What do you think?
Are these questions useful? What others would you add?