Yesterday I talked to a friend of mine who works as an internal auditor. They said:
"I audit IT controls all day. How is AI governance different?"
My answer:
AI systems fail in ways traditional IT systems don't.
Traditional IT failure:
Server goes down → you lose availability
Database gets breached → you lose confidentiality
AI system failure:
Model makes biased hiring decisions → you face discrimination lawsuits
Chatbot hallucinates legal advice → you are liable for damages
Pricing algorithm violates fair lending laws → regulators fine you millions
The governance challenge isn't just "Is the system secure and available?"
It's:
"Is the training data representative?"
"Can we explain why the model made that decision?"
"What's our recourse when the AI screws up?"
This is why AI governance is its own discipline, and WHY internal auditors with traditional IT skills need to upskill.