Last week, Dr. Gorkem joined our paid AI GRC Practitioner cohort live to share the founding story and walk us through the platform.
Our Cyber Pros Training cohort of the AI GRC Practitioner Program (https://cy-ber.pro/ai-grc-practitioner) had the opportunity to learn directly from Gorkem Cetin, co-founder of VerifyWise, in a special session on AI governance in practice.

Dr. Gorkem shared the story behind why VerifyWise was founded, the real-world gaps it was built to address, and why AI governance needs to move beyond theory, frameworks, and policy documents.

A big thank-you to Dr. Gorkem and the VerifyWise team for joining our cohort and sharing both the "why" behind the platform and the practical side of how AI governance gets done.

#AIGovernance #AIGRC #RiskManagement #Compliance #CyberPros #VerifyWise #ResponsibleAI #AICompliance #GovernanceRiskCompliance
AI systems fail in ways traditional IT systems don't.
I talked to a friend yesterday who works as an internal auditor. He said: "I audit IT controls all day. How is AI governance different?"

My answer: AI systems fail in ways traditional IT systems don't.

Traditional IT failure:
Server goes down → you lose availability
Database gets breached → you lose confidentiality

AI system failure:
Model makes biased hiring decisions → you face discrimination lawsuits
Chatbot hallucinates legal advice → you are liable for damages
Pricing algorithm violates fair lending laws → regulators fine you millions

The governance challenge isn't just "Is the system secure and available?" It's:
"Is the training data representative?"
"Can we explain why the model made that decision?"
"What's our recourse when the AI screws up?"

This is why AI governance is its own discipline and WHY internal auditors with traditional IT skills need to be upskilling.
The NIST AI RMF "MEASURE" function is where governance gets technical.
But not "write code" technical. NO. I'm talking about "ask the right questions" technical.

MEASURE = assessing and benchmarking AI risks. It covers 4 categories:
→ Risk Measurement: How do we quantify AI risk?
→ Validation: Is the model performing as expected?
→ Testing & Evaluation: Have we tested for bias, security, robustness?
→ Documentation: Can we explain our testing methodology?

If you are a Governance, Risk & Compliance (GRC) professional, you already know how to measure risk. You've built risk heat maps, scored likelihood/impact, and tracked KRIs.

The difference with AI? You need to ask data science teams questions like:
"What metrics are you using to measure model accuracy?"
"Have you tested for disparate impact across protected classes?"
"What's your false positive/false negative rate, and is that acceptable?"
"How do you monitor for model drift in production?"

These aren't technical questions. They're governance questions applied to AI.

Where are you in your AI governance journey? See you in the comment section.
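To make two of those questions concrete, here is a minimal sketch (not from the post, and not any official NIST tooling) of how a team might compute false positive/negative rates and a disparate impact ratio from raw binary predictions. All names, data, and the 0.8 "four-fifths" threshold mentioned in the comment are assumptions for illustration.

```python
def fp_fn_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for 0/1 labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

def disparate_impact(y_pred, group):
    """Ratio of the lowest to the highest selection rate across groups.
    The 'four-fifths rule' commonly flags ratios below 0.8."""
    rates = {}
    for g in set(group):
        preds = [p for p, gg in zip(y_pred, group) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return min(rates.values()) / max(rates.values())

# Toy data: hypothetical hiring-model outputs for two groups "a" and "b".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

fpr, fnr = fp_fn_rates(y_true, y_pred)
print(fpr, fnr)                        # per-class error rates
print(disparate_impact(y_pred, group)) # selection-rate ratio between groups
```

A GRC professional doesn't need to write this code, but knowing that these numbers exist, and asking for them, is exactly the MEASURE conversation described above.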
AI Audit track
Is anyone focusing on, or has anyone considered, the deployed-side AI auditing angle instead of pure GRC? If yes, what does your training look like? Any certs you're considering or already have? Thanks!
Exam
Soooooo I didn't pass my Sec+ exam 😢 but I did set a date to retake it. Any good resources for hands-on learning?
Cyber Pros Community
skool.com/cyber-pros-community-9205
The #1 Free Community for Professionals Breaking Into AI Governance