Developers at your company use an LLM-powered coding assistant that auto-generates functions pulled into production via CI/CD. A recent audit reveals several generated functions contain hardcoded credentials and insecure deserialization patterns. What should the security manager prioritize FIRST?
A. Ban the AI coding assistant until the vendor eliminates hallucinated vulnerabilities
B. Require developers to manually review all AI-generated code before committing
C. Integrate automated AI security testing into the CI/CD pipeline to catch flaws pre-production
D. Report the insecure patterns to the LLM vendor for model fine-tuning