Four hours of documentation a week I'm no longer doing
Six weeks ago I built a KB article pipeline. My team closes 15-20 tickets a week where someone fixed a real problem, documented it in the ticket, and we have no article for it. That adds up: a year of good fixes trapped in closed tickets nobody can find.

A Python script hits the ServiceNow API, pulls 30 days of closed tickets, strips PII, and normalizes each one into structured JSON: symptom, environment, resolution, notes. Claude drafted the first version; I've been maintaining it since. The LLM stage reads that batch alongside the current article list. For each cluster it decides: does coverage exist, rate it 1-5, and draft a new article if the gap score clears a threshold. The threshold is a number I set in a reference file and can change without touching the code.

Six weeks in: 23 new articles published, 8 existing ones flagged for revision, and about 4 hours of documentation work avoided per week.

The 60-30-10 split here is about as clean as I've gotten it. Normalization, dedup, and article-list comparison are all deterministic, so those live in the script. The rubric and threshold logic live in the reference file. The LLM rates coverage quality and writes articles. The first version had the LLM doing normalization too; structure drifted week to week, and moving it into the script fixed that.

One thing to flag: it doesn't know when the technician notes are thin. Output quality follows input quality. I've also re-calibrated the coverage rubric twice; "quality" is harder to operationalize than I expected.

Anyone here run gap detection against a knowledge base outside IT? Sales playbooks, legal, anything with a lot of tribal knowledge. The structure should transfer, but I haven't tested it.
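For anyone curious what the deterministic normalization step looks like, here's a minimal sketch. The field names and PII patterns are illustrative (my real script handles more record types and redaction cases), but the shape is the same: regex-based redaction, then a fixed output schema.

```python
import re

# Illustrative PII patterns -- a real pipeline would cover more (names, hostnames, etc.)
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),     # email addresses
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),  # IPv4 addresses
]

def strip_pii(text: str) -> str:
    """Replace anything matching a PII pattern with a redaction marker."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def normalize_ticket(raw: dict) -> dict:
    """Map a raw ticket record into the fixed four-field schema.

    The source field names (short_description, close_notes, work_notes,
    cmdb_ci) are assumptions about the incident payload, not a guarantee
    of what your ServiceNow instance returns.
    """
    return {
        "symptom":     strip_pii(raw.get("short_description", "")),
        "environment": strip_pii(raw.get("cmdb_ci", "")),
        "resolution":  strip_pii(raw.get("close_notes", "")),
        "notes":       strip_pii(raw.get("work_notes", "")),
    }
```

Keeping the schema in code (not in the prompt) is what stopped the week-to-week structure drift for me.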
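The threshold-in-a-reference-file pattern is trivially simple, which is the point: re-calibrating never touches code. A hedged sketch, assuming a JSON reference file and assuming the gap score is derived as (max rating minus the LLM's 1-5 coverage rating) -- the post's actual scoring formula may differ:

```python
import json
from pathlib import Path

def load_rubric(path: str = "rubric.json") -> dict:
    # All tunables live in the reference file, e.g.:
    # {"max_rating": 5, "gap_threshold": 3}
    return json.loads(Path(path).read_text())

def needs_article(coverage_rating: int, rubric: dict) -> bool:
    """Draft a new article only when the coverage gap clears the threshold.

    gap = max rating - LLM's coverage rating (an assumed formula).
    """
    gap_score = rubric["max_rating"] - coverage_rating
    return gap_score >= rubric["gap_threshold"]
```

With that example rubric, a cluster rated 1 (barely covered) triggers a draft; a cluster rated 4 does not. Tightening coverage is a one-character change to the file.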
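And the deterministic dedup/clustering step, in its crudest possible form: group tickets by a normalized symptom key. My actual comparison is fuzzier than exact-match-after-normalization, so treat this as the skeleton of the idea rather than the implementation.

```python
from collections import defaultdict

def cluster_tickets(tickets: list[dict]) -> dict[str, list[dict]]:
    """Group normalized tickets that share a symptom key.

    Key = symptom lowercased with whitespace collapsed. Exact-match
    clustering is a deliberate oversimplification here; the point is
    that dedup stays deterministic and out of the LLM.
    """
    clusters: dict[str, list[dict]] = defaultdict(list)
    for ticket in tickets:
        key = " ".join(ticket["symptom"].lower().split())
        clusters[key].append(ticket)
    return dict(clusters)
```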