A 1983 Paper Predicted Everything Going Wrong With AI Tooling Today
Bainbridge's "Ironies of Automation" maps onto AI-assisted engineering with uncomfortable precision

Uwe Friedrichsen's two-part series on Lisanne Bainbridge's 1983 automation research caught attention on HN this week. Bainbridge identified a central irony: as automation expands, the tasks left to humans become more complex, yet humans spend less time practicing those tasks and so lose exactly the skills they need when the automation falls short. Applied to AI coding agents: the more you rely on Claude Code to write and modify code, the less you understand your own codebase. Researcher Margaret-Anne Storey recently described this as "cognitive debt" — the gap between what a team's AI-assisted code does and what the team actually understands about it. Steve Yegge has said he caps his AI-assisted coding at four hours a day because reviewing and steering AI output is more cognitively fatiguing than writing code himself. (Published December 2025; resurfaced on HN this week.)

Why this matters: This is not just theory; it is an accruing risk. Teams shipping AI-assisted systems quickly are building up cognitive debt that can surface later as unexpected outages, security issues, and failures no one can diagnose. The takeaway for engineers: understanding AI-generated code is part of the job, not optional cleanup. Asking the LLM for clear explanations alongside its technical output is one way to pay that debt down as you go.

Read more → https://www.ufried.com/blog/ironies_of_ai_2/