A QCon London talk argues AI has eliminated the foundational tasks that build engineering judgment, creating a long-term leadership pipeline crisis.
At QCon London in March 2026, an engineering leader gave a talk arguing that AI has automated the exact tasks (debugging, ticket-driven coding, small feature work) that historically built the judgment of senior engineers. Dario Amodei's claim that AI now writes 90% of code was challenged with lower measurements: Redwood Research estimates that roughly 50% of repository-committed code is AI-written, Google reports about 25%, and Microsoft and GitHub Copilot around 30%. The core thesis is not about current productivity gains but about the missing mechanism for developing engineers who can supervise AI systems they never learned to build themselves.
The tasks AI is absorbing — writing boilerplate, debugging, small feature implementation — are exactly the ones that build the mental models needed to catch AI errors at scale. Developers who never grind through those reps are accumulating a hidden skills debt they won't notice until they're asked to supervise a codebase they couldn't have built themselves. The 50% AI-committed-code figure means half your institutional knowledge is now generated by a system with no judgment.
A concrete exercise: this week, open a recent PR in your repo that was majority AI-generated, strip out the AI output, and rebuild the change by hand. Time yourself and note every decision point you would otherwise have skipped. The result is a personal map of your skills gaps.