A pattern I see consistently in mid-size technology organisations: the leadership team is excited about AI tooling, the engineering team starts using AI coding assistants, velocity increases, and three to six months later a new layer of technical debt has formed — faster than the old one.
AI didn’t create the problem. It accelerated the existing tendency.
What actually changes with AI in the picture
The fundamental nature of technical debt doesn’t change. It’s still the cost of shortcuts taken today that create extra work tomorrow. What changes is the rate of accumulation and the categories most likely to grow.
Speed without standards is faster speed without standards. AI coding tools make engineers faster at writing code. If the codebase doesn’t have clear conventions, a functioning linter, and an established review process for AI-generated output, the increased velocity produces more inconsistency, not less. The debt accumulates at the speed of the tool.
The documentation gap widens. AI-generated code is often technically correct but minimally documented. If the review process doesn’t enforce documentation standards as rigorously as it enforces functional correctness, the codebase rapidly becomes harder to reason about. This matters because the people who need to understand the code — future engineers, AI agents working on the codebase later — rely on documentation to get context.
Test coverage diverges. Some teams use AI to generate tests alongside production code. Others use AI to write production code and treat tests as secondary. The second group ends up with a faster-growing, less-tested codebase.
None of these are inevitable. They’re choices. But they’re choices that require active management, not passive adoption.
What AI changes about paying down debt
For specific categories of technical debt, AI is a genuine accelerant to remediation — not just accumulation.
Missing test coverage is the most tractable. An AI agent with good context about a service’s intended behaviour can generate meaningful test cases faster than a human engineer can write them manually. This is mechanical work that humans don’t do well because it’s tedious; AI does it consistently.
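To make this concrete, here is a sketch of the kind of mechanical, edge-case-heavy coverage an agent can produce in bulk. The function and its cases are hypothetical, invented for illustration, not drawn from any real codebase:

```python
# Hypothetical helper an AI agent is asked to cover with tests.
def parse_amount(raw: str) -> int:
    """Parse a monetary string like '£1,250.50' into pence."""
    cleaned = raw.strip().lstrip("£").replace(",", "")
    pounds, _, pence = cleaned.partition(".")
    return int(pounds) * 100 + (int(pence.ljust(2, "0")[:2]) if pence else 0)

# Tedious-but-valuable edge cases: exactly the work humans skip and
# an agent with context about intended behaviour churns out reliably.
cases = [
    ("£1,250.50", 125050),  # thousands separator and pence
    ("0.05", 5),            # leading zero, small amount
    ("  £7  ", 700),        # surrounding whitespace
    ("3.1", 310),           # single pence digit, padded
    ("42", 4200),           # no decimal part
]
for raw, expected in cases:
    assert parse_amount(raw) == expected, (raw, parse_amount(raw))
```

The value is not any single case; it is that enumerating the tedious ones costs the agent nothing, so they actually get written.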
Convention inconsistencies are similarly tractable. A codebase with three different patterns for the same operation can be mechanically refactored to a single standard if the standard is specified clearly. AI can execute this across a large codebase much faster than a human team.
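A hypothetical illustration of what "three patterns for the same operation" looks like, and why the target standard must be specified rather than guessed, since one of the common patterns is subtly buggy:

```python
# Three patterns for the same operation, found side by side in a
# codebase without conventions (illustrative, not from a real repo):
config = {"timeout": 30}

v1 = config["timeout"] if "timeout" in config else 10  # pattern A
v2 = config.get("timeout") or 10   # pattern B: silently wrong if the value is 0
try:                               # pattern C
    v3 = config["timeout"]
except KeyError:
    v3 = 10

# The single standard a mechanical refactor converges on,
# once a human has specified it clearly:
def get_setting(cfg: dict, key: str, default):
    """Canonical config lookup; preserves falsy values, unlike pattern B."""
    return cfg.get(key, default)

assert v1 == v2 == v3 == get_setting(config, "timeout", 10) == 30
```

The refactor itself is mechanical; choosing the standard (and noticing that pattern B mishandles falsy values) is the human contribution.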
Missing documentation — at the function, module, and service level — can be generated from code context. The output needs human review to catch places where the code doesn’t reflect the intent, but it’s far faster than writing from scratch.
What AI can’t resolve is architectural debt: the accumulated cost of structural decisions that made sense in context and now don’t. Untangling a monolith into services, redesigning a data model that grew incrementally into incoherence, replacing a third-party dependency that’s become a liability — these require deep understanding of business context and historical decisions that AI doesn’t have access to.
The modernisation calculus
The more consequential shift is in how organisations should think about legacy modernisation decisions.
The traditional calculus: migration is expensive and risky; maintaining the existing system is cheaper in the short term. Only migrate when the pain of the legacy system becomes intolerable.
The AI-era calculus is different. A well-structured modern codebase benefits from AI leverage in ways that a legacy codebase doesn’t. Engineers working in a clean codebase with good documentation and test coverage get more out of AI coding tools. The feedback loops are faster. The AI-generated code is more consistent. The harness works better.
This means the gap between a team working on a modern, well-structured codebase and a team maintaining a legacy system is growing. The ongoing cost of the legacy system now includes an opportunity cost that didn’t exist before: the AI leverage the team is not getting.
That changes the migration calculation for many organisations that were previously content to maintain.
What this means in practice
If your codebase has high technical debt, the priority before broad AI adoption is setting standards, not deploying tools. Define what the codebase should look like, encode that in linters and AGENTS.md files, and establish a review process that enforces the standard for AI-generated code as rigorously as for human-written code.
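As a sketch of what "encode the standard" can mean in practice, a hypothetical AGENTS.md fragment — illustrative wording, not a prescribed template:

```markdown
# AGENTS.md (illustrative fragment)

## Conventions
- One pattern per operation: config reads go through the shared helper,
  never ad-hoc dict access. The linter enforces this.
- Every public function carries a docstring stating intent, not just mechanics.
- New code ships with tests in the same change; CI blocks merges below
  the agreed coverage floor.

## Review
- AI-generated code is reviewed to the same standard as human-written code.
- Reviewers check that docstrings match behaviour, not only that tests pass.
```

The point is not this particular wording; it is that the standard exists in a machine-readable place before the tools arrive, so the tools amplify the convention rather than the inconsistency.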
If your organisation is weighing legacy modernisation, the AI leverage argument belongs in the business case alongside the traditional cost-of-maintenance calculation.
If you’re navigating technical debt alongside AI adoption and want a structured approach, let’s talk.
