A technology leader I worked with recently pulled up a spreadsheet of his company’s software subscriptions. AI tools alone: fourteen line items. When we went through each one and asked who was actively using it, the answer was: two tools with meaningful adoption, three with occasional use, and nine that had been evaluated and quietly shelved.
Annual spend on tools the team wasn’t using: significant. But the subscription cost wasn’t the real problem.
The cost that doesn’t show up on the invoice
When teams have many AI tools without a coherent strategy, three things happen.
First, no one goes deep. Each tool requires learning — not just the interface but the mental model for when to use it, how to prompt it effectively, and where it fits in the workflow. Spreading attention across fourteen tools means nobody develops real proficiency with any of them.
Second, there’s no institutional knowledge. The engineer who figured out the right way to use Tool X for code review leaves, and that knowledge leaves with them. There’s no shared understanding of how AI fits into the team’s work.
Third, the tools don’t compound. In a coherent AI stack, tools reinforce each other — the output of one feeds the input of another, and the team develops patterns for combining them effectively. In a sprawl scenario, each tool is an island.
The result is a productivity plateau. The team is technically “using AI” but not systematically. Usage is fragmented, expertise is shallow, and ROI is hard to demonstrate.
How to audit what you actually have
Pull three data points:
Active users in the last 30 days. Not who has an account — who has logged in and done something with it. For most tools, the vendor can provide this. For tools deployed internally, your identity provider will have the data; a short script for computing this follows the list.
What workflows the tool touches. Is it used for a specific task in a specific part of the team’s process? Or is it available but discretionary? Tools that touch a defined workflow get used. Tools that are “available when useful” usually aren’t.
What the team’s honest assessment is. Ask directly: which tools have changed how you work? Which ones do you open every day? The qualitative feedback usually tells you more than the usage data.
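If your identity provider can export sign-in events, the 30-day active count is a few lines of scripting. Here is a minimal sketch in Python; the filename and the tool/user/timestamp column names are assumptions, so adapt them to whatever your IdP actually exports.

```python
# Sketch: 30-day active users per AI tool from an IdP sign-in export.
# Assumes a CSV with "tool", "user", and "timestamp" (ISO 8601) columns --
# hypothetical names; match them to your provider's real export format.
import csv
from collections import defaultdict
from datetime import datetime, timedelta, timezone

CUTOFF = datetime.now(timezone.utc) - timedelta(days=30)

def active_users(path: str) -> dict[str, set[str]]:
    """Map each tool to the set of users who signed in within 30 days."""
    users: defaultdict[str, set[str]] = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            signed_in = datetime.fromisoformat(row["timestamp"])
            if signed_in.tzinfo is None:
                signed_in = signed_in.replace(tzinfo=timezone.utc)  # assume UTC
            if signed_in >= CUTOFF:
                users[row["tool"]].add(row["user"])
    return dict(users)

if __name__ == "__main__":
    counts = active_users("signin_events.csv")  # hypothetical export file
    for tool, members in sorted(counts.items(), key=lambda kv: -len(kv[1])):
        print(f"{tool}: {len(members)} active users in the last 30 days")
```

Sorted this way, the consolidation candidates are the tools at the bottom of the printout.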
In most audits, two or three tools account for the majority of value. The rest are either redundant, misaligned with the team’s actual workflows, or never got past the evaluation stage.
What a coherent AI stack looks like
The structure I recommend to most teams:
One primary coding tool. Cursor is currently the strongest option for most engineering teams. Make it the standard, not a choice. Consistency in tooling is more valuable than optionality.
One primary reasoning tool. Claude or GPT-4, used for architecture conversations, debugging complex problems, drafting documentation, and high-level thinking. The team should have a shared view on which one and why.
Deliberate automation decisions. For specific workflows — customer support triage, content generation, data processing — decide explicitly what to build versus buy, and measure the outcome. Don’t let these accumulate passively.
Everything beyond this list needs a clear answer to: what does this tool do that the primary tools can’t, and who is responsible for making sure it’s actually used?
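One way to make that rule operational is a lightweight tool registry: every non-primary tool records the capability the primaries lack and a named owner. The sketch below is hypothetical; the ToolEntry fields and the example entries are illustrative, not a prescribed schema.

```python
# Sketch: a minimal registry enforcing the rule above. Any tool outside
# the primary stack must name a unique capability and an owner; entries
# that can't are flagged. Field names and examples are hypothetical.
from dataclasses import dataclass

@dataclass
class ToolEntry:
    name: str
    unique_capability: str  # what the primary tools can't do
    owner: str              # who is responsible for adoption

REGISTRY = [
    ToolEntry("meeting-transcriber", "audio to searchable meeting notes", "ops lead"),
    ToolEntry("image-generator", "", ""),  # no answer to either question
]

def flag_for_review(registry: list[ToolEntry]) -> list[ToolEntry]:
    """Return tools that can't justify a place in the stack."""
    return [t for t in registry if not t.unique_capability or not t.owner]

for tool in flag_for_review(REGISTRY):
    print(f"Review or cancel: {tool.name}")
```

Anything the registry flags goes straight onto the consolidation agenda.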
The consolidation conversation
Most companies don’t need an AI strategy meeting to fix sprawl. They need an audit followed by a decision: which tools stay, which get cancelled, and which get promoted to team standards.
The consolidation frees up both budget and attention. And attention is the scarce resource.
If your team is navigating AI tool consolidation or wants a structured review of your current stack, let’s talk.
