Why Your Engineers Aren't Using AI Tools (And How to Fix It)

Most companies buy AI tools for their engineering teams and see partial adoption. The problem is rarely the tools. It's the change management layer that nobody planned for.


Key takeaways

  • Low AI adoption in engineering teams is almost always a change management problem, not a technology problem.
  • Senior engineers are the most critical adoption vector — if they're sceptical, the rest of the team won't commit.
  • Mandating usage without measuring outcomes produces surface-level compliance, not real adoption.
  • The internal champion model — identifying engineers who have gone deep and making them coaches — is more effective than top-down training programmes.

Every month I talk to a technology leader who bought AI coding tools for the team three to six months ago, got them installed across the organisation, and is now puzzling over why adoption remains shallow.

The tools are available. Some engineers are using them enthusiastically. Most are using them occasionally, for specific tasks. A few have quietly formed the view that the tools don’t fit their workflow and have stopped using them entirely.

This pattern is predictable — not because engineers are resistant to productivity improvements, but because the adoption playbook for AI tools is different from the adoption playbook for infrastructure or SaaS tools.

The difference between installation and adoption

When a company deploys a new code review tool or a new CI platform, adoption is mostly a configuration problem. You set it up, you integrate it with existing workflows, you communicate the change. Engineers adapt because the tool is in the critical path.

AI coding tools don’t work this way. They’re adjacent to the critical path, not in it. An engineer can deliver code without ever opening Cursor. The tool only delivers value when the engineer is actively using it and has developed enough familiarity to get good output from it.

This is a change management problem, not a technology problem. And most companies don’t plan for it.

Why senior engineers are the critical variable

The most important factor in team-wide AI adoption is whether senior engineers are genuinely using the tools.

Senior engineers are the technical reference points for the rest of the team. Junior and mid-level engineers look at what senior engineers do and infer what good practice looks like. If senior engineers are sceptical about AI tools — using them rarely, or not hiding that scepticism — the signal to the rest of the team is that serious engineers don’t really use this stuff.

Senior engineer resistance usually comes from one of two places. The first is identity threat: engineers who built their reputation on deep technical skill may feel that AI assistance devalues what they know. The second is justified scepticism, usually formed during early bad experiences with the tools.

Both are real, and both require direct engagement rather than a top-down mandate.

The internal champion model

The most effective adoption approach I’ve seen is not training programmes or mandates. It is identifying the engineers who have already gone deep on AI-native workflows and making them the adoption vector.

Most teams have at least one or two engineers who experimented early and found approaches that work. They’ve developed prompting patterns that produce good results. They can articulate where the tools are genuinely useful and where they’re not. They have credibility with the rest of the team.

Give these engineers time and recognition to share what they've learned: a weekly 30-minute show-and-tell (informal, not a presentation) where someone walks through a specific workflow they've improved with AI; pairing sessions where a champion works alongside a sceptic on a real task; an internal Slack channel where useful prompts and approaches get shared.

In my experience, peer-to-peer learning beats top-down training by roughly three to one in adoption impact.

Measuring real adoption

If you’re measuring adoption by tool logins or seat utilisation, you’re measuring compliance, not adoption.

Real adoption shows up in outcome metrics. Deployment frequency should increase. PR review cycle time should decrease. Time from first commit to production should shorten. Test coverage should improve, not decline.
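These metrics don't require a vendor dashboard. As a rough illustration, here is a minimal sketch of tracking one of them, median PR time-to-merge, from the GitHub REST API. The owner and repo names are hypothetical placeholders, and it assumes a GitHub-hosted repo with a token in the GH_TOKEN environment variable; adapt it to whatever source control you actually run.

```python
# Minimal sketch: median PR time-to-merge for one repo, via the GitHub REST API.
# Assumes a token in GH_TOKEN; "your-org"/"your-repo" are hypothetical placeholders.
import os
import statistics
from datetime import datetime

import requests

OWNER, REPO = "your-org", "your-repo"  # substitute your own


def merged_pr_durations(owner: str, repo: str, pages: int = 5) -> list[float]:
    """Return time-to-merge in hours for recently closed, merged PRs."""
    durations = []
    headers = {"Authorization": f"Bearer {os.environ['GH_TOKEN']}"}
    for page in range(1, pages + 1):
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/pulls",
            params={"state": "closed", "per_page": 100, "page": page},
            headers=headers,
            timeout=30,
        )
        resp.raise_for_status()
        for pr in resp.json():
            if pr.get("merged_at"):  # skip PRs closed without merging
                opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
                merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
                durations.append((merged - opened).total_seconds() / 3600)
    return durations


if __name__ == "__main__":
    hours = merged_pr_durations(OWNER, REPO)
    if hours:
        print(f"Median time-to-merge over {len(hours)} PRs: {statistics.median(hours):.1f}h")
```

The point is less the script than the habit: sample these numbers before rollout, then watch the trend after, so the adoption conversation is anchored in your own baseline rather than tool usage logs.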

If your AI adoption dashboard shows high usage but your engineering velocity metrics are flat, something is wrong. Either the tools aren’t being used on meaningful work, or the team hasn’t yet developed the proficiency to get value from them.

The measurement conversation is also the mandate conversation. “We expect AI tools to improve these metrics” is a more useful frame than “we expect everyone to use these tools.”

What doesn’t work

Top-down mandates without measurement produce checkbox behaviour. Engineers log in, use the tool for trivial tasks, and tick the box.

Pilot groups create a two-tier team. The pilot group develops expertise and habits; everyone else waits and falls further behind. By the time the rollout happens, the cultural gap is significant.

Generic training programmes that don’t connect to the team’s actual work produce generic results. Engineers need to see AI working on their codebase, their problems, their patterns — not a demo app.


If your team is stuck in the plateau between tool deployment and real AI adoption, let’s talk about what it takes to break through.


Frequently asked questions

Why do engineers resist AI coding tools?
The most common reasons: senior engineers who built their identity around deep code expertise feel threatened; engineers who tried the tools early and had poor experiences formed a negative impression that stuck; and on teams with no shared standard, engineers are uncertain about whether and how to use AI in their workflow.
How do you measure real AI adoption versus surface-level compliance?
Measure outcomes, not usage. Deployment frequency, time from commit to production, PR review cycle time, test coverage. If AI adoption is real, these metrics move. If it's compliance, they don't.
What does an internal champion model look like in practice?
Identify two or three engineers who have invested in AI-native workflows and are seeing results. Give them time and recognition to share what they've learned with the rest of the team — through show-and-tell sessions, pairing, and async documentation. Peer credibility is more influential than management mandates.

Written by

Rajesh Prabhu

Fractional CTO & Founder

Rajesh Prabhu is the founder of Seven Technologies and 124Tech. He specialises in AI-first engineering, Harness Engineering methodology, and helping teams operate at a fundamentally higher level of leverage with AI tooling.