Ninety days is not a magic number. But it is a useful constraint. It’s long enough to see real change in how a team works, short enough to sustain focus, and specific enough that you can tell at the end whether it worked.
Here is the framework I use for AI-native engineering transformations. It isn’t a rigid script — every organisation starts from a different place — but the phases and sequencing hold across contexts.
Phase 1: Assess (Weeks 1–3)
Before any tool is deployed, the team needs an honest baseline.
What to do:
- Interview engineers at every level: what AI tools are they currently using, how, and what’s getting in the way?
- Audit the engineering environment: CI/CD pipeline speed, test coverage, documentation quality, clarity of service boundaries. These are the foundations that determine how well AI tools will work.
- Identify two or three candidate workflows for the pilot — specific, measurable processes where AI assistance could meaningfully change throughput or quality.
What you’re looking for: The difference between where the team thinks it is and where it actually is. In most assessments, CI pipelines are slower than people recall, documentation is patchier than assumed, and individual AI tool usage is more ad hoc than it appears.
The assessment output is a one-page picture of current state and a ranked list of pilot candidates.
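One way to make the CI/CD part of the audit concrete is to baseline pipeline durations from exported run data rather than from memory. A minimal sketch in Python, assuming a hypothetical `pipeline_runs.csv` export with `duration_seconds` and `status` columns; the file name and column names are placeholders, but most CI systems can produce an equivalent export:

```python
import csv
import statistics

def ci_baseline(path):
    """Summarise CI durations and failure rate from a CSV export.

    Assumes columns: duration_seconds, status ("success" / "failure").
    """
    durations, failures, total = [], 0, 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            durations.append(float(row["duration_seconds"]))
            if row["status"] != "success":
                failures += 1
    durations.sort()
    return {
        "runs": total,
        "median_s": statistics.median(durations),
        # Nearest-rank p90; fine for a baseline, not for statistics papers.
        "p90_s": durations[int(0.9 * (len(durations) - 1))],
        "failure_rate": failures / total,
    }
```

Comparing the median and p90 here against what engineers estimated in interviews is usually where the perception gap from the assessment shows up first.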
Phase 2: Pilot (Weeks 4–8)
Deploy AI-native workflows to the full team — not a subset — working on real production problems.
What to do:
- Set up the primary AI coding tool (Cursor) as the team standard. Everyone, every discipline, from day one.
- Build the initial governance files: AGENTS.md and guardrails.md for the two or three highest-traffic repositories.
- Establish a weekly knowledge-sharing session. Engineers share what’s working, what isn’t, and how they’re prompting. Peer learning is faster than formal training.
- Start measuring: AI contribution rate, acceptance rate, CI failure rate, deployment frequency. You need the data before Phase 3.
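Governance files do not need to be elaborate to be useful. A hypothetical sketch of what an AGENTS.md for one pilot repository might contain; every project detail, command, and rule below is an illustrative placeholder, not a prescribed template:

```markdown
# AGENTS.md

## Project context
- Payments service, Python 3.12. Service boundaries are described
  in docs/architecture.md.

## Conventions agents must follow
- All new code requires tests; run `make test` before proposing a change.
- Never modify files under migrations/ without an explicit instruction.

## Review expectations
- Flag any change touching auth or billing for human review.
```

The point is less the specific rules than that the rules live in the repository, where both agents and engineers read them, and get updated as the team learns what the harness needs.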
What to watch for: Partial adoption. A team where half the engineers are using AI tools and half aren’t creates friction and makes results hard to interpret. If you encounter resistance, address it directly rather than working around it.
Phase 3: Embed (Weeks 9–12)
The pilot has produced data and visible results. Phase 3 locks in what works and extends it.
What to do:
- Expand governance files to every active repository, not just the pilot ones.
- Formalise the review standards: which acceptance rates suggest active judgment, what a quality gate looks like, and how rejected suggestions get fed back to improve the harness.
- Identify the engineers who have gone deepest on AI-native workflows. They become internal coaches, not a separate AI team.
- Set up the measurement cadence that will continue after the engagement: what gets reported to leadership, how often, and who owns it.
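One possible shape for the quality-gate side of those review standards is a small pre-merge check that routes risky changes to a second reviewer. This is a sketch under stated assumptions: the sensitive path prefixes and the 70% AI-share threshold are placeholders that a team would tune to its own repositories:

```python
# Illustrative pre-merge quality gate: route changes that are mostly
# AI-generated, or that touch sensitive areas, to an extra reviewer.
SENSITIVE_PREFIXES = ("auth/", "billing/", "migrations/")  # placeholder list

def needs_extra_review(ai_lines, total_lines, changed_paths,
                       ai_share_threshold=0.7):
    """Return True if a change should get a second human reviewer."""
    if total_lines <= 0:
        return False
    ai_share = ai_lines / total_lines
    touches_sensitive = any(
        p.startswith(SENSITIVE_PREFIXES) for p in changed_paths
    )
    return touches_sensitive or ai_share >= ai_share_threshold
```

A gate like this also produces the feedback loop the review standards call for: every change it flags is a data point on where AI-generated code needs the most human judgment.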
The goal by day 90: The team doesn’t need external support to continue improving. They have the tools, the governance, the internal expertise, and the measurement framework to keep advancing on their own.
What to measure throughout
Three metrics matter more than anything else:
AI contribution rate — what percentage of committed code was AI-generated. This isn’t a target to maximise; it’s a signal of how embedded the workflow has become.
Acceptance rate — what percentage of AI suggestions engineers accepted. A rate above 80% suggests insufficient scrutiny. A rate below 20% suggests the harness isn’t giving agents enough context. The healthy range is somewhere in between.
Deployment frequency — are teams shipping more often? This is the business outcome. Everything else is a proxy.
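The acceptance-rate bounds above translate directly into a health check that can run on the weekly numbers. A minimal sketch: the 20% and 80% bounds come from the rule of thumb above, while the function and label names are placeholders:

```python
def acceptance_health(accepted, offered, low=0.20, high=0.80):
    """Classify an acceptance rate against rule-of-thumb bounds.

    Above `high`: suggestions may not be getting real scrutiny.
    Below `low`: the harness may not be giving agents enough context.
    """
    if offered == 0:
        return "no-data"
    rate = accepted / offered
    if rate > high:
        return "check-scrutiny"
    if rate < low:
        return "check-context"
    return "healthy"
```

The labels matter more than the maths: the point of the metric is to trigger a conversation about scrutiny or context, not to hit a number.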
What this framework is not
It is not a guarantee of specific productivity numbers. Results depend on the starting environment, team dynamics, and how seriously the organisation commits to the process.
It is not a substitute for engineering fundamentals. AI-native workflows compound on top of good engineering practices. They don’t replace them.
If your organisation is planning an AI transformation and wants a structured approach, let’s talk.
