The AI Maturity Assessment: Where Does Your Company Actually Stand?

Most companies think they're more AI-ready than they are. This is the framework I use to honestly assess where a company stands — and what it would take to move forward.

· 6 min read
ai strategy assessment transformation maturity-model

Key takeaways

  • Most companies are at Level 1 or 2 on AI maturity — further behind than leadership believes.
  • The biggest barrier isn't technology or budget; it's data quality and organizational readiness.
  • Level 3 (systematic AI) requires foundational data infrastructure that takes 6–18 months to build regardless of how fast you move.
  • The right AI investment for your company depends entirely on where you are in the maturity curve, not what your competitors are doing.
  • Quick wins at Level 1–2 matter: they build organizational confidence and reveal the real data problems.

Every company I’ve worked with in the past two years has wanted an AI strategy. About half of them were ready to actually execute one. Understanding the gap between wanting AI and being ready to use it effectively is the starting point for any serious AI initiative.

This is the framework I use to assess where a company actually stands.

Why most AI initiatives fail before they start

The pattern is consistent: leadership decides to pursue AI, a team is assembled (or a vendor is engaged), three months pass, and the output is either a demo that can’t be productionized or a pilot that doesn’t generalize beyond its test case.

The problem isn’t the AI technology. It’s that the organization wasn’t ready. The data wasn’t in shape. The engineering team didn’t have the right skills. The deployment infrastructure didn’t exist. The business process the AI was supposed to improve wasn’t well-defined enough to automate.

An AI maturity assessment forces an honest look at these fundamentals before any investment is made.

The five levels of AI maturity

Level 1: Ad Hoc

What it looks like: Individual employees using AI tools (ChatGPT, Copilot) for personal productivity. No systematic data collection for AI purposes. No ML in production. Technology team may have experimented with models but nothing has shipped.

What most companies believe: “We’re already using AI tools across the organization.”

What’s actually true: Using AI tools is not organizational AI maturity. The tools are helping individuals be more productive, but the organization’s core processes are unchanged.

How to assess: Ask to see the last three AI initiatives. Where are they now? What was the outcome? Why didn’t they proceed?

Level 2: Emerging

What it looks like: Isolated AI applications in production — a recommendation widget, a fraud detection model, basic NLP on customer tickets. Usually built by one or two engineers who taught themselves ML. The models exist but aren’t well-monitored, aren’t being retrained, and are often quietly failing.

The tell: “We have an AI model but I’m not sure it’s still working well.”

What’s holding companies here: Data is siloed. No one owns the data platform. The engineers who built the models are overwhelmed. There’s no systematic process for moving from experiment to production.

Level 3: Systematic

What it looks like: AI embedded in specific business processes with defined inputs, outputs, and success metrics. A data platform exists. Models are monitored, versioned, and retrained on a schedule. There’s a team (even a small one) that owns AI as a discipline.

The key difference from Level 2: The organization knows how to repeat its AI successes. Level 2 companies can’t reliably ship AI. Level 3 companies can.

What it takes to get here: 6–18 months of foundational data work. This is the hardest transition in the maturity curve.
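The monitoring gap that separates Level 2 from Level 3 ("not sure it's still working") can be made concrete. As an illustrative sketch only — not a prescribed tool or the methodology described here — a scheduled job could compare the distribution of recent model scores against a baseline captured at deployment, using a population stability index (PSI). The thresholds below are a common rule of thumb, and the synthetic data is invented for the example:

```python
import math
import random

def psi(baseline, recent, bins=10):
    """Population Stability Index between two score samples.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins

    def bucket_pcts(values):
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / width)
            if idx == bins:          # v == hi lands exactly on the top edge
                idx = bins - 1
            if 0 <= idx < bins:      # values outside the baseline range are dropped
                counts[idx] += 1
        # floor at a tiny value so the log below is defined for empty buckets
        return [max(c / len(values), 1e-6) for c in counts]

    b, r = bucket_pcts(baseline), bucket_pcts(recent)
    return sum((rp - bp) * math.log(rp / bp) for bp, rp in zip(b, r))

random.seed(7)
baseline = [random.gauss(0.5, 0.1) for _ in range(5000)]  # scores at deployment
stable   = [random.gauss(0.5, 0.1) for _ in range(5000)]  # same distribution
drifted  = [random.gauss(0.7, 0.1) for _ in range(5000)]  # inputs have shifted

print(f"stable PSI:  {psi(baseline, stable):.3f}")   # well under 0.1
print(f"drifted PSI: {psi(baseline, drifted):.3f}")  # well over 0.25
```

A Level 3 organization runs something like this on a schedule and has a defined owner for the alert; a Level 2 organization finds the drift when a customer complains.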

Level 4: Strategic

What it looks like: AI is a recognized competitive advantage. Multiple AI systems in production across different functions. A dedicated ML/AI team. Active monitoring of the competitive AI landscape. AI factors into product strategy and business model decisions.

The tell: The CEO can articulate specifically how AI creates competitive advantage for the company, not just “we use AI.”

Level 5: Transformative

What it looks like: AI is reshaping the business model itself. Not just improving existing processes but enabling entirely new products, services, or operating models that weren’t possible without AI.

Very few companies are genuinely here. Most companies that claim to be are at Level 4.

The four dimensions I assess

Levels are a useful shorthand, but the real assessment is across four dimensions:

1. Data infrastructure

The questions I ask:

  • Where is your core operational data? In how many systems?
  • Can you query your data without asking engineering?
  • What does your data quality process look like?
  • How long does it take to get data into a format suitable for model training?

What I’m looking for: Whether data can reliably flow from operational systems to wherever it needs to be for AI training and inference. Most companies underestimate how bad this problem is.
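To make the last question concrete: a first-pass data-readiness audit doesn't need a platform, just an honest count of what's missing. A minimal sketch over rows-as-dicts — the field names and sample rows are invented for illustration:

```python
def audit(records, required_fields):
    """Quick data-readiness audit over a list of row dicts:
    counts missing required fields and exact-duplicate rows."""
    missing = {f: 0 for f in required_fields}
    seen, duplicates = set(), 0
    for row in records:
        for f in required_fields:
            if row.get(f) in (None, ""):
                missing[f] += 1
        key = tuple(sorted(row.items()))  # order-independent row fingerprint
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(records), "missing": missing, "duplicates": duplicates}

rows = [
    {"customer_id": "c1", "label": "churned"},
    {"customer_id": "c2", "label": ""},          # missing label
    {"customer_id": "c1", "label": "churned"},   # exact duplicate
]
report = audit(rows, required_fields=["customer_id", "label"])
print(report)  # {'rows': 3, 'missing': {'customer_id': 0, 'label': 1}, 'duplicates': 1}
```

If an hour of this on your core tables turns up double-digit missing-label percentages, that's your answer to "how long until the data is training-ready" — and it usually isn't the answer leadership expects.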

2. Team capability

The questions I ask:

  • Who on your team has shipped ML in production?
  • What’s your ratio of data engineers to data scientists?
  • How does an experiment become a production model at your company?

The most common gap: Companies hire data scientists before they have data engineers. Data scientists without clean, accessible data spend most of their time doing data cleaning — not modeling.

3. Organizational processes

The questions I ask:

  • How does a new AI initiative get started at your company?
  • Who decides whether an AI system goes to production?
  • How do you handle it when an AI model degrades in production?

What I’m looking for: Whether there’s a repeatable process or whether each AI initiative is a one-off heroic effort.

4. Current AI portfolio

The questions I ask:

  • What AI systems are currently in production?
  • How is each one monitored?
  • When were they last updated?

What I’m looking for: Concrete evidence of where the company actually sits on the Level 2–3 boundary. The pattern of past AI initiatives tells you more about organizational readiness than the business case for the next one does.
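The four dimensions can be rolled up into a simple gap report. This is a toy sketch of the idea, not the Handbook's scoring methodology: the 1–5 self-ratings and the weakest-link rule are simplifications invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class MaturityAssessment:
    scores: dict  # dimension name -> honest 1..5 self-rating

    def overall_level(self) -> int:
        # Weakest-link rule: the lowest dimension caps effective maturity,
        # because strong modeling talent can't compensate for siloed data.
        return min(self.scores.values())

    def gaps(self, target: int) -> dict:
        # The useful output isn't the level; it's the per-dimension gaps.
        return {d: target - s for d, s in self.scores.items() if s < target}

example = MaturityAssessment({
    "data_infrastructure": 2,
    "team_capability": 3,
    "organizational_processes": 2,
    "ai_portfolio": 1,
})
print(example.overall_level())  # 1
print(example.gaps(target=3))   # data infra +1, processes +1, portfolio +2
```

The weakest-link rule encodes the point made throughout this piece: a company with a strong team but Level 1 data infrastructure behaves, in practice, like a Level 1 company.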

How to use this framework

The output of an AI maturity assessment isn’t a level. It’s a set of specific gaps between where you are and where you need to be to execute the AI initiatives you’re considering.

If you’re at Level 1–2 and you want to get to Level 3, the roadmap involves:

  1. Identifying 2–3 high-value AI use cases that are achievable with your current data
  2. Building the foundational data infrastructure in parallel
  3. Staffing the team in the right order (data engineers before data scientists)
  4. Establishing the operational model for AI in production

If you’re at Level 2–3 and you want to get to Level 4, the roadmap is different — it’s about scaling what works, not building foundations.

The mistake is skipping the assessment and trying to execute at a higher level than you’re actually at.

What to do next

If you’re a technology leader trying to build an honest picture of your company’s AI readiness, start with the four dimensions above. Be honest about the gaps. The companies that make the most progress are usually the ones that admit they’re at Level 1 or 2 rather than pretending they’re further along.

If you want someone external to run a structured assessment — because internal assessments tend to be optimistic — let’s talk.


The AI Readiness Assessment framework in the Handbook goes into more depth on the scoring methodology.

Free Assessment

Is your team ready to act on this?

Find out where your engineering organisation stands across 7 dimensions — AI adoption, testing, culture, governance, and more. Takes 8 minutes.

Frequently asked questions

What is an AI maturity assessment?
An AI maturity assessment evaluates where an organization currently stands in its ability to develop, deploy, and benefit from AI systems. It looks at data infrastructure, team capabilities, organizational processes, and current AI usage to establish a baseline and identify the highest-leverage next steps.
How long does an AI maturity assessment take?
A thorough AI maturity assessment typically takes 2–3 weeks: one week of interviews and data collection, one week of analysis, and a final week to write up and validate findings. The output is a written report with a prioritized roadmap.
What are the main AI maturity levels?
The five levels are: Level 1 (Ad Hoc) — no systematic AI usage; Level 2 (Emerging) — isolated experiments and basic automation; Level 3 (Systematic) — AI embedded in specific processes with measurable results; Level 4 (Strategic) — AI as a core competitive capability across multiple functions; Level 5 (Transformative) — AI reshaping the business model and value chain.
What's the most common finding in an AI maturity assessment?
The most common finding is that data quality and data infrastructure are far behind where leadership believed. Companies often have more data than they can use but it's siloed, inconsistently formatted, or lacks the labels needed for supervised learning. This is the most common bottleneck to advancing AI maturity.
How do you move from Level 2 to Level 3 AI maturity?
The Level 2 to Level 3 transition requires three things working together: a data platform that makes clean data accessible to model training and inference, at least one or two internal people with hands-on ML skills, and an organizational process for moving AI experiments to production. Most companies underinvest in all three and wonder why pilots don't scale.

Written by

Rajesh Prabhu

Fractional CTO & Founder

Rajesh Prabhu is the founder of Seven Technologies and 124Tech. He specialises in AI-first engineering, Harness Engineering methodology, and helping teams operate at a fundamentally higher level of leverage with AI tooling.