Every company I’ve worked with in the past two years has wanted an AI strategy. About half of them were ready to actually execute one. Understanding the gap between wanting AI and being ready to use it effectively is the starting point for any serious AI initiative.
This is the framework I use to assess where a company actually stands.
Why most AI initiatives fail before they start
The pattern is consistent: leadership decides to pursue AI, a team is assembled (or a vendor is engaged), three months pass, and the output is either a demo that can’t be productionized or a pilot that doesn’t generalize beyond its test case.
The problem isn’t the AI technology. It’s that the organization wasn’t ready. The data wasn’t in shape. The engineering team didn’t have the right skills. The deployment infrastructure didn’t exist. The business process the AI was supposed to improve wasn’t well-defined enough to automate.
An AI maturity assessment forces an honest look at these fundamentals before any investment is made.
The five levels of AI maturity
Level 1: Ad Hoc
What it looks like: Individual employees using AI tools (ChatGPT, Copilot) for personal productivity. No systematic data collection for AI purposes. No ML in production. Technology team may have experimented with models but nothing has shipped.
What most companies believe: “We’re already using AI tools across the organization.”
What’s actually true: Using AI tools is not organizational AI maturity. The tools are helping individuals be more productive, but the organization’s core processes are unchanged.
How to assess: Ask to see the last three AI initiatives. Where are they now? What was the outcome? Why didn’t they proceed?
Level 2: Emerging
What it looks like: Isolated AI applications in production — a recommendation widget, a fraud detection model, basic NLP on customer tickets. Usually built by one or two engineers who taught themselves ML. The models exist but aren’t well-monitored, aren’t being retrained, and are often quietly failing.
The tell: “We have an AI model but I’m not sure it’s still working well.”
What’s holding companies here: Data is siloed. No one owns the data platform. The engineers who built the models are overwhelmed. There’s no systematic process for moving from experiment to production.
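That "quietly failing" state is usually detectable with very little machinery — which is exactly the machinery Level 2 organizations lack. As an illustration (a sketch, not a production monitoring system; the data below is synthetic), a basic drift check compares the recent distribution of a model's output scores against a baseline captured at deployment, using the Population Stability Index:

```python
import numpy as np

def psi(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score samples.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 investigate.
    """
    # Bin edges come from the baseline so both samples are compared
    # on the same grid; outer edges are opened to catch out-of-range scores.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    b_pct = np.histogram(baseline, edges)[0] / len(baseline)
    r_pct = np.histogram(recent, edges)[0] / len(recent)
    # A small epsilon avoids log(0) and division by zero on empty bins.
    eps = 1e-6
    return float(np.sum((r_pct - b_pct) * np.log((r_pct + eps) / (b_pct + eps))))

# Synthetic example: scores logged at deployment vs. last week's scores.
rng = np.random.default_rng(0)
deploy_scores = rng.beta(2, 5, 10_000)
recent_scores = rng.beta(2, 5, 10_000)
print(f"PSI: {psi(deploy_scores, recent_scores):.3f}")
```

A check like this, run on a schedule against logged predictions, is the difference between "I'm not sure it's still working" and knowing.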
Level 3: Systematic
What it looks like: AI embedded in specific business processes with defined inputs, outputs, and success metrics. A data platform exists. Models are monitored, versioned, and retrained on a schedule. There’s a team (even a small one) that owns AI as a discipline.
The key difference from Level 2: The organization knows how to repeat its AI successes. Level 2 companies can’t reliably ship AI. Level 3 companies can.
What it takes to get here: Typically 6–18 months of foundational data work. This is the hardest transition on the maturity curve.
Level 4: Strategic
What it looks like: AI is a recognized competitive advantage. Multiple AI systems in production across different functions. A dedicated ML/AI team. Active monitoring of the competitive AI landscape. AI factors into product strategy and business model decisions.
The tell: The CEO can articulate specifically how AI creates competitive advantage for the company, not just “we use AI.”
Level 5: Transformative
What it looks like: AI is reshaping the business model itself. Not just improving existing processes but enabling entirely new products, services, or operating models that weren’t possible without AI.
Very few companies are genuinely here. Most companies that claim to be are at Level 4.
The four dimensions I assess
Levels are a useful shorthand, but the real assessment is across four dimensions:
1. Data infrastructure
The questions I ask:
- Where is your core operational data? In how many systems?
- Can you query your data without asking engineering?
- What does your data quality process look like?
- How long does it take to get data into a format suitable for model training?
What I’m looking for: Whether data can reliably flow from operational systems to wherever it needs to be for AI training and inference. In my experience, this is almost always in worse shape than the company believes.
2. Team capability
The questions I ask:
- Who on your team has shipped ML in production?
- What’s your ratio of data engineers to data scientists?
- How does an experiment become a production model at your company?
The most common gap: Companies hire data scientists before they have data engineers. Data scientists without clean, accessible data spend most of their time doing data cleaning — not modeling.
3. Organizational processes
The questions I ask:
- How does a new AI initiative get started at your company?
- Who decides whether an AI system goes to production?
- How do you handle it when an AI model degrades in production?
What I’m looking for: Whether there’s a repeatable process or whether each AI initiative is a one-off heroic effort.
4. Current AI portfolio
The questions I ask:
- What AI systems are currently in production?
- How is each one monitored?
- When were they last updated?
What I’m looking for: What the existing portfolio reveals about where the organization actually sits on the maturity curve. The pattern of past AI initiatives tells you more about organizational readiness than the business case for the next one.
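The answers to these portfolio questions fit naturally into a simple inventory, and even a minimal one makes staleness visible. A sketch (all model names, owners, and the 180-day threshold are hypothetical):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One row of a production-AI inventory (field names are illustrative)."""
    name: str
    owner: str
    monitored: bool
    last_retrained: date

def stale_models(inventory: list[ModelRecord], today: date,
                 max_age_days: int = 180) -> list[str]:
    """Flag models that are unmonitored or haven't been retrained recently."""
    return [m.name for m in inventory
            if not m.monitored or (today - m.last_retrained).days > max_age_days]

inventory = [
    ModelRecord("fraud-scorer", "risk-team", True, date(2024, 11, 1)),
    ModelRecord("ticket-router", "support-eng", False, date(2023, 2, 15)),
]
print(stale_models(inventory, today=date(2025, 1, 10)))  # → ['ticket-router']
```

If a company can't populate a table like this from memory or from a wiki page, that itself is an assessment finding.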
How to use this framework
The output of an AI maturity assessment isn’t a level. It’s a set of specific gaps between where you are and where you need to be to execute the AI initiatives you’re considering.
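One way to record that output is per-dimension scores against the level each dimension needs to reach for the initiatives under consideration. A sketch (the dimension names come from the four dimensions above; the scores and targets are invented for illustration):

```python
# Hypothetical current scores (1-5) for the four assessment dimensions,
# and the level each needs to reach for the planned initiatives.
current = {"data_infrastructure": 2, "team_capability": 2,
           "org_processes": 1, "ai_portfolio": 2}
target = {"data_infrastructure": 3, "team_capability": 3,
          "org_processes": 3, "ai_portfolio": 3}

# The assessment output is the gap per dimension, not an overall level.
gaps = {dim: target[dim] - score
        for dim, score in current.items() if score < target[dim]}

# Largest gaps first: these dictate the sequencing of the roadmap.
for dim, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{dim}: {gap} level(s) short of target")
```

The ranking matters more than the numbers: the dimension with the largest gap is usually where the roadmap has to start.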
If you’re at Level 1–2 and you want to get to Level 3, the roadmap involves:
- Identifying 2–3 high-value AI use cases that are achievable with your current data
- Building the foundational data infrastructure in parallel
- Staffing the team in the right order (data engineers before data scientists)
- Establishing the operational model for AI in production
If you’re at Level 2–3 and you want to get to Level 4, the roadmap is different — it’s about scaling what works, not building foundations.
The mistake is skipping the assessment and trying to execute at a higher level than you’re actually at.
What to do next
If you’re a technology leader trying to build an honest picture of your company’s AI readiness, start with the four dimensions above. Be honest about the gaps. The companies that make the most progress are usually the ones that admit they’re at Level 1 or 2 rather than pretending they’re further along.
If you want someone external to run a structured assessment — because internal assessments tend to be optimistic — let’s talk.
The AI Readiness Assessment framework in the Handbook goes into more depth on the scoring methodology.
