Overview
This framework guides a structured assessment of organizational AI readiness. It goes deeper than the AI Maturity Model blog post — this is the working document I use in actual client engagements.
An AI readiness assessment answers three questions:
- What AI initiatives are achievable given your current state?
- What investments are needed to execute more ambitious initiatives?
- In what order should you make those investments?
Dimension 1: Data Readiness
Data inventory
Before any AI initiative, catalog your data assets:
| Data Type | Source System | Quality (1–5) | Accessibility | Volume |
|---|---|---|---|---|
| Transactional | ERP | ? | SQL query | ? |
| Customer behavior | CRM | ? | API | ? |
| Operational logs | App servers | ? | ELK stack | ? |
What you’re assessing: Do you have the data that your proposed AI use cases require? Is it clean enough to use? Can you get to it?
Data quality dimensions
For each critical data source, assess:
- Completeness: What % of records have all required fields?
- Accuracy: Spot-check 50 records. What % have errors?
- Consistency: Is the same thing represented the same way across systems?
- Timeliness: How stale is the data? For real-time inference, this matters a lot.
- Lineage: Do you know where the data came from and how it was transformed?
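The first two checks above (completeness, and a spot-check for accuracy) are easy to automate. A minimal pandas sketch, where the column names and the "valid email" rule are illustrative assumptions, not part of the framework:

```python
import pandas as pd

# Hypothetical sample: three of five records are missing a required field.
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "email": ["a@x.com", None, "c@x.com", "d@x.com", None],
    "signup_date": ["2024-01-01", "2024-02-01", None, "2024-04-01", "2024-05-01"],
})

required = ["customer_id", "email", "signup_date"]

# Completeness: % of records with every required field populated.
complete = df[required].notna().all(axis=1).mean()
print(f"Completeness: {complete:.0%}")  # 2 of 5 rows fully populated -> 40%

# Accuracy spot-check on a random sample (here: a trivial email-format rule).
sample = df.dropna(subset=["email"]).sample(n=3, random_state=0)
error_rate = (~sample["email"].str.contains("@")).mean()
print(f"Spot-check error rate: {error_rate:.0%}")
```

In a real engagement the spot-check is a manual review of ~50 records against ground truth; the script only tells you where to look.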
Data infrastructure maturity
| Capability | Ad Hoc | Basic | Systematic | Advanced |
|---|---|---|---|---|
| Data warehouse | No central store | Exists but siloed | Integrated, queryable | Real-time, well-governed |
| Data pipelines | Manual ETL | Scheduled jobs | Event-driven, monitored | Self-healing, documented |
| Feature store | None | Per-project scripts | Shared feature library | Versioned, automated |
| Data governance | Informal | Basic policies | Enforced standards | Automated compliance |
Dimension 2: Team Readiness
Skill inventory
Map your current team:
| Role | Count | Proficiency | Notes |
|---|---|---|---|
| Data engineers | ? | ? | Can they build reliable pipelines? |
| Data scientists | ? | ? | Have they shipped to production? |
| ML engineers | ? | ? | Can they operationalize models? |
| Domain experts | ? | ? | Can they label data and validate outputs? |
The most common gap: Companies have data analysts but no data engineers. Data analysts answer questions about the past. Data engineers build the infrastructure that makes AI possible.
Critical skills assessment
For each person on your data/AI team, assess:
- Can they write production code (not just notebooks)?
- Have they deployed a model to production before?
- Can they monitor a model in production and detect degradation?
- Do they understand the business domain well enough to spot bad model outputs?
Dimension 3: Process Readiness
Experiment-to-production pipeline
Map your current process for taking an AI experiment to production:
1. Idea → Approved experiment: Who approves? What criteria?
2. Data access: How long does it take an engineer to get the data they need?
3. Development: Where does development happen? What’s the standard environment?
4. Evaluation: How is model quality evaluated? By whom?
5. Deployment: What does deploying a model look like? How long does it take?
6. Monitoring: How do you know when a model starts performing badly?
7. Retraining: When and how are models retrained?
Most companies that have “AI” in production have no answers for steps 6 and 7 (monitoring and retraining).
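A minimal form of the monitoring step is a scheduled job comparing recent performance against the baseline measured at deployment. A sketch; the baseline, margin, and alerting rule are assumptions for illustration, not standards:

```python
from statistics import mean

# Alert when rolling accuracy on recent labeled traffic drops more than
# 5 points below the accuracy measured at deployment time. Both numbers
# here are illustrative assumptions.
BASELINE_ACCURACY = 0.91
ALERT_MARGIN = 0.05

def needs_attention(recent_outcomes: list[bool]) -> bool:
    """recent_outcomes: True where the model's prediction was correct."""
    recent_accuracy = mean(recent_outcomes)
    return recent_accuracy < BASELINE_ACCURACY - ALERT_MARGIN

# Example: 80% accuracy on the last 10 labeled predictions -> alert.
print(needs_attention([True] * 8 + [False] * 2))  # True
```

The hard part is not the comparison but getting labeled recent traffic at all; that dependency is exactly what the data-access question in step 2 surfaces.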
Decision-making process
- Who can authorize an AI initiative?
- Who decides whether a model is good enough to go to production?
- Who is accountable when an AI system makes a wrong decision?
Dimension 4: Infrastructure Readiness
Compute and deployment
| Infrastructure | Present? | Notes |
|---|---|---|
| GPU compute for training | ? | Cloud OK, doesn’t need to be on-prem |
| Model serving infrastructure | ? | Can you serve inference at scale? |
| A/B testing capability | ? | Can you test model versions against each other? |
| Model registry | ? | MLflow, W&B, or similar |
| CI/CD for ML | ? | Can models be deployed automatically? |
Security and compliance
- Are there regulatory constraints on which data can be used for training?
- Is there a data retention policy that limits training data availability?
- Are there audit requirements for AI decisions?
Scoring and next steps
Rate each dimension on a 1–5 scale:
| Dimension | Score (1–5) | Key Gaps |
|---|---|---|
| Data readiness | | |
| Team readiness | | |
| Process readiness | | |
| Infrastructure readiness | | |
Score interpretation:
- 1–2: Focus on foundations before any AI initiative beyond simple automation
- 3: Ready for systematic AI in specific, well-scoped domains
- 4–5: Ready for ambitious AI initiatives
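The interpretation bands are simple enough to script. One caveat in this sketch: gating the overall verdict on the *minimum* dimension score is my assumption (the weakest dimension constrains what you can ship), not a stated rule of the framework:

```python
def interpret(score: int) -> str:
    """Map a 1-5 dimension score to the readiness bands above."""
    if score <= 2:
        return "Focus on foundations before any AI initiative beyond simple automation"
    if score == 3:
        return "Ready for systematic AI in specific, well-scoped domains"
    return "Ready for ambitious AI initiatives"

# Assumption: the overall gate is the weakest dimension.
scores = {"data": 3, "team": 3, "process": 2, "infrastructure": 2}
print(interpret(min(scores.values())))
```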
The typical finding: Most companies score 2–3 on data and team readiness, 1–2 on process and infrastructure. This is the gap that explains why AI pilots don’t scale.
What to do with this
Use this assessment to:
- Identify the specific investments needed to advance AI maturity
- Prioritize AI use cases that are achievable at your current maturity level
- Build the business case for infrastructure investment
- Set realistic expectations with leadership about AI timelines
The detailed Build vs Buy Framework covers the next decision you’ll face after the readiness assessment.