An AI maturity model is a structured tool that helps you assess how ready your organization is to use AI effectively and sustainably.
Completing one gives you a benchmark, shows you where to focus, and helps you develop a roadmap towards maturity. It also builds shared language and understanding, and creates ownership of different maturity goals.
This article includes the following artefacts:
- AI Maturity Model: dimensions grouped by category where you can score your organization’s AI maturity from limited to best in class
- AI Maturity Evaluation Summary: an output which shows your score overall and by category, making strengths and weaknesses easy to visualize
- AI Maturity Development Canvas: a goal setter and project tracker for you to set AI maturity targets, link actions and set responsibilities
- Implementation guide: how to use and adapt the model within your organizational context
The benefits of using an AI Maturity Model to assess your organization include:
- Benchmarking: score where you are today and identify strengths and weaknesses
- What good looks like, where: AI is both relatively new and fast moving. It’s hard to know what should be happening across multiple functions and areas, and therefore harder to have a conversation about what to do
- Identify gaps and risks: where you have capability gaps or potential trust and governance weaknesses
- Create a development plan and tracker: identify which areas you’d like to invest in, what you’d like to score and how you’ll get there
We’ll take you through why and when to use an AI maturity model, how to use it to assess your organization, and how to treat it as a living document, as the landscape both outside and inside your company evolves.
Not every organization will score “Level 5” across the board. This model is designed to identify which dimensions matter most to your team and support your goals with a development canvas for your priorities.
We provide a full AI maturity model, evaluation scorecard and development canvas that you can evolve and adapt to your context.
The Hustle Badger AI Maturity Model
Why use an AI maturity model?
An AI maturity model creates transparent expectations of what AI skill level and behaviors look like across multiple teams, tasks and levels of the hierarchy.
It helps make AI planning concrete, turning it into an actionable plan the organization can benefit from. It can clarify where investment is needed in business infrastructure, capabilities and tools.
This is especially useful in a new area like AI where different skill levels and functions have different ideas and beliefs about what good looks like.
In addition, AI comes with customer and internal risk factors and requirements. Completing an AI maturity assessment exercise creates a benchmark for leadership around the organization’s readiness to take on AI tasks, its capability gaps, and its potential risk register when it comes to ethical AI and governance.
The benefits include:
- Shared language: enabling teams across different functions to talk about AI capability with clarity.
- Gap analysis: identifying where you’re strong, where you’re weak, and what’s missing
- Business case for needs: enthusiasm for the possible benefits of AI is translated into actionable investment areas, be it time, upskilling or infrastructure
- Capability mapping, hiring and upskilling plan: allows you to identify areas where you need to hire new profiles
- Onboarding: Explains to new joiners what is expected of them in terms of AI mindset, skills, and the organizational AI roadmap and culture
- Foundations: protocols, governance and infrastructure required to implement, scale and monitor AI while managing risk
We believe that defining levels of AI preparedness and competency is particularly useful because:
- Nascent area: AI is new and fast moving. There’s significant variance in how people are deploying it, what good looks like from a skills and governance perspective, and where people should invest to succeed. There’s not yet a playbook, and thus companies have to invest in their own.
- Internal audit: there can be confusion within companies about why AI is not being deployed, when there may be very rational reasons (data, governance, skills, budget). This type of exercise can clarify where the blockers are and help the organization overcome them.
The model can help you avoid other pitfalls when it comes to adopting and deploying AI, such as:
- Passive resistance: Cultural issues, which need persuasion, upskilling, hiring and performance improvement plans.
- Lack of centralised plans for AI: Fragmented, ad hoc initiatives across the org, with no centre of gravity
- Lack of focus or effectiveness: Pilots or MVPs which stall at hurdles, or go live with no value tracking; or members of the organization not making time to upskill
- Insufficient infrastructure: Insufficient investments in or understanding of the required infrastructure to scale and iterate
- Data hygiene and accessibility: Issues with data quality, infrastructure or successful collection and deployment of key data
- Governance and trust risks: Limited understanding of ethics and risks results in moving faster than governance and oversight, creating hazards
Our AI maturity assessment is part of a suite of management tools which can meaningfully improve how you set strategies, manage teams and fill capability gaps:
Build with AI Cohort
Limitations and downsides
As with any other tool or template, the value of this AI maturity model depends on your business context and the internal culture and appetite to invest in this exercise.
Watch out for the following downsides:
- Heavy lift: investing in increasing AI capability across an organization requires significant cultural, competency, tooling and capability transformation. If you’re not ready to do that, don’t move forward
- Fear factor: in less AI mature organizations, rolling out this type of exercise can create layoff fears or pushback. Prepare a comms plan and change management process in advance.
- Increased bureaucracy: The purpose of the model is to create clear benchmarks and give guidance. Increased clarity should be helpful, but it can also be restrictive. Watch for creeping rigidity and counter it with flexibility.
- Lifecycle and maintenance: AI is a fast moving space, and maturity models and spectrums can quickly go out of date, or miss new functionality. It’s important to keep evolving the model, and invest in maintenance and iteration, plus re-scoring exercises
- May require budget: you may find as you go through the model that investment is required to increase capability: in people, engineering processes, data or systems.
Used well, the benefits of a model like this should outweigh the people risks and investments, but it’s important to sign up to these in advance.
When to use an AI maturity model
Using an AI maturity model is business context dependent.
But broadly, the best time to use an AI maturity model is when:
- The organization is ready to engage with AI: when there’s enthusiasm or interest in developing AI capabilities, sponsored by the C-suite, either for cost or competitive advantage
- You want to set a baseline: you want to understand where you are today, versus what good looks like
- To run alignment sessions: across different teams in the organization
- You need a structured plan: which capabilities to develop and where
- As part of OKR or strategic goal setting: setting goals across functions
Before you move forward, you’ll want to review this model and its implications with key stakeholders, like the C-suite and the HR function. It’s key that they are informed, and are advocates and sponsors.
What is an AI maturity model?
An AI maturity model is a tool for assessing an organization’s AI maturity:
- Cultural acceptance and interest in AI: how all levels of the organization approach and interact with AI
- Skillset and capabilities to work well with AI: technical capability and new skillsets, such as prompting and working with agents
- Infrastructure to be able to deploy AI successfully: data infrastructure, experimentation and deployment capability, plus governance and trust protocols
At its simplest it’s a big grid of categories and dimensions, scored from low to high, dependent on the behaviors or competencies exhibited.
Our model is split into six categories, with 20+ dimensions in total. Each dimension is further divided into 5 levels from low to high, where specific behaviors of groups at those levels are described.
Categories and dimensions of AI maturity
We’ve described different dimensions of AI maturity, and grouped them into categories.
The six categories and 20+ dimensions we score AI maturity across are:
Strategy & Leadership
The strategy and leadership category assesses whether leadership are building their own capabilities, incorporating AI into strategic goals, and making AI investment a priority.
The specific dimensions we focus on are:
- Leadership on AI: the levels of AI literacy and competency within the exec suite, and how they lead from the top on AI internally, in terms of direction and culture
- Strategy & Budgeting: whether AI is included in strategy, tied to business goals and company ROI, as well as appropriately funded
People & Culture
The people and culture category is an assessment of whether people are confident, capable, and structured to deliver AI work reliably.
The dimensions we review are:
- AI Culture & Talent: the organization’s mental model when it comes to AI
- AI Learning & Enablement: how people upskill and the infrastructure that exists to support them
Using AI
The Using AI category assesses how AI shows up in day-to-day work: from prompt-based use of tools like ChatGPT to features embedded in internal tools.
The dimensions are:
- Prompt-Based Productivity: Prompt literacy, knowledge accumulation and competency within an organization
- AI Agents & Automation: integration of AI into internal systems and software, plus the ability to create and deploy agents
Product Development & Delivery
The Product Development & Delivery category covers how AI maturity is demonstrated in product development and product innovation.
The dimensions are:
- AI in Product Development: Inclusion of AI into product features and strategies
- Product Value from AI: AI features aren’t just delivered to customers, but real, measurable user value is derived from them
Data Readiness
This category is all about how underlying data foundations enable and support AI value delivery.
We review:
- Data Collection, Accessibility and Readiness for AI: how data is collected and cleaned to create a reliable data foundation; how data is served and delivered; where it is stored; and how it can be utilized for AI
Governance & Trust
This category is all about reducing risk, creating protocols, and ensuring ethical use and deployment of AI.
- AI Governance, Risk & Compliance: protocols, risk registers and oversight for AI use
- AI UX & Trust Design: consideration of the end user experience when interacting with AI products, ethical UX
Scoring levels
Each dimension has five levels:
- Limited: the level of AI knowledge and capability is low
- Reactive: the organization reacts to AI challenges on an ad hoc basis
- Developing: the organization is beginning to develop systemic AI capability but has a way to go
- Embedded: AI knowledge and capability is systematized and strong
- Best in-class: the organization is fully AI native
The model scores these from 1-5 (1= Limited, 5= Best in-class) and each level includes specific, observable behaviors.
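If you ever maintain the model outside a spreadsheet, the 1-5 scale is simple to represent as data. A minimal Python sketch of the score-to-level mapping (the level names come from the list above; everything else is illustrative):

```python
# The five maturity levels described above, keyed by their 1-5 score.
LEVELS = {
    1: "Limited",
    2: "Reactive",
    3: "Developing",
    4: "Embedded",
    5: "Best in-class",
}

def level_name(score: int) -> str:
    """Translate a 1-5 dimension score into its maturity level name."""
    if score not in LEVELS:
        raise ValueError(f"score must be between 1 and 5, got {score}")
    return LEVELS[score]

print(level_name(2))  # Reactive
```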
For example, for the dimension assessing Leadership on AI, at different levels, leaders might demonstrate these differing behaviors:
1 – Limited
“Leaders lack awareness of AI’s capabilities and risks. They avoid using AI tools personally and express skepticism about hype publicly. There is no messaging around AI internally, and teams are neither instructed on the company’s AI position nor encouraged to explore.”
Whereas at the highest level they might demonstrate these behaviors:
5 – Best in-class
“Execs treat AI fluency as cultural infrastructure. Leaders champion initiatives like ‘AI-first teams’, internal AI showcases, and measurable AI adoption goals. They drive AI capability building for managers and ensure leadership pipelines prioritize AI fluency. AI-first hiring as a practice is mandated.”
We’ve used similar language to that used in competency frameworks to describe the systems and behaviors at every level of maturity.
That means that the descriptions of maturity levels in our assessment are phrased to incorporate:
- Behaviors: how people act and behave when it comes to AI topics
- Skills & capabilities: hard and soft skills and distinct, technical capabilities
- Infrastructure: the systems, structures and software available to support AI maturity
Here’s an example:
Category: Using AI
Dimension: AI Agents & Automation
Level: 5=Best in-class
Level description:
“Agents operate autonomously across workflows. Teams version, deploy, and monitor agents like microservices. Internal tools include intelligent suggestions, triage, and routing. AI improves accuracy, reduces latency, and triggers next steps automatically. Feedback loops exist for retraining and optimization.”
Adapting the model
You may want to adapt this AI maturity model to your company’s specific context.
Perhaps you want to see AI specific behaviors from a certain function, or you want to focus more on an area like ethics and governance due to your specific industry context. Perhaps a new AI breakthrough has occurred, requiring new focus or new capabilities and you want to reflect it in your maturity planning.
In order to adapt it, follow these steps:
- Define dimensions
A dimension is an aspect of a category that you want to assess.
For example, if the category is Strategy & Leadership, a dimension of that category you might want to assess could be AI Budgeting and Investment, since if leaders don’t make space for AI costs as part of their business planning, then it can’t truly be said to be part of their strategy.
Dimensions should be:
- Comprehensive: cover all aspects of displayed maturity
- Discrete: shouldn’t overlap with other dimensions
- Easily understood: the reader should grasp what is meant intuitively
- Key: an aspect of the category which you believe to be critical to fulfilling that goal
That said, your set of dimensions doesn’t need to be exhaustive. You can delete components as well as adapt or add them. It’s all about what you believe is critical for your organization to develop, depending on your strategic goals and risk appetite. You may well want to focus on fewer dimensions rather than more in order to make change effective.
- Describe the skills and behaviors by level
Your primary goals here are to ensure the reader intuitively understands what this dimension level looks like, and to start a conversation across the organization about what’s required.
Therefore don’t sweat this bit too much: try to write a tight description of what you’d like to achieve, but know that it will be iterated and edited by the collective to get to an agreement.
- Adapt as AI iterates
Given the pace of AI development, and the evolving landscape as new use cases are identified, it’s likely that you will have to adapt the model for new scenarios.
Set a regular cadence for checking that your AI maturity model is still relevant: via an internal council or dedicated review time, and adapt as necessary.
How to use the model
The purpose of this model is to:
- Provide a baseline: help you understand where you are today
- Set and track goals: decide where you’d like to be tomorrow
- Be used iteratively to score progress: help you set goals, then score your progress in cycles to see how far you’ve come
How to score the model
Create your own copy of the model and save it. On the first tab, AI Maturity Model, where the categories, dimensions and levels are described, select a score for your organization in the column marked ‘Company Score’.
Select your score from the drop down on the AI Maturity Model tab
Once you do this, the second tab, Evaluation summary, will auto-populate with your scores to give you a summary view of strengths and weaknesses, plus collated scores across categories.
Scores will auto-populate on the evaluation tab
You can then identify categories or dimensions where you are strong, and areas where you’d like to improve.
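If you move the model out of a spreadsheet, the evaluation summary’s collation is easy to reproduce. A sketch in Python, assuming dimension scores keyed by (category, dimension) — the category and dimension names follow the model above, but the scores are invented for illustration:

```python
from collections import defaultdict

# Dimension scores keyed by (category, dimension); values are 1-5.
# These example scores are illustrative, not recommendations.
scores = {
    ("Strategy & Leadership", "Leadership on AI"): 2,
    ("Strategy & Leadership", "Strategy & Budgeting"): 3,
    ("Using AI", "Prompt-Based Productivity"): 4,
    ("Using AI", "AI Agents & Automation"): 2,
}

def category_summary(scores: dict) -> dict:
    """Collate dimension scores into an average score per category."""
    by_category = defaultdict(list)
    for (category, _dimension), score in scores.items():
        by_category[category].append(score)
    return {cat: round(sum(vals) / len(vals), 1) for cat, vals in by_category.items()}

print(category_summary(scores))
# {'Strategy & Leadership': 2.5, 'Using AI': 3.0}
```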
It’s important to understand that it’s not necessary to be best in-class in every area.
For example, your context might lend itself better to focusing on internal AI adoption than pushing AI products out to users.
This might mean that you intentionally continue to score low in AI in product development. Your goal is to benchmark, and then decide which dimensions it makes sense for your organization to develop – making this part of a concrete AI maturity plan, and merging it into company strategy and goals.
How to use the development canvas
Once you’ve decided which dimensions are important, move to the AI Maturity Development Canvas tab.
Select the dimensions you want to focus on and the target level you want to get to. The behavioral definitions will auto-populate in the columns. Next, input who is responsible for improving the level, what actions they will take and by when they need to deliver.
Populate the marked columns with reasons, actions, responsible and deadline
How to do an AI maturity assessment
An AI maturity assessment is best done collaboratively with clear communication at every stage of the process.
The organization should own the model and the assessment output to get the best results. Passive resistance or ‘not another thing’ are very real risks, which make it critical to engage the business early on.
Set context
As with any major business focus area, it’s key to explain why you’re doing this, and what outcomes you hope to achieve.
Most leaders seek to do an AI maturity assessment in order to:
- Maintain business competitiveness: via speed to market, innovation, or internal efficiency
- Gain greater understanding of a new technology: to know what they don’t know, in case use cases present themselves
It’s important to be clear on your objectives, and the place this should have in your company priorities.
The best place to do this is a company all-hands, so that the whole organization hears the same messaging and clearly understands what is coming next. You can back this up with an internal memo.
Given very real fears and concerns about AI in the market, you should additionally come prepared to respond to questions about changing roles, hiring, job cuts, or ethical and governance concerns. Remember to leave plenty of time for Q&A.
Show the model
The next step is to share the model and get feedback from the organization. You can do this in a number of ways:
- Form: Host the model centrally and collect form based feedback
- Working group: Create a cross-functional working group with delegates from every department and level of the hierarchy to own the model on behalf of the organization, manage communications and act as ambassadors
- HR reviews: Work with HR to run formal review sessions per department, creating a safe space for people to raise topics
Techniques like working groups and HR-led sessions take longer, but will get you the best feedback and allow people to feel heard.
If you’re not rolling out the model across the company, you can also use it within your team, running an internal workshop to achieve the same outcome.
Iterate with stakeholders
After the feedback round, it’s likely that you’ll need to adapt the model. Once this is done, you’ll want to do several rounds of iteration with the company to get to a version that everyone has signed up to.
The best ways to do this:
- Cross functional working group: If you’ve set up a cross functional working group, leverage this community to adapt the model, and share next steps in their departments as model champions
- Company wide model run through: display the iterated model in an all hands, explain the additions, and make sure it’s publicly accessible afterwards
- Live workshop: [If in a team or smaller group] iterate the model live in a workshop, and share back polished version after the session
Feedback and iteration sessions serve a positive purpose:
- Create buy in: transparency will alleviate concerns, while also actively creating some champions – either via a formal working group process, or informally, as AI enthusiasts within the company feel empowered to speak up
- Explain context: make it clear why you are including or excluding certain dimensions, and which really matter to your business context
- Make the scoring round faster: by the time you get round to scoring, everyone should understand what the model is and how to work with it
The key thing here is to set a deadline, and formally ‘lock’ the model after that date. You don’t want this to be an infinite feedback exercise; at some point you just want to get started.
Score the model
You can score the model in a few different ways:
- By survey: every individual in the company can score the dimensions, leave comments, and scores are aggregated at the end
- By department: departments meet, discuss the scores, and submit a combined score and notes at the end
Both are opportunities to collect rich feedback and get insight into what’s happening on the ground.
A top-down exercise, where the exec team scores alone, is not recommended if you want full organizational collaboration, but it can be very effective:
- As a preparatory exercise: to understand where you see gaps as an exec, how the exec wants to lead, and where you might want to steer company focus
- As a comparison: where the exec team can compare their perceptions to departmental outcomes
Share the results
Play back the total scores to the company, and provide a summary analysis of any differences in findings between departments, plus any anonymized or aggregated qualitative feedback on levels.
It’s important for scoring and findings to be transparent, in order to set the stage for what comes next.
A mix of all-hands and departmental deep dives led by function owners is best.
Set AI Maturity Goals
Once you have your scoring, the next step is to set goals, and timeframes by which you want to achieve those objectives.
In order to set goals:
- Analyze critical gaps: Identify which dimensions you want to improve by doing a gap analysis, selecting dimensions where lack of AI capability is slowing down the organization or exposing you to competitors. If one dimension is very critical, consider breaking it down into sub-topics.
- Set priorities: select a short list (3-5, maximum) dimensions to improve. Try to identify dimensions where maturity will unlock progress elsewhere. For example, if your organization is weak on prompting, you should start with prompt capabilities before moving onto agentic workflows.
- Set the benchmark you want to achieve: if you’re a 2, perhaps you’d like to become a 3 in the next xx months
- Set the timeframe: dependent on your runway, goals can be more or less aggressive. It’s unlikely that you can jump from Reactive to Embedded in 12 weeks without serious focus.
- Identify what the next step looks like: the skills and behaviors evidenced at the next benchmark level
- Plan to get there: break down the skills and behaviors into actions, be they training requirements, workflow iterations or cultural shifts. Consider how you will measure progress.
- Assign ownership and reporting responsibilities: set up a RACI, and assign a project owner who owns the KPI, is empowered to deliver progress, and who routinely reports on progress
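The gap-analysis and shortlisting steps above can be sketched in a few lines of Python. This assumes current and target scores per dimension; the figures below are invented for illustration:

```python
def prioritize(current: dict, target: dict, max_focus: int = 5) -> list:
    """Rank dimensions by gap to target level and keep a short list
    (3-5 maximum, as recommended above)."""
    gaps = {dim: target[dim] - current[dim] for dim in target if dim in current}
    ranked = sorted(gaps.items(), key=lambda item: item[1], reverse=True)
    return [dim for dim, gap in ranked if gap > 0][:max_focus]

# Illustrative scores only.
current = {"Prompt-Based Productivity": 2, "AI Agents & Automation": 1,
           "Leadership on AI": 3}
target = {"Prompt-Based Productivity": 4, "AI Agents & Automation": 2,
          "Leadership on AI": 3}
print(prioritize(current, target))
# ['Prompt-Based Productivity', 'AI Agents & Automation']
```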
To help structure this process, use the AI Maturity Development Canvas, set up for you to populate with these inputs and use as a tracker.
Assess routinely
To be most effective, you should treat your AI maturity model as a living document, and a tool to have conversations with your team, rather than a one and done survey form:
- Project tracking: Use the development canvas as a project tracker and check in on status routinely
- Routine scoring: Re-assess the organization on a defined cadence, ideally in line with goal setting. For example, if you’re an OKR-led organization, set AI maturity OKRs and re-score after quarter end to assess achievement
- Create shared AI maturity rituals: such as model and canvas reviews and capability retros where errors occur
- Continuous adaptation: Update dimensions as AI develops and new capabilities emerge
- Collective responsibility: Shift ownership of the model to the organization as AI adoption increases and enthusiasm embeds
Overall, an AI maturity model isn’t a badge-collection exercise, but a way to stay aligned and accelerate AI adoption and value within your organization.
Signs you’re using the model well
Some indications the model is effective include:
- Increased understanding of capability and goals: where people are today, versus where they need to be tomorrow
- Honest conversations about capability: within teams, within the organization, and when it comes to hiring and budgeting decisions
- Alignment between strategic aspirations and enablement: recognizing when goals are not currently achievable, and making proper investment (time, money, education) in those goals
- Clear pathway towards AI maturity: the organization has a plan and understands what it has to do to get there
- Influence on OKRs, hiring plans and platform priorities: findings are being deployed
- Well understood trade offs due to identified maturity levels: for example, delaying deployment of AI features due to low governance scores
Wrap up on AI maturity model and assessment
An AI maturity model is a tool to kick off conversations and actions with the aim of increasing AI capability across multiple dimensions and levels of the organization.
By providing a structure and a benchmark of current capability it can help you navigate from today to tomorrow. Given AI’s impact on job functions and day to day tasks it can help you stay current, adapt as required and not be left behind.
As with all tools, it will only be as effective as the engagement with it within the company. It’s critical to:
- Explain context and goals: why you’re doing this, and what the benefits will be for everyone involved
- Adapt and co-create: the model should be collectively owned by the organization in order to be a success
- Make scoring and actions as clear as possible: eliminate grey areas and confusion by making desired skills and behaviors clear.
Used well, an AI maturity model is designed to be deleted: once AI-mature behaviors and capabilities are embedded, the organization is well on its way to being AI-native.
Hustle Badger Resources
[Courses]
[Template]
[Articles]
FAQs
What is an AI maturity model?
An AI maturity model is a structured tool that assesses an organization’s AI behaviors, skills and infrastructure in order to determine how they work with AI now, and how they might want to evolve capabilities in the future. It’s primarily a project management and conversation starter, designed to get an organization thinking about different AI dimensions and behaviors.
Where can I find an AI maturity model template?
We’ve created an AI maturity model template, evaluation visual, and an AI maturity development canvas here. Simply make your own copy, adapt, score and use as required.
Can we adapt your AI maturity model?
Yes. Add, delete or change dimensions to reflect your strategy. You can also edit the competency and behavioral definitions at each level.
Do we need to be technical to use this AI maturity model?
No. The model describes behavior and capability, not technical architecture.
Can this be used for performance reviews?
While you can adapt behaviors and skills described in the model to be reflected in competency frameworks, this model is designed to set an organizational benchmark, and roadmap towards maturity, rather than individual development goals.