KPMG Maturity Gap
Companies are betting on agentic AI, yet tech maturity lags; by 2026, execution discipline will become the real differentiator globally.
The KPMG Maturity Gap: why 2026 won’t reward ambition
The most important statistic in KPMG’s Global Tech Report 2026 is not about AI. It is about honesty.
Half of tech executives expect their organisations to reach the top stage of technology maturity by the end of 2026. Only 11% say they are there today.
That gap is not a rounding error. It is the difference between:
Boards funding a future that won’t arrive, and
Operators inheriting a tech estate that becomes harder to run every quarter.
And it is happening at the same time as agentic AI moves from “interesting” to “inevitable”: 88% say they are already investing in building agentic AI into their systems.
So here is the practical question for 2026: how do you stop AI spend becoming a new layer of technical debt?
The uncomfortable diagnosis: maturity is a systems problem
KPMG breaks maturity into stages across ten technology areas and finds a consistent pattern: most initiatives are funded and supported, but from there they are either merely "on track" or "hitting blocks".
Those blocks are not mysterious. The report names three repeat offenders:
Skills shortages: 53% still lack the talent needed to bring transformation plans to life.
Technical debt: 63% say the cost of fixing it is holding back progress on new initiatives.
Risky trade-offs: 69% admit they compromised in areas like security, scalability, or data standardisation to move faster and cheaper.
If you’re reading this and thinking “we can out-invest the problem”, KPMG is blunt: ROI is not linear, and higher investment does not guarantee better returns. Outcomes are shaped by readiness, governance, agility, and execution discipline.
The hidden story: companies don’t fail because they chose the wrong tools; they fail because they can’t scale decisions.
Agentic AI is a workforce decision, not a software decision
Agentic AI is being framed as a capability (“agents that do tasks”). In practice, it is a change in operating model.
KPMG reports that tech executives expect the "digital workforce" to rise significantly over the next two years, to a predicted 36% of core technology team capacity by 2027.
That means two things you can plan around now:
1) You will need an HR system for agents
The report explicitly points to the need for a model/agent registry, and names "managing AI agents" as an emerging key skill.
If you don’t build a registry, you get the corporate nightmare version of agentic AI:
duplicate agents doing the same work,
unknown permissions,
no owner,
no rollback plan,
and a CFO asking why costs went up but cycle time didn’t move.
2) Your biggest risk is not hallucination; it is uncontrolled automation
Agentic AI will touch workflows that have always been protected by human friction: approvals, reconciliations, customer communications, and operational overrides. Once agents can act, your control surface must shift from “after-the-fact auditing” to pre-deployment gates and run-time containment.
If you want a simple north star for agent governance in 2026, borrow language from reliability engineering:
accuracy (does it do the right thing),
latency (how fast),
containment (how often it stays within guardrails),
escalation quality (how well it hands off to humans).
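As a minimal sketch, those four signals can be computed from agent run logs. The field names, the sample data, and the definition of "escalation quality" below are illustrative assumptions, not metrics defined in the report:

```python
from dataclasses import dataclass

@dataclass
class AgentRun:
    correct: bool               # did the agent produce the right outcome?
    latency_ms: int             # wall-clock time for the run
    stayed_in_guardrails: bool  # no policy or tool violations
    escalated: bool             # handed off to a human
    escalation_useful: bool     # human judged the handoff as warranted

def scorecard(runs: list[AgentRun]) -> dict[str, float]:
    n = len(runs)
    escalations = [r for r in runs if r.escalated]
    return {
        "accuracy": sum(r.correct for r in runs) / n,
        "p50_latency_ms": sorted(r.latency_ms for r in runs)[n // 2],
        "containment": sum(r.stayed_in_guardrails for r in runs) / n,
        "escalation_quality": (
            sum(r.escalation_useful for r in escalations) / len(escalations)
            if escalations else 1.0
        ),
    }

runs = [
    AgentRun(True, 420, True, False, False),
    AgentRun(True, 900, True, True, True),
    AgentRun(False, 300, False, True, False),
    AgentRun(True, 510, True, False, False),
]
print(scorecard(runs))
```

The point of the scorecard is that containment and escalation quality sit next to accuracy as first-class numbers, not as an afterthought in an audit.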
Why most AI programmes can’t “prove ROI” (and what to do instead)
KPMG’s AI ROI section is the part most executives will recognise as real life:
74% say AI use cases provide business value,
but only 24% say they achieve ROI across multiple use cases.
58% say traditional ROI measures are insufficient for AI, and
55% struggle to demonstrate and communicate the value of AI to stakeholders/shareholders.
This is not because leaders are incompetent. It is because AI value arrives in three forms, and finance teams usually track only one:
Hard value: revenue, headcount reduction, unit cost down.
Option value: faster experimentation, shorter time-to-decision, new product surfaces.
Risk value: fraud reduction, fewer incidents, better compliance posture (often the biggest value, and the least “visible”).
The fix: treat AI as a portfolio with explicit value types
A practical move that works in boardrooms: classify every AI initiative as one primary value type (hard/option/risk), then assign metrics that match the type.
Examples that tend to land well:
Option value metrics
time-to-first-usable-output (minutes/hours),
cycle time for a core process (days → hours),
“ideas tested per quarter” (throughput),
cost-to-learn (what did it cost to invalidate a hypothesis).
Risk value metrics
incident rate reduction,
fraud loss reduction,
audit findings reduced,
data access violations prevented,
model/agent containment rate.
Then, only after you’ve stabilised those, push hard value targets.
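One way to make the "one primary value type, matching metrics" rule enforceable is a simple lint over the portfolio. The initiative names and metric menus below are hypothetical examples, not a prescribed taxonomy:

```python
# Each initiative declares exactly one primary value type, and its metrics
# must come from that type's menu. All names here are illustrative.
METRIC_MENU = {
    "hard": {"revenue_uplift", "unit_cost_reduction", "headcount_redeployed"},
    "option": {"time_to_first_output", "cycle_time",
               "ideas_tested_per_quarter", "cost_to_learn"},
    "risk": {"incident_rate_reduction", "fraud_loss_reduction",
             "audit_findings_reduced", "containment_rate"},
}

def validate(initiative: dict) -> list[str]:
    """Return the metrics that don't match the declared value type."""
    allowed = METRIC_MENU[initiative["value_type"]]
    return [m for m in initiative["metrics"] if m not in allowed]

portfolio = [
    {"name": "invoice-matching-agent", "value_type": "risk",
     "metrics": ["fraud_loss_reduction", "containment_rate"]},
    {"name": "sales-copilot", "value_type": "option",
     "metrics": ["cycle_time", "revenue_uplift"]},  # hard metric on an option bet
]

for item in portfolio:
    if bad := validate(item):
        print(f"{item['name']}: metrics {bad} don't match '{item['value_type']}'")
```

The check catches the most common boardroom failure mode: pinning a hard-value metric (revenue) on an initiative that was funded as an option bet, then declaring it a failure.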
This aligns with KPMG’s own observation that ROI has “zones”: quick wins early, a slowdown as integration/tech debt bites, and acceleration later once foundations improve.
The “disconnected AI projects” trap is already here
KPMG found 32% of respondents have too many disconnected AI projects and teams with limited coordination or shared governance.
This is the modern version of shadow IT—except now the shadow can act.
A simple governance design that avoids bureaucracy (and actually scales):
The Minimum Viable AI Governance Stack (MV-AIGS)
Single intake for AI/agent proposals
one-page template: user, workflow, data, permissions, success metric, owner.
Central registry (models + agents + prompts + policies)
versioned assets, owner, dependencies, and evaluation results.
Pre-deployment evaluation harness
test on real tasks, plus adversarial checks where relevant.
Shadow mode first
run agent alongside humans, measure deltas, don’t allow write-access until stable.
Runtime controls
scoped permissions, rate limits, tool allowlists, and escalation rules.
Rollback plan for every “write-capable” agent
if your agent can change customer records or trigger payments, it needs a kill switch.
If you do only one thing this quarter: build the registry. It turns “AI adoption” into an asset you can manage.
Technical debt is the silent killer of AI scale
The report makes an important point that many firms still underestimate: the same organisations saying tech debt blocks new investment often still expect big leaps in maturity.
Translation: they are planning to sprint with a broken ankle.
A hands-on approach that works without turning into a multi-year “modernisation programme”:
The Debt-to-Growth Rebalance (90 days)
Weeks 1–2: quantify debt in business language
top 20 systems by operational drag (incidents, lead times, integration pain),
cost of delay (what can’t you launch because of this system),
risk exposure (security gaps, compliance load).
Weeks 3–6: pick 3 “debt paydown” moves that unlock AI
standardise data definitions for one high-value domain,
build one reusable integration layer (APIs/events),
retire one low-value legacy workflow.
Weeks 7–12: create a “foundation release”
not a slide deck: a shipped capability that multiple teams can use.
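The weeks 1–2 triage lends itself to a simple weighted "operational drag" score so the three paydown moves target the systems that actually block AI work. The systems, fields, and weights below are illustrative assumptions, not a KPMG method:

```python
# Rank systems by operational drag: incidents, lead time, and launches
# blocked by the system. All numbers here are made up for illustration.
systems = [
    {"name": "legacy-billing", "incidents_90d": 14, "lead_time_days": 21, "blocked_launches": 3},
    {"name": "crm-core",       "incidents_90d": 4,  "lead_time_days": 6,  "blocked_launches": 1},
    {"name": "data-warehouse", "incidents_90d": 9,  "lead_time_days": 12, "blocked_launches": 4},
]

# Blocked launches weigh heaviest: they encode cost of delay directly.
WEIGHTS = {"incidents_90d": 1.0, "lead_time_days": 0.5, "blocked_launches": 5.0}

def drag_score(system: dict) -> float:
    return sum(WEIGHTS[k] * system[k] for k in WEIGHTS)

for s in sorted(systems, key=drag_score, reverse=True):
    print(f"{s['name']}: {drag_score(s):.1f}")
```

The exact weights matter less than the discipline: every system gets a number stated in business terms, so the debate moves from "whose system is worst" to "which three moves buy the most back".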
This matches what KPMG sees in high performers: fewer compromises on security, scalability, and data standards, and far fewer reports that the cost of fixing tech debt prevents new investment.
The 2026 playbook: turning ambition into predictable execution
If 2026 is the year your organisation claims it will become “fully mature”, your plan should read less like a vision and more like a delivery system.
Here’s a practical checklist, aligned to KPMG’s key recommendations, but written for people who have to make it real:
Run a maturity baseline and stop debating vibes
maturity per domain; “funded but blocked” areas become your intervention targets.
Centralise what must be centralised
investment prioritisation, architecture guardrails, value metrics.
Build the agent operating model
registry, owners, evaluations, permissions, incident response.
Measure AI with new KPIs
hard value + option value + risk value, not just ROI.
Kill projects faster
publish kill criteria upfront; treat fast failure as cost control.
Upskill for orchestration
your scarce talent is not “prompting”; it is workflow design, policy, data engineering, and change leadership.
The bottom line
KPMG’s numbers describe a predictable 2026: companies will buy more AI, announce bigger maturity goals, and still get stuck on the same constraints—skills, debt, governance, and measurement.
The winners won’t be the organisations with the most agents.
They will be the ones with:
one registry,
one set of rules,
one shared language for value, and
a disciplined habit of paying down what slows them down.
That is what “maturity” looks like when the Intelligence Age stops being a keynote and becomes operations.