Why the question of who owns AI performance separates programs that scale from those that stall

Last week, Gartner released their Q1 2026 AI Transformation Benchmark. The headline: 74% of enterprises still cannot connect AI activity to business outcomes.
$2.3 trillion in AI investment between 2024 and 2026. Three-quarters of it disconnected from measurable business value.
Boards are asking harder questions. Q1 earnings calls showed a pattern: investors want proof that AI moves KPIs, not just deployment metrics. "How many employees use the tool?" doesn't answer "What moved in the business?"
The pressure is shifting from deployment completion to performance demonstration. And most organizations are discovering they track performance religiously, but they don't integrate it.

THE PROBLEM: TRACKING VS INTEGRATING
Most AI transformation programs treat performance tracking as governance.
The pattern is familiar:
Week 1: Program launches. KPIs defined. Measurement framework established. Targets set. Dashboard configured. Everyone agrees on what success looks like.
Weeks 2-12: Weekly status reports generated. Dashboard updated every Monday morning. Metrics collected with religious discipline. Adoption tracked. Performance measured. Everything documented.
Week 12 Steering Committee: "Adoption is at 40% instead of our 60% target. Decision latency increased by 35%. Two high-value use cases showing no ROI. Training completion is 70% but proficiency assessment shows only 30% can actually use the tool effectively."
The Response: "Let's create a task force to investigate root causes. Add this to next month's agenda for deeper analysis. We'll develop a remediation plan and course-correct in Q3."
By the time governance catches the signal, the program has already drifted for twelve weeks. Early warning becomes late intervention.
Why this happens:
Performance tracking answers "what happened." It tells you the score. It documents the gap. It generates the report that goes into the slide deck that gets presented at the steering committee that schedules the task force that investigates the problem that appeared eight weeks ago.
But tracking alone doesn't change what happens next.
Organizations that govern episodically lose two to three months per correction cycle. Signal appears Week 4, gets captured in the dashboard Week 5, gets reviewed at the Week 8 steering committee, task force activated Week 9, response plan developed Week 10, response activated Week 12, impact measured Week 16.
Twelve weeks from signal to correction.
In AI transformation, where adoption velocity and behavioral shift determine value realization speed, twelve-week correction cycles guarantee the program will miss its value targets. The compounding effect of delayed corrections means small deviations in Month 2 become major value gaps in Month 6.
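To see why the compounding matters, run rough numbers. The figures below are purely illustrative, not benchmark data: a small adoption shortfall that appears in Month 2 and goes uncorrected widens the value gap every month it persists.

```python
# Illustrative only: hypothetical numbers, not data from any program.
monthly_value_target = 1_000_000   # value expected per month at full adoption
adoption_gap = 0.05                # 5% shortfall appearing in Month 2
monthly_drift = 0.05               # gap widens 5 points each uncorrected month

total_gap = 0.0
for month in range(2, 7):          # Months 2 through 6, no correction applied
    shortfall = adoption_gap + monthly_drift * (month - 2)
    total_gap += monthly_value_target * shortfall
    print(f"Month {month}: shortfall {shortfall:.0%}, cumulative value gap ${total_gap:,.0f}")
```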
The 74% track performance and wait for steering committees to respond.
The 26% integrate performance signals into program adjustments the moment deviation appears.
That's the difference.
THE SOLUTION: HOW CPI WORKS IN PRACTICE
Continuous Performance Integration (CPI) doesn't wait for meetings. It routes performance signals into program adjustments the moment deviation appears.
Not tracking. Integration.
THE ORIEN GLOBAL SERVICES CASE
Orien Global Services is a specialty financial services company headquartered in Toronto, Canada, with $4.2 billion in annual gross written premium and 12,000 employees across 18 countries. The company operates in commercial property and casualty insurance, specialty lines, and reinsurance.
Their Transform@Orien program was deploying SAP S/4HANA Finance, Procurement, SuccessFactors, and SAP Analytics Cloud with AI-augmented underwriting tools embedded in the platform. The AI layer would analyze historical claims data, market conditions, and risk factors to recommend underwriting decisions that managers would approve or override.
The Stakes:
$142 million program investment over 36 months.
Target value: $175 to $217 million annually at steady state.
Three waves: Canada/UK (Month 24), Europe/US (Month 30), Asia-Pacific (Month 36).
The Architecture:
A three-tier KPI structure with named business owners for every performance metric. CPI was configured to route signals from Strategic Alignment, Human Readiness, AI in Practice, and Learning & Recalibration into the 8 OCM capabilities.
Tier 1 Business KPIs: Board-level value metrics with C-suite owners.
Tier 2 OCM KPIs: Change program metrics with OCM workstream owners.
Tier 3 CPI Signals: Real-time governance bridge metrics.
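One way to picture that ownership structure is as plain data: every metric carries a tier, a named owner, and the pillar it reports from. A minimal sketch (field names are illustrative, not the actual Orien configuration):

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    tier: int      # 1 = Business KPI, 2 = OCM KPI, 3 = CPI signal
    owner: str     # a named accountable person, never a team
    pillar: str    # Strategic Alignment, Human Readiness, AI in Practice, Learning & Recalibration
    target: float
    unit: str

kpis = [
    KPI("Decision cycle time", 1, "Chief Underwriting Officer", "Strategic Alignment", 2.0, "days"),
    KPI("Manager enablement completion", 2, "Regional OCM Lead, Wave 1", "Human Readiness", 0.95, "ratio"),
]
```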
WEEK 6: THE SIGNAL
Strategic Alignment domain SA-6 (Decision Rights Exposure) detected a pattern during Wave 1 deployment (Canada and UK, 3,800 employees):
Decision latency had increased 40% from baseline.
Underwriting decisions that should have triggered AI recommendation acceptance or override within 2 business days were averaging closer to 3. The governance protocol required underwriting managers to approve AI-recommended risk ratings, but approval queues were backing up across both countries.
The root cause wasn't technical. The AI was generating recommendations correctly. The workflow was configured properly. The problem was behavioral: managers were treating AI recommendations as advisory inputs requiring deep review rather than decision triggers requiring fast approval or override with documentation.
The KPI: Decision cycle time from case submission to underwriting approval.
The Target: 2.0 days average.
The Actual: 2.8 days (Week 6).
The Deviation: 40% above target.
Tier 1 Business KPI owner: Sarah Devlin, Chief Underwriting Officer.
Tier 2 OCM KPI owner: Maria Chen, Regional OCM Lead for Wave 1.
Sarah Devlin owned the business outcome (underwriting velocity). Maria Chen owned the OCM intervention. CPI connected them.
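The detection math is trivial; the discipline is wiring it to named owners. A minimal sketch of what that connection could look like (field names and the notification step are hypothetical, not Orien's actual tooling):

```python
def deviation(actual: float, target: float) -> float:
    """Relative variance from target, e.g. (2.8 - 2.0) / 2.0 = 0.40."""
    return (actual - target) / target

signal = {
    "domain": "SA-6 Decision Rights Exposure",
    "kpi": "Decision cycle time (days)",
    "target": 2.0,
    "actual": 2.8,
}
signal["variance"] = deviation(signal["actual"], signal["target"])  # 0.40

# Route to both owners at once: the business owner of the outcome
# and the OCM owner of the intervention.
recipients = ["Chief Underwriting Officer", "Regional OCM Lead, Wave 1"]
if signal["variance"] >= 0.30:  # red threshold, described later in the piece
    print(f"CPI red signal: {signal['kpi']} at {signal['variance']:.0%} over target -> {recipients}")
```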
What CPI Did Not Do:
CPI did not wait for the monthly steering committee scheduled for Week 8.
CPI did not create a PowerPoint deck analyzing the problem.
CPI did not schedule a root cause analysis workshop.
CPI did not form a task force to investigate manager behavior.
What CPI Did:
The signal triggered a pre-configured response pathway routing through five OCM capabilities simultaneously, beginning Week 6, Day 2:
CHANGE STRATEGY & VALUE REALIZATION (OCM-01)
Portfolio prioritization adjusted. Two lower-priority training modules (SAP Analytics Cloud advanced reporting, SuccessFactors talent calibration) delayed by one week. Training capacity and schedule redirected to underwriting manager enablement on AI override governance and decision velocity expectations.
Decision owner: Omar Vasquez, Program Director.
Action taken: Week 6, Day 2.
Governance rationale: Training portfolio rebalanced to address the highest-risk adoption blocker before it compounds.
LEADERSHIP & SPONSORSHIP (OCM-02)
Escalation to Sarah Devlin (Chief Underwriting Officer). Executive coalition activated to reinforce manager accountability for decision velocity, not just decision accuracy. Helen Marsh (CEO, Program Sponsor) sent video message to all Wave 1 underwriting managers: "The AI is there to accelerate decisions, not create approval delays. Approve or override with documentation within 24 hours."
Decision owner: Helen Marsh, CEO (Program Sponsor).
Action taken: Week 6, Day 2.
Governance rationale: Manager behavior change requires executive reinforcement, not just OCM communication.
STAKEHOLDER EXPERIENCE & COMMUNICATIONS (OCM-03)
Targeted messaging deployed to underwriting managers in Canada and UK explaining the decision latency impact on Phase 1 value targets. Personalized communication showing each manager's individual queue depth, cycle time vs target, and how their approval velocity compared to peers.
Decision owner: Maria Chen, Regional OCM Lead.
Action taken: Week 6, Day 3.
Governance rationale: Transparency on individual performance creates peer accountability and urgency.
CHANGE MEASUREMENT, INSIGHT & SENSING (OCM-06)
Adoption analytics expanded to track manager-level approval velocity by individual underwriter, case complexity tier, and AI confidence score. Granular signal detection activated to identify which specific managers were creating the bottleneck and whether the issue was systemic (all managers slow) or concentrated (a few managers very slow).
Decision owner: Maria Chen.
Action taken: Week 6, Day 3.
Governance rationale: You cannot course-correct what you cannot measure at sufficient granularity.
CHANGE GOVERNANCE (OCM-07)
Wave 2 sequencing updated. Additional manager enablement sessions scheduled before Wave 2 launch (US, Germany, France, Netherlands) to prevent the same behavioral pattern from replicating in the next wave. Governance protocol adjusted: cases with AI confidence score >85% would auto-approve unless manager overrides within 24 hours with documented rationale.
Decision owner: Omar Vasquez.
Action taken: Week 6, Day 4.
Governance rationale: Fix the current wave and prevent recurrence in the next wave simultaneously.
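Of the five responses, the OCM-07 protocol change is the most mechanical, which makes it easy to express as a rule. A sketch of the adjusted approval logic, assuming a 0-to-1 confidence score and a simple override record (names and structure are hypothetical):

```python
from datetime import datetime, timedelta

AUTO_APPROVE_CONFIDENCE = 0.85
OVERRIDE_WINDOW = timedelta(hours=24)

def resolve_case(ai_confidence: float, submitted_at: datetime,
                 manager_override: dict | None, now: datetime) -> str:
    """Adjusted Wave 2 protocol: high-confidence cases auto-approve unless a
    manager overrides within 24 hours with a documented rationale."""
    if manager_override is not None:
        if not manager_override.get("rationale"):
            raise ValueError("Overrides require a documented rationale")
        return "overridden"
    if ai_confidence > AUTO_APPROVE_CONFIDENCE and now - submitted_at >= OVERRIDE_WINDOW:
        return "auto-approved"
    return "pending manager review"
```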
THE OUTCOME:
Week 8: Decision cycle time = 2.3 days (within tolerance, trending toward target).
Week 10: Decision cycle time = 2.0 days (on target).
Week 12: Decision cycle time = 1.8 days (beating target by 10%).
What would have been a multi-week cycle of steering committee investigation → task force formation → root cause analysis → solution design → steering committee approval → activation was instead compressed into days: signal detected Week 6, all five responses active by Day 4, the KPI back within tolerance by Week 8 and on target by Week 10.
The program stayed on Phase 1 value targets ($45M annual value by Month 30). Wave 2 launched four months later with the governance protocol already corrected and manager enablement curriculum already adjusted.
Five OCM capabilities activated. One signal. Continuous integration.
No task force required.
HOW PRACTITIONERS BUILD THIS
CPI isn't a tool you buy. It's an operating discipline you design. Here's how:
Step 1: Establish Performance Baseline with Named Owners
Define KPIs with business owners, not just OCM metrics. Every Tier 1 Business KPI must have a named, accountable business owner (Sarah Devlin owned decision cycle time because she owned underwriting velocity). Every Tier 2 OCM KPI must have a named OCM owner (Maria Chen owned manager enablement completion rate).
The accountability structure matters. OCM measures, business owns. When a signal appears, CPI routes it to the business owner and the OCM owner simultaneously. They respond together.
Set measurement frequency: weekly minimum for AI transformation programs. Monthly measurement is too slow to catch behavioral drift before it compounds.
Create signal thresholds: When does deviation trigger CPI response? Orien used 20% variance from target as the yellow threshold (monitor closely) and 30% variance as the red threshold (CPI activates immediately). The 40% decision latency deviation triggered immediate red-level response.
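Thresholds are worth writing down as executable rules rather than slide notes; ambiguity here is what sends deviations to next month's agenda. A minimal sketch of the yellow/red bands described above (the structure is illustrative):

```python
YELLOW = 0.20   # monitor closely
RED = 0.30      # CPI activates immediately

def classify(actual: float, target: float) -> str:
    """Classify a KPI reading against its target using variance bands."""
    variance = abs(actual - target) / target
    if variance >= RED:
        return "red"
    if variance >= YELLOW:
        return "yellow"
    return "green"

# Week 6: 2.8 days against a 2.0-day target -> 40% variance -> red
assert classify(2.8, 2.0) == "red"
```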
Step 2: Build Signal Detection Pathways
Map which signals come from which pillar domain:
Strategic Alignment signals: governance bottlenecks, decision latency, value hypothesis deviation, portfolio prioritization conflicts
Human Readiness signals: role confidence gaps, manager enablement completion, skill proficiency assessment scores, AI trust calibration failures
AI in Practice signals: adoption velocity below trajectory, override rates above threshold, workflow integration friction, performance tracking gaps
Learning & Recalibration signals: performance feedback loop breakdowns, calibration cycle delays, signal response effectiveness
Define where signals route: which OCM capabilities get activated when each signal type appears. Orien had pre-configured response pathways for the 15 most common signal patterns they anticipated during program design. The decision latency signal from SA-6 had a pre-defined pathway: route to OCM-01, OCM-02, OCM-03, OCM-06, and OCM-07 simultaneously.
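Pre-configured pathways reduce to a lookup table: signal pattern in, OCM capabilities out. A sketch of what the SA-6 entry could look like (only the pathway described in this case is shown; the other fourteen would be program-specific):

```python
RESPONSE_PATHWAYS = {
    # signal pattern -> OCM capabilities activated simultaneously
    "SA-6 decision latency": ["OCM-01", "OCM-02", "OCM-03", "OCM-06", "OCM-07"],
    # ... remaining pre-configured patterns defined during Phase 0 design
}

def activate(signal_pattern: str) -> list[str]:
    """Return the pre-configured response pathway, or flag an unmapped signal."""
    pathway = RESPONSE_PATHWAYS.get(signal_pattern)
    if pathway is None:
        return ["escalate: unmapped signal, route to governance for triage"]
    return pathway

print(activate("SA-6 decision latency"))
```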
Step 3: Create Recalibration Protocols
Pre-define response pathways so governance doesn't have to debate what to do when signals appear. The debate about "what should we do if decision latency increases?" happened during CPI design in Phase 0, not during program execution in Week 6.
Multi-capability activation rules: When a Strategic Alignment signal appears, which OCM capabilities activate and in what sequence? Orien's rule: All SA signals triggered OCM-01 (Change Strategy), OCM-02 (Leadership & Sponsorship), and OCM-06 (Change Measurement) as minimum response. Additional capabilities activated based on signal type.
Escalation thresholds: When does a signal escalate to the steering committee vs auto-correct through OCM capability response? Orien's escalation rule:
Tier 1 KPI variance >30% = immediate steering committee escalation
Tier 1 KPI variance 10-30% = OCM auto-response with steering committee notification
Tier 2 KPI variance = OCM manages, steering committee informed at next meeting
The Week 6 decision latency signal (40% variance) hit the immediate escalation threshold, which is why Helen Marsh (CEO) was activated on Day 2.
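The escalation rule fits in a few lines, and that is the point: nobody should be interpreting it in Week 6. A sketch of the rule as described above (tier handling simplified, labels illustrative):

```python
def escalation(tier: int, variance: float) -> str:
    """Orien's rule: Tier 1 over 30% escalates immediately; 10-30% auto-corrects
    with notification; Tier 2 stays with OCM and is reported at the next meeting."""
    if tier == 1:
        if variance > 0.30:
            return "immediate steering committee escalation"
        if variance >= 0.10:
            return "OCM auto-response, steering committee notified"
        return "monitor"
    return "OCM manages, steering committee informed at next meeting"

assert escalation(1, 0.40) == "immediate steering committee escalation"
```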
Step 4: Govern Through Signals, Not Meetings
Steering committees review decisions already made and validate response effectiveness; they don't debate signals still waiting for action.
The Week 8 Orien steering committee didn't debate whether to fix decision latency. That decision was made Week 6, Day 2 through the CPI response pathway. The steering committee reviewed:
The five OCM capability responses activated
The correction trajectory (Week 6: 2.8 days → Week 8: 2.3 days)
The Wave 2 governance protocol update
The training portfolio rebalancing decision
The committee validated the OCM response, approved the Wave 2 protocol change, and confirmed the correction was on track.
Meetings become governance validation, not signal detection.
Continuous beats episodic.

Does your AI program track performance or integrate it?
If a KPI deviates 30% from target today, how many days until the program changes course?
If the answer is "at the next steering committee," you're tracking. If the answer is "within 48 hours through pre-configured OCM response," you're integrating.

Deloitte's State of AI in the Enterprise 2026 (3,235 global leaders) delivers a clear finding: enterprises where senior leadership actively shapes AI governance achieve significantly greater business value than those delegating governance to technical teams alone. The accountability question is not a soft leadership issue; it is the primary determinant of whether AI delivers.
Deloitte - State of AI in the Enterprise 2026

Next edition: Strategic Alignment in practice. How to build the Value Hypothesis that connects AI action to KPI movement before deployment begins.
The framework practitioners use at Orien to ensure every AI capability has a measurable business case with named owners before any code gets written.
AI Change Loop™ Doctrine - complete framework → Available on Amazon https://lnkd.in/gVQ38tux
If this edition was useful, forward it to one colleague running an AI governance review.
Raheel Malik
AI Change Architect™
aichangeloop.com