For CPTOs

The 60-second board story.

Efficiency and effectiveness in one paragraph, evidence-backed, replayable. The structure that earns the next 4 minutes of board time.

The frontier test

Is your strategy updating as fast as the capability frontier?

Your product strategy got set when Sonnet 3.5 shipped. Sonnet 4.5 is here. The capability frontier moved twice this quarter alone.

If the strategy has not updated, you are strategizing against last quarter's stack. If it updates every cycle, you are rewriting the runway without conviction. The diagnostic catches the lag, names the misaligned function, and surfaces the cycle when conditions shifted.

The design test

Can your design org describe AI-native UX concretely, or is it wrapping chat on 2019 patterns?

“Generative surfaces with structured primitive blocks.” Concrete. “Tool calls embedded in conversation depth.” Concrete. “AI-powered insights.” Wrapping. The difference shows up in what design ships, how the board reads the screenshots, and whether the team can defend the choice in under 30 seconds.

The diagnostic measures whether design has crossed the wrapper boundary or is still describing the same UI in new fonts.

The quality-debt test

What percent of your code is AI-authored, and is eval coverage proportional?

60% of your code is AI-authored this quarter, up from 12% six months ago. Eval coverage that scales with AI-authorship is invisible-debt protection. Eval coverage that lags it is debt accruing without a line item.
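The proportionality test above reduces to a single ratio. A minimal sketch (the function name and the 1.0 threshold are my illustration, not the diagnostic's actual scoring):

```python
def eval_debt_ratio(ai_authored_pct: float, eval_coverage_pct: float) -> float:
    """Ratio of eval coverage to AI-authored code share.

    >= 1.0: coverage keeps pace with AI authorship.
    <  1.0: debt accruing without a line item.
    """
    if ai_authored_pct == 0:
        return float("inf")  # no AI-authored code, nothing to cover
    return eval_coverage_pct / ai_authored_pct

# 60% AI-authored, 45% eval coverage -> 0.75: coverage lags authorship
print(eval_debt_ratio(60, 45))  # 0.75
```

The same two numbers the board will ask for, read as one quotient instead of two bullet points.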

The board will ask the first question. The diagnostic answers the second before they finish asking.

The org-shape test

Do your PM:Eng:Design ratios still match the work?

Your ratios got tuned for the work you did in 2023. The work has changed twice since then.

When agents triple engineering throughput and prototyping tools collapse the design-to-prod gap, the ratio that worked then under-uses the budget now. The diagnostic surfaces it. Headcount decisions update from evidence, not from the last quarterly Notion doc.

The structure
[Efficiency claim, with number] · [Effectiveness claim, with number] · [Closer: what the gap is, and what next quarter does about it]

Three sentences, both numbers, one named tension. Under 60 seconds at a normal speaking cadence. Nothing about transformation programs, swim lanes, or initiative health.

What to say at each stage

The story changes at each stage. The structure doesn’t.

React (composite 0–40)

We shipped X features this quarter (up from Y last quarter). But the team-product gap widened: capability is ahead of what the product reflects. Closing it is next quarter's priority. Here's the named pattern, here's the move.

Augment (40–55)

Cycle time compressed 22% across all six functions, not just engineering. Customer signal-to-roadmap latency dropped from 6 weeks to 11 days. Both numbers move because we instrumented the system, not because we ran a transformation.

Orchestrate (55–70)

Inference cost per outcome down 18% as we hit Stage 3 across Strategy and Insights. Three of four bets validated this quarter; the fourth was killed in week 4 instead of Q3. The team is making fewer, better calls. The system tells me which.

Lead (70–85)

Per-feature unit economics expanded; gross margin trajectory holds at 10x usage. AI-native revenue is now 64% of total. Discovery beats delivery by 1.4x. The compounding loop is running. Each quarter the gap narrows automatically.

Compound (85–100)

Two of our three competitors took over 12 months to ship our Q1 release. NRR is 134%. Agent-originated acquisition is 28% of new logos. The moat is the diagnostic loop, not any single feature. We could open-source the roadmap and still win.
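The five bands above are a straight lookup on the composite score. A minimal sketch (band names and boundaries come from the headings above; assigning the exact cut points to the upper band is my assumption):

```python
def stage(composite: float) -> str:
    """Map a composite score (0-100) to its stage band."""
    bands = [
        (40, "React"),        # 0-40
        (55, "Augment"),      # 40-55
        (70, "Orchestrate"),  # 55-70
        (85, "Lead"),         # 70-85
        (100, "Compound"),    # 85-100
    ]
    for upper, name in bands:
        if composite < upper:
            return name
    return "Compound"  # composite == 100

print(stage(62))  # Orchestrate
```

One number in, one story template out: the band name tells you which of the five scripts above to run.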

The evidence chain

Every number traces to a dimension. No one is asked to trust you.

Each claim in the worked examples maps to a specific DAC dimension and signal. When the board asks "where does that number come from?", the answer is one click into the audit trail.

Cycle time compression → Trajectory chart, Operations function score over time
Team-product gap (capability vs output) → Cross-framework tension narrative (Team score - Product score)
Customer signal-to-roadmap latency → Feedback Loop Quality dimension + cross-source signal lineage
Inference cost per outcome → Cost Per Outcome dimension + Inference Economics dimension
Bet validation rate → Decision Quality dimension + named pattern transitions
AI-native revenue mix → Product Assessment composite + revenue-by-feature instrumentation
Discovery vs delivery cadence → Research & Discovery dimension vs Delivery Velocity dimension
Agent-originated acquisition → GTM function · Adoption & Expansion dimension · provider-tagged signal

If you also coach the team: Rituals that compound. If you score yourself first: For senior PMs.

Run this template with your numbers.

Get your read in 2 minutes. The numbers populate themselves.

Free for 30 days. Sign up in 60 seconds. Day 1 starts when you finish.