How DAC coaches

Three frameworks.
88 intelligent scoring units.
One coach.

Team Operations measures how your team operates. Development Lifecycle measures how you build. Product Assessment measures what you ship. DAC reads against all three at every milestone of the 30-60-90 ritual.

Free to start.
◆ Three frameworks
01 · People · Team Operations · 27 dims · 6 functions
02 · Process · Development Lifecycle · 34 tasks · 6 stages
03 · Product · Product Assessment · 27 dims · 6 attributes

A strong team doesn't guarantee a strong product. A disciplined build doesn't guarantee what you ship is AI-native. An AI-native product doesn't guarantee the team can sustain it. The diagnostic is the gap.

One team. Three questions.

Each framework answers one question.

The answer to each is a stage between 1 (Foundation) and 5 (Compounding). The three answers together are the diagnostic.

Team Operations
Is your team operating at the level it needs to?
Development Lifecycle
Is your team building AI products the way they should be built?
Product Assessment
Is your product actually AI-native, or is it AI theater?
◆ Team Operations

Is your team operating at the level it needs to?

27 dimensions · 6 functions
Strategy · 4 dims
Market Intelligence. Decision Quality. Roadmap Discipline. Competitive Positioning.
Design · 4 dims
Research & Discovery. Prototyping Speed. Experience Design. Design-to-Dev Handoff.
Development · 4 dims
Architecture & Systems. Spec & Context Quality. Build vs Buy. Delivery Velocity.
Data & Intelligence · 4 dims
Customer Signal Synthesis. Product Analytics. Data Strategy & Flywheel. Feedback Loop Quality.
Operations · 7 dims
Knowledge Management. Quality & Experimentation. Team Orchestration. Process & Iteration. Unit Economics. Security & Compliance. Reliability & Resilience.
GTM & Growth · 4 dims
Positioning & Messaging. Launch Execution. Adoption & Expansion. Pricing & Packaging.
Stages: Foundation · Building · Scaling · Leading · Compounding.
◆ Development Lifecycle

Is your team building AI products the way they should be built?

34 tasks · 6 stages
Stage 1 · Specify & Constrain
The spec IS the implementation instruction.
Structured spec templates, harness constraints, measurable acceptance criteria, anti-examples for critical paths.
Stage 2 · Build the System of Context
Your context is your moat.
Context hierarchy, indexing pipeline, multi-model routing, architectural constraints as context.
Stage 3 · Orchestrate & Generate
Type less. Think more.
Parallel agent delegation, mission control pattern, scope boundaries, token budgets per task type.
Stage 4 · Validate, Eval & Craft
Truth metrics over vanity metrics.
Eval pipeline before generation pipeline, truth metrics per feature, counter-metric patterns, security scanning, craft review.
Stage 5 · Ship & Manage Economics
Token budgets alongside cycle budgets.
Cost-per-action tracking, inference cost dashboards, tiered model routing, pricing alignment, pinned model versions.
Stage 6 · Learn & Compound
Every cycle makes the next one faster.
Post-cycle retros for AI workflow, emergence rate measurement, spec template libraries, cognitive debt tracking, context pruning.
Stages: Specify · Context · Orchestrate · Validate · Ship · Compound.
◆ Product Assessment

Is your product actually AI-native, or is it AI theater?

27 dimensions · 6 attributes
Product Architecture · 4 dims
Core Integration Depth. Model Strategy. Context Architecture. Agentic Capability.
Adaptive Experience · 5 dims
Interaction Model. Progressive Disclosure. Adaptive Interface. Confidence Transparency. Human-Product Collaboration.
Learning Systems · 4 dims
Learning Flywheel. Personalization Depth. Knowledge Architecture. Data Quality & Freshness.
Product Economics · 4 dims
Cost Per Outcome. Inference Economics. Pricing-Cost Alignment. Value Attribution.
Trust & Reliability · 5 dims
Hallucination Management. Security Posture. Privacy & Data Governance. Ethical Guardrails. Reliability & Graceful Degradation.
Compound Mechanics · 5 dims
Network Intelligence. Switching Cost Depth. Expansion Surface. Platform Leverage. Benchmark Community.
Stages: Wrapper · Augmented · Integrated · Native · Compounding.
◆ The tension engine

One framework is a scorecard.
Three frameworks, read together,
become a diagnostic.

DAC reads all three frameworks together and names the state. Every scored team lands in one of three states, or in an aligned read below Stage 3.

◆ Team ahead of product
NOW
Team Operations
S4 · Lead
Capability is real. Stage 4 means leadership cadence and clear coaching loops are running.
NEXT
Product Assessment
S2 · Augment
Product sits at Stage 2. The capability hasn't translated into outcomes the market reads.

Triggered when Team Operations stage is higher than Product Assessment stage by one or more stages. The Development Lifecycle axis tells you whether the gap sits in how you build or in what you decide to build.

◆ Product ahead of team
NOW
Product Assessment
S3 · Orchestrate
Product is shipping AI-native cadence. Output is meeting the bar.
NEXT
Team Operations
S2 · Augment
Team Operations at Stage 2. The org may not sustain this velocity past the current cycle.

Triggered when Product Assessment stage is higher than Team Operations stage. The Development Lifecycle axis tells you whether you're shipping fast with discipline, or fast without it.

◆ All three aligned
Team Operations
S5 · Compound
Stage 5 across the org. Decisions compound across cycles.
Product Assessment
S5 · Compound
Stage 5 in the market read. Outcomes compound the substrate.

Triggered when all three frameworks score Stage 3 or higher and no gap exceeds one stage. The rarest read. The one every team is trying to reach.
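The three trigger rules above can be sketched as a small classifier. A minimal sketch; function and state names are illustrative, not DAC's internal API:

```python
def diagnostic_state(team: int, lifecycle: int, product: int) -> str:
    """Name the tension state from three framework stages (1-5).

    Rules, as described in the text: aligned when all three are Stage 3+
    and no gap exceeds one stage; otherwise team-ahead or product-ahead
    depending on which side of the gap is higher.
    """
    stages = (team, lifecycle, product)
    aligned = all(s >= 3 for s in stages) and max(stages) - min(stages) <= 1
    if aligned:
        return "all_three_aligned"
    if team > product:
        return "team_ahead_of_product"
    if product > team:
        return "product_ahead_of_team"
    return "aligned_below_stage_3"

# diagnostic_state(4, 3, 2) → "team_ahead_of_product"
```

The aligned check runs first, so a one-stage gap at Stage 4-5 still reads as aligned rather than as tension.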

The state is the read.

The ranked dimensions are the priorities.

The next move is the action.

◆ The 30-60-90 in framework terms

What changes across DAC's 90 days is depth, confidence, and recommendation altitude.

DAC reads against the three frameworks at every milestone. The framework doesn't move. The takes get sharper as DAC accumulates context.

Day 1

Surface read of all three frameworks. URL-only data.

Day-1 brief: stage estimates, what DAC sees before you say anything.

Week 1

OAuth-connected stack reads. Real signals from GitHub, Linear, Slack, etc.

Week-1 deliverables: org map, cycle plan, customer-call read, second opinion.

Day 30

Cumulative reads. Pattern recognition across 30 days of cycle data.

Probation review: what's sharper, what's still ramping.

Day 60

Confident reads with cycle-level pushback receipts.

First public pushback to the team.

Day 90

Org-level reads with quarter-of-data depth.

Graduation report: board narrative, OKR recommendations, pricing/roadmap calls.

◆ Methodology

How scoring works.

Signals

DAC reads signals from 54+ tools your team already uses. GitHub commits, Linear issues, Slack channels, Figma files, Notion docs, PostHog events, Sentry errors, website content. Each signal maps to one or more framework units via a calibrated inference map.
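The inference map can be pictured as a signal-to-unit lookup. The tool and unit names below come from this page; the specific pairings are assumptions for illustration, not DAC's calibrated map:

```python
# Illustrative inference map: signal source → framework units it informs.
INFERENCE_MAP: dict[str, list[str]] = {
    "github:commits": ["Delivery Velocity", "Architecture & Systems"],
    "linear:issues": ["Process & Iteration", "Roadmap Discipline"],
    "slack:channels": ["Team Orchestration", "Knowledge Management"],
    "posthog:events": ["Product Analytics", "Feedback Loop Quality"],
}

def units_for_signal(signal: str) -> list[str]:
    """Return the framework units a given signal source feeds."""
    return INFERENCE_MAP.get(signal, [])
```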

Scoring

Each of the 88 intelligent scoring units is scored 1 to 5 by an LLM reading the signals against a rubric. The rubric for each unit is public. Every score surfaces the evidence it's based on and the confidence level. Low-confidence units prompt the user to add context, and the unit gets smarter the more you feed it.
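The shape of a scored unit, as described above, can be sketched as a small record. Field names and the confidence threshold are assumptions for illustration, not DAC's schema:

```python
from dataclasses import dataclass, field

@dataclass
class UnitScore:
    """One of the 88 intelligent scoring units (illustrative shape)."""
    unit: str                  # e.g. "Delivery Velocity"
    score: int                 # 1-5, LLM read of signals against the public rubric
    evidence: list[str] = field(default_factory=list)  # signals behind the score
    confidence: float = 0.0    # 0-1, surfaced alongside the score

    def needs_context(self, threshold: float = 0.6) -> bool:
        # Low-confidence units prompt the user to add context
        return self.confidence < threshold
```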

Calibration

Scoring is anchored by five named archetypes per framework. The archetypes are benchmark teams (AI-First Studio, Eng-Heavy Series A, Pre-AI SaaS, AI Wrapper, Compounding Ops) that define what each stage looks like in practice. New scores calibrate against the archetype distribution.

Theater Detection

Eleven named patterns across the three frameworks cap scores when claims exceed evidence. Among them: AI-wrapper theater (UI on top of one LLM call). Document factory (high spec volume, low ship rate). Research without impact (discovery without roadmap effect). Dashboard graveyard (analytics without outcome tracking). Learning theater (claims the AI learns but no personalization shipped). The theater check is automatic. You cannot score your way into Stage 4 by claiming what the code does not show.
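The cap mechanic can be sketched in a few lines. The cap value of 3 is an assumption inferred from "you cannot score your way into Stage 4"; the pattern names are illustrative:

```python
def apply_theater_caps(score: int, triggered_patterns: list[str], cap: int = 3) -> int:
    """Cap a unit score when theater patterns fire (illustrative sketch).

    If any pattern triggered, the score cannot exceed the cap, keeping
    the unit below Stage 4 regardless of what the claims say.
    """
    if triggered_patterns:
        return min(score, cap)
    return score
```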

Benchmarks

An initial benchmark cohort of hand-scored companies per framework anchors the distribution. Known AI-native companies, traditional SaaS adopting AI, and design-partner cohort teams. Your score is positioned against the distribution, not against an arbitrary 100-point scale.

Composite bands

Composite scores collapse to a five-stage verb ladder: 0-40 React, 40-55 Augment, 55-70 Orchestrate, 70-85 Lead, 85-100 Compound. Per-framework stages have their own canonical names (Foundation/Building/Scaling/Leading/Compounding for Team Operations; Specify/Context/Orchestrate/Validate/Ship/Compound for Development Lifecycle; Wrapper/Augmented/Integrated/Native/Compounding for Product Assessment). Use composite for board summaries; per-framework for function-level reads.
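The band boundaries above collapse to a simple lookup; a minimal sketch, resolving each boundary score (40, 55, 70, 85) upward into the higher band:

```python
def composite_band(score: float) -> str:
    """Map a 0-100 composite score to the five-stage verb ladder."""
    bands = [(40, "React"), (55, "Augment"), (70, "Orchestrate"), (85, "Lead")]
    for upper, name in bands:
        if score < upper:
            return name
    return "Compound"

# composite_band(60) → "Orchestrate"
```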

Version Lock

Framework v1.0 (April 2026) is locked through Q2 2027. Rubrics and stage definitions don't change during v1.0. Signals underneath evolve continuously. Your Stage 2 in April and your Stage 3 in October are comparable because the rubric didn't change underneath them.

◆ The calibration cohort

Every diagnostic is only as good as its calibration.

DAC's framework was hand-scored against a benchmark cohort of 40+ companies before the LLM scorer shipped. Not a synthetic dataset. Real teams, real stacks, real signal from publicly observable sources. The hand scores define what each stage looks like in practice.

Each cycle, DAC re-reads against the cohort to maintain calibration. When the LLM scorer drifts, the cohort catches it. When a framework unit gets sharper rubrics, the cohort anchors the before-and-after comparison. The cohort is the ground truth the scorer calibrates against.

40+
Companies in the benchmark cohort
3
Frameworks hand-scored per company
v1.0
Framework version locked through Q2 2027
◆ Why three

Why three frameworks.

Single-framework measurement misses the divergence. Engineering metrics tell you how fast. Product analytics tell you what's used. Team surveys tell you what's felt. None of them tell you whether the three are aligned.

Most measurement tools cover one framework. Jellyfish measures engineering allocation. LinearB measures dev workflow. DX (now Atlassian) measures team health. Swarmia measures engineering effectiveness. Amplitude and Pendo measure product usage. Dotwork (a partner, not competitor) measures strategic priority. Each answers one question.

Three frameworks answer a different question: where's the divergence?

Team at Stage 4 and product at Stage 2 isn't the same diagnosis as team at Stage 2 and product at Stage 2. Same product score, different prescription. That's what a single-framework tool can't tell you.

Development Lifecycle is the newest framework. It's what turns two-framework tension analysis (team vs product) into three-framework diagnosis (team vs build vs product). Whether the gap is in what you decide or how you build is a different problem with a different fix.

Three frameworks is the minimum you need to name the pattern correctly.

Two tell you there's a problem.

Three tell you which problem.

The methodology is the depth. The product is the coach.

Free for 30 days. Sign up in 60 seconds. Day 1 starts when you finish.