Three frameworks.
88 intelligent scoring units.
One coach.
Team Operations measures how your team operates. Development Lifecycle measures how you build. Product Assessment measures what you ship. DAC reads against all three at every milestone of the 30-60-90 ritual.
A strong team doesn't guarantee a strong product. A disciplined build doesn't guarantee what you ship is AI-native. An AI-native product doesn't guarantee the team can sustain it. The diagnostic is the gap.
Each framework answers one question.
The answer to each is a stage from 1 (Foundation) to 5 (Compounding). The three answers together are the diagnostic.
Is your team operating at the level it needs to?
Is your team building AI products the way they should be built?
Is your product actually AI-native, or is it AI theater?
One framework is a scorecard.
Three frameworks, read together, become a diagnostic.
DAC reads all three frameworks together and names the state. Every scored team lands in one of three states, or in an aligned read below Stage 3.
Triggered when Team Operations stage is higher than Product Assessment stage by one or more stages. The Development Lifecycle axis tells you whether the gap sits in how you build or in what you decide to build.
Triggered when Product Assessment stage is higher than Team Operations stage. The Development Lifecycle axis tells you whether you're shipping fast with discipline, or fast without it.
Triggered when all three frameworks score Stage 3 or higher and no gap exceeds one stage. The rarest read. The one every team is trying to reach.
The state is the read.
The ranked dimensions are the priorities.
The next move is the action.
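For the mechanically minded, the state logic reduces to a handful of comparisons. A minimal Python sketch, with illustrative state labels (not DAC's canonical names) and an assumed precedence when the aligned read and a gap read both trigger:

```python
from dataclasses import dataclass

@dataclass
class Diagnostic:
    team_ops: int        # Team Operations stage, 1-5
    dev_lifecycle: int   # Development Lifecycle stage, 1-5
    product: int         # Product Assessment stage, 1-5

def name_state(d: Diagnostic) -> str:
    """Collapse three framework stages into one named state.

    State labels are illustrative. Checking the aligned read first is an
    assumed precedence; the text doesn't say how overlapping triggers resolve.
    """
    stages = (d.team_ops, d.dev_lifecycle, d.product)
    if min(stages) >= 3 and max(stages) - min(stages) <= 1:
        # All three at Stage 3+ and no gap wider than one stage: the rarest read.
        return "ALIGNED"
    if d.team_ops > d.product:
        # Team Operations ahead of Product Assessment by one or more stages.
        return "TEAM_AHEAD_OF_PRODUCT"
    if d.product > d.team_ops:
        # Product Assessment ahead of Team Operations.
        return "PRODUCT_AHEAD_OF_TEAM"
    # Aligned read, but below Stage 3.
    return "ALIGNED_BELOW_STAGE_3"
```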
What changes across DAC's 90 days is depth, confidence, and recommendation altitude.
DAC reads against the three frameworks at every milestone. The framework doesn't move. The takes get sharper as DAC accumulates context.
Surface read of all three frameworks. URL-only data.
Day-1 brief: stage estimates, what DAC sees before you say anything.
OAuth-connected stack reads. Real signals from GitHub, Linear, Slack, etc.
Week-1 deliverables: org map, cycle plan, customer-call read, second opinion.
Cumulative reads. Pattern recognition across 30 days of cycle data.
Probation review: what's sharper, what's still ramping.
Confident reads with cycle-level pushback receipts.
First public pushback to the team.
Org-level reads with quarter-of-data depth.
Graduation report: board narrative, OKR recommendations, pricing/roadmap calls.
How scoring works.
DAC reads signals from 54+ tools your team already uses. GitHub commits, Linear issues, Slack channels, Figma files, Notion docs, PostHog events, Sentry errors, website content. Each signal maps to one or more framework units via a calibrated inference map.
Each of the 88 intelligent scoring units is scored 1 to 5 by an LLM reading the signals against a rubric. The rubric for each unit is public. Every score surfaces the evidence it's based on and the confidence level. Low-confidence units prompt the user to add context, and the unit gets smarter the more you feed it.
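A rough sketch of that pipeline's shape, assuming a generic injected LLM call. The unit ids, field names, and the `llm_score` callable are illustrative, not DAC's actual API:

```python
from dataclasses import dataclass, field

# Illustrative slice of the signal-to-unit inference map; the unit ids are made up.
SIGNAL_TO_UNITS = {
    "github": ["dl-07", "dl-12", "pa-03"],
    "linear": ["to-04", "dl-02"],
    "slack":  ["to-09"],
}

@dataclass
class Signal:
    source: str   # e.g. "github", "linear", "slack"
    content: str  # the raw evidence the unit reads

@dataclass
class UnitScore:
    unit_id: str
    stage: int         # 1-5, read against the unit's public rubric
    confidence: float  # 0.0-1.0; low confidence prompts the user for more context
    evidence: list[str] = field(default_factory=list)

def score_unit(unit_id: str, rubric: str, signals: list[Signal], llm_score) -> UnitScore:
    """Score one intelligent scoring unit against its rubric.

    `llm_score` stands in for whatever LLM call does the reading; it is
    assumed to return a dict with `stage`, `confidence`, and `evidence`.
    """
    result = llm_score(
        rubric=rubric,
        evidence=[f"[{s.source}] {s.content}" for s in signals],
    )
    return UnitScore(
        unit_id=unit_id,
        stage=int(result["stage"]),
        confidence=float(result["confidence"]),
        evidence=list(result["evidence"]),
    )
```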
Scoring is anchored by five named archetypes per framework. The archetypes are benchmark teams (AI-First Studio, Eng-Heavy Series A, Pre-AI SaaS, AI Wrapper, Compounding Ops) that define what each stage looks like in practice. New scores calibrate against the archetype distribution.
Eleven named patterns across the three frameworks cap scores when claims exceed evidence. Among them: AI-wrapper theater (UI on top of one LLM call). Document factory (high spec volume, low ship rate). Research without impact (discovery without roadmap effect). Dashboard graveyard (analytics without outcome tracking). Learning theater (claims the AI learns, but no personalization shipped). The theater check is automatic. You cannot score your way into Stage 4 by claiming what the code does not show.
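One way to picture the theater check: a detected pattern caps the maximum stage the evidence can support. The pattern keys below follow the text; the cap values assume "cannot score your way into Stage 4" means a hard cap at Stage 3, and only five of the eleven patterns are listed:

```python
# Illustrative caps; the exact cap per pattern is an assumption.
THEATER_CAPS = {
    "ai_wrapper_theater":      3,  # UI on top of one LLM call
    "document_factory":        3,  # high spec volume, low ship rate
    "research_without_impact": 3,  # discovery without roadmap effect
    "dashboard_graveyard":     3,  # analytics without outcome tracking
    "learning_theater":        3,  # claims the AI learns, no personalization shipped
}

def apply_theater_caps(claimed_stage: int, detected: set[str]) -> int:
    """Cap a claimed stage by every detected theater pattern."""
    cap = min((THEATER_CAPS[p] for p in detected if p in THEATER_CAPS), default=5)
    return min(claimed_stage, cap)
```

So a claimed Stage 4 with AI-wrapper theater detected reads as Stage 3.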
An initial benchmark cohort of hand-scored companies per framework anchors the distribution. Known AI-native companies, traditional SaaS adopting AI, and design-partner cohort teams. Your score is positioned against the distribution, not against an arbitrary 100-point scale.
Composite scores collapse to a five-stage verb ladder: 0-40 React, 40-55 Augment, 55-70 Orchestrate, 70-85 Lead, 85-100 Compound. Per-framework stages have their own canonical names (Foundation/Building/Scaling/Leading/Compounding for Team Operations; Specify/Context/Orchestrate/Validate/Ship/Compound for Development Lifecycle; Wrapper/Augmented/Integrated/Native/Compounding for Product Assessment). Use composite for board summaries; per-framework for function-level reads.
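The composite-to-ladder mapping is a straight threshold lookup. A minimal sketch; which band owns an exact boundary value (a composite of exactly 40, say) is an assumption:

```python
# Band edges follow the text: 0-40 React, 40-55 Augment, 55-70 Orchestrate,
# 70-85 Lead, 85-100 Compound.
VERB_LADDER = [
    (40, "React"),
    (55, "Augment"),
    (70, "Orchestrate"),
    (85, "Lead"),
]

def verb_stage(composite: float) -> str:
    """Map a 0-100 composite score to its verb-ladder name."""
    for upper, name in VERB_LADDER:
        if composite < upper:
            return name
    return "Compound"
```

A composite of 72 reads Lead; 54 reads Augment. Per-framework stage names stay separate.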
Framework v1.0 (April 2026) is locked through Q2 2027. Rubrics and stage definitions don't change during v1.0. Signals underneath evolve continuously. Your Stage 2 in April and your Stage 3 in October are comparable because the rubric didn't change underneath them.
Every diagnostic is only as good as its calibration.
DAC's framework was hand-scored against a benchmark cohort of 40+ companies before the LLM scorer shipped. Not a synthetic dataset. Real teams, real stacks, real signal from publicly observable sources. The hand-scores define what each stage looks like in practice.
Each cycle, DAC re-reads against the cohort to maintain calibration. When the LLM scorer drifts, the cohort catches it. When a framework unit gets sharper rubrics, the cohort anchors the before-and-after comparison. The cohort is the ground truth the scorer calibrates against.
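Conceptually, the calibration loop is re-score and compare. A simplified sketch, where `llm_score_company` and the drift threshold are placeholders, not DAC's internals:

```python
def check_drift(cohort: dict[str, int], llm_score_company, max_mean_drift: float = 0.5) -> bool:
    """Re-score the hand-scored cohort and flag scorer drift.

    `cohort` maps company id -> hand-scored stage (the ground truth).
    `llm_score_company` is whatever re-runs the LLM scorer today.
    The 0.5-stage threshold is illustrative, not DAC's number.
    """
    drifts = [abs(llm_score_company(company) - truth) for company, truth in cohort.items()]
    mean_drift = sum(drifts) / len(drifts)
    return mean_drift > max_mean_drift  # True: the scorer has drifted from the cohort
```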
Why three frameworks.
Single-framework measurement misses the divergence. Engineering metrics tell you how fast. Product analytics tell you what's used. Team surveys tell you what's felt. None of them tell you whether the three are aligned.
Most measurement tools cover one framework. Jellyfish measures engineering allocation. LinearB measures dev workflow. DX (now Atlassian) measures team health. Swarmia measures engineering effectiveness. Amplitude and Pendo measure product usage. Dotwork (a partner, not a competitor) measures strategic priority. Each answers one question.
Three frameworks answer a different question: where's the divergence?
Team at Stage 4 and product at Stage 2 isn't the same diagnosis as team at Stage 2 and product at Stage 2. Same product score, different prescription. That's what a single-framework tool can't tell you.
Development Lifecycle is the newest framework. It's what turns two-framework tension analysis (team vs product) into three-framework diagnosis (team vs build vs product). A gap in what you decide to build and a gap in how you build it are different problems with different fixes.
Three frameworks is the minimum you need to name the pattern correctly.
Two tell you there's a problem.
Three tell you which problem.
The methodology is the depth. The product is the coach.
Free for 30 days. Sign up in 60 seconds. Day 1 starts when you finish.