Three frameworks. One system.

The intelligence engine behind every ✦ DAC-Score

Three original frameworks define what AI-native means for your product, your process, and your operations.

Original research by Darren Card

Most B2B SaaS companies that claim to be AI-powered can't answer three basic questions: How mature is our product? How should we build? How should our team operate? Each framework answers one of them. Think of the system as DORA metrics for the entire product team.

Where you are
AI-Native SaaS Maturity Framework
5 stages · 10 dimensions · 4 clusters
Value Prop, Architecture, Data Strategy, UX, Pricing, Team Structure, Build vs Buy, Iteration Speed, Competitive Moat, Feedback Loop
How you build
AI-Native Development Lifecycle
6 stages · 34 tasks · 3 concerns
Specify & Constrain · Build the System of Context · Orchestrate & Generate · Validate, Eval & Craft · Ship & Manage Economics · Learn & Compound
How you run
AI-Native Product Operations Framework
10 dimensions · 5 stages · 6 functions
Strategic Intelligence, Design & Prototyping, Spec & Context, Dev & Delivery, Customer Intelligence, Product Analytics, Quality & Experimentation, Team Orchestration, Positioning & Messaging, Launch & Adoption

The interactive assessment is the entry point. It bridges all three frameworks.

Where you are on the AI-native spectrum

5 stages of progression, 10 dimensions of capability. Most products that claim to be AI-powered score in the bottom two tiers.

01
Legacy
10 – 15
02
AI-Curious
16 – 21
03
AI-Enhanced
22 – 27
04
AI-First
28 – 33
05
AI-Native
34 – 40
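The stage bands above reduce to simple arithmetic: score each of the 10 dimensions, sum, and look up the band. A minimal sketch — the 1–4 per-dimension scale is inferred from the 10–40 total range, not stated explicitly:

```python
# Sketch of the DAC-Score stage mapping. Assumes each of the 10
# dimensions is scored 1-4 (inferred from the 10-40 total range).
STAGES = [
    (range(10, 16), "Legacy"),
    (range(16, 22), "AI-Curious"),
    (range(22, 28), "AI-Enhanced"),
    (range(28, 34), "AI-First"),
    (range(34, 41), "AI-Native"),
]

def stage_for(scores: list[int]) -> str:
    """Map ten 1-4 dimension scores to a maturity stage."""
    assert len(scores) == 10 and all(1 <= s <= 4 for s in scores)
    total = sum(scores)
    for band, name in STAGES:
        if total in band:
            return name
    raise ValueError(total)

# A product scoring 2 on every dimension totals 20: AI-Curious.
print(stage_for([2] * 10))  # → AI-Curious
```

The bands are contiguous and exhaustive over 10–40, so every valid score sheet maps to exactly one stage.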

What gets measured

01
Value Proposition
Is AI the reason customers buy, or a checkbox on the feature list?
02
Architecture
Was the system designed for AI, or is AI grafted onto legacy foundations?
03
Data Strategy
Does every user interaction make the product smarter?
04
User Experience
Is AI the interface, or a sidebar feature?
05
Pricing
Does the pricing model capture the value AI creates?
06
Team Structure
Is AI expertise embedded across the org, or siloed in one team?
07
Build vs. Buy
Do you own the AI components that differentiate, and buy the rest?
08
Iteration Speed
Can you ship AI improvements multiple times per day?
09
Competitive Moat
Does your AI advantage compound, or can it be replicated in a weekend?
10
Feedback Loop
Is AI quality a core product metric with systematic improvement?

How dimensions move together

Foundation
Architecture + Data Strategy + Feedback Loop
Move together or not at all. The technical bedrock.
Market Position
Value Prop + Pricing + Competitive Moat
How you position, price, and defend your AI.
Execution Engine
Team Structure + Build vs. Buy + Iteration Speed
Team capability determines the ceiling.
Outlier
User Experience
Most teams advance UX first, and advance wrong.

How AI-native products actually get built

The traditional SDLC is dead. 6 stages for building AI-native products, grounded in research from Sequoia, a16z, Bessemer, and Anthropic.

01
Specify & Constrain
The spec IS the implementation instruction. Define boundaries. The harness matters more than the prompt.
02
Build the System of Context
Context engineering replaces architecture docs. Your context is your moat. Model selection per task.
03
Orchestrate & Generate
Type less. Think more. Parallel agent delegation. You manage scope, not syntax.
04
Validate, Eval & Craft
Human review. AI testing. Eval pipelines. Truth metrics over vanity metrics.
05
Ship & Manage Economics
Token budgets alongside sprint budgets. Cost-per-action sits next to sprint velocity.
06
Learn & Compound
Feed outcomes into context. Every cycle makes the next one faster. The loop tightens.

What runs through every stage

Token Economics
Inference costs inform architecture, sprint planning, quality trade-offs, and production budgets. If your team doesn't think in tokens, they're flying blind.
Role Fluidity
The best spec writer might be the designer. The best validator might be the domain expert. Titles matter less than context and judgment.
Cognitive Debt
Every vague prompt, unreviewed output, and skipped eval compounds faster than technical debt. It doesn't slow you down. It makes you wrong.
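"Thinking in tokens" is arithmetic most teams never write down. A hedged sketch of the cost-per-action math behind the Token Economics concern — the prices and token counts are illustrative placeholders, not real model pricing:

```python
# Illustrative cost-per-action math. Per-token prices and token
# counts are placeholder assumptions, not real model pricing.
PRICE_PER_1M_INPUT = 3.00    # USD per 1M input tokens (placeholder)
PRICE_PER_1M_OUTPUT = 15.00  # USD per 1M output tokens (placeholder)

def cost_per_action(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one user-facing AI action."""
    return (input_tokens * PRICE_PER_1M_INPUT
            + output_tokens * PRICE_PER_1M_OUTPUT) / 1_000_000

# Example action: 8k tokens of context in, 500 tokens out.
action_cost = cost_per_action(8_000, 500)
monthly = action_cost * 50_000  # at 50k such actions per month
print(f"${action_cost:.4f} per action, ${monthly:,.2f} per month")
# → $0.0315 per action, $1,575.00 per month
```

This is the number that belongs next to sprint velocity: it turns a model choice or a longer context window into a line item the whole team can see.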
4x
AI-native companies grow faster than traditional SaaS
15
People at Lovable when it hit $200M ARR
100%
AI-written code at Dan Shipper's 7-figure company

How your team should operate

5 stages and 10 dimensions, organized by the 6 functions of a product team. It measures whether individual AI productivity gains compound into organizational capability.

01
Legacy
10 – 15
02
AI-Curious
16 – 21
03
AI-Enhanced
22 – 27
04
AI-First
28 – 33
05
AI-Native
34 – 40

What gets measured across the team

Strategy (PM)
01
Strategic Intelligence
Does AI inform your product strategy, or are you still prioritizing by loudest voice?
Design
02
Design & Prototyping
Can your team go from concept to interactive prototype in hours, not weeks?
Development
03
Specification & Context
Are specs structured for agents to execute, or written for humans to interpret?
04
Development & Delivery
Are agents building while engineers orchestrate?
Data
05
Customer Intelligence
Does your team synthesize customer signals with AI, or still manually tag feedback?
06
Product Analytics
Does AI surface insights proactively, or does your team stare at dashboards?
Operations
07
Quality & Experimentation
Is AI designing your experiments and validating quality, or is that still manual?
08
Team Orchestration
Are AI agents part of your team workflow, or just tools people occasionally use?
GTM & Growth (Product GTM)
09
Positioning & Messaging
Does AI inform your positioning strategy, or is messaging still updated quarterly by committee?
10
Launch & Adoption
Does AI orchestrate launches and predict adoption, or are you still following the same manual playbook?

How dimensions connect

Intelligence Layer
Strategic Intelligence + Customer Intelligence + Product Analytics
The three inputs that inform decisions.
Creation Engine
Design & Prototyping + Specification & Context + Development & Delivery
The pipeline from idea to shipped product.
Operating System
Quality & Experimentation + Team Orchestration
The governance and coordination layer.
GTM & Growth Engine
Positioning & Messaging + Launch & Adoption
How the product meets the market.

Built with its own methodology

Dacard was built using its own AI-native lifecycle. See every decision, framework application, and AI workflow documented as proof-of-practice.

Every decision documented

From specifying constraints to shipping and compounding, the build log traces every stage of the AI-native lifecycle in action. See the methodology at work, not just in theory.

Read the build log

Ready to measure?

Take the free assessment or book a call to discuss your product operations.

Free assessment. No sign-up required.