Start with the thinking, not the code
When AI can generate anything, the differentiator is knowing what to generate. Why the first thing built wasn't code but the scoring frameworks.
The instinct to ship
When you have 100 hours and a tool that can generate code at the speed of thought, the natural instinct is to start building immediately. Spin up the Next.js project. Scaffold the database. Get something on screen. Every product instinct screams to ship fast and iterate.
I did the opposite. The first several hours produced no code at all. Instead, they produced the first two structured frameworks, each expressed as detailed JSON with scoring dimensions, maturity stages, signals, anti-patterns, and transition triggers: the AI-Native SaaS Maturity Framework and the Product Development Lifecycle. Two models representing twenty years of accumulated thinking about how product teams work, fail, and improve.
This was not procrastination. It was the most important architectural decision of the entire project.
Content-first development
There is a pattern I have seen across every successful product I have built or advised. The products that scale well are the ones where the intellectual model was clear before the first line of code was written. The products that struggle are the ones where the team started building before they understood what they were building for.
AI tools amplify this pattern dramatically. When AI can generate a complete feature in minutes, the quality of the output depends almost entirely on the quality of the input. And the highest-quality input is not a feature spec or a user story. It is a well-structured model of the problem domain.
The two frameworks served as the foundation for everything that followed. The marketing site content came directly from framework descriptions. The scoring engine evaluated products against framework dimensions. The diagnostic questions mapped to framework categories. The investor thesis was grounded in framework intellectual property. The MCP server exposed framework data to AI assistants. One set of structured models fed every surface of the product.
Two frameworks, one system
The maturity framework evaluates the product and its team across 27 dimensions organized by the six functions of a modern product team: Strategy, Design, Development, Operations, GTM, and Intelligence. Each dimension is scored 1 through 5, producing a total score from 27 to 135 that maps to five maturity stages from Foundation to Compounding.
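The scoring arithmetic can be sketched in a few lines. This is a minimal illustration, not the product's actual engine: the article gives only the 27-to-135 range and the five-stage mapping, so the even stage banding below is my own assumption.

```python
DIMENSIONS = 27
MIN_SCORE, MAX_SCORE = DIMENSIONS * 1, DIMENSIONS * 5  # 27 to 135

def total_score(dimension_scores):
    """Sum of per-dimension scores: 27 dimensions, each rated 1-5."""
    assert len(dimension_scores) == DIMENSIONS
    assert all(1 <= s <= 5 for s in dimension_scores)
    return sum(dimension_scores)

def stage_index(total):
    """Map a total (27-135) to one of five stages: 0 = Foundation,
    4 = Compounding. Even banding is an assumption for illustration;
    the framework's real boundaries are not given in the text."""
    span = MAX_SCORE - MIN_SCORE + 1   # 109 possible totals
    band = -(-span // 5)               # ceiling division: band width 22
    return min((total - MIN_SCORE) // band, 4)
```

A team scoring 1 on every dimension lands at the floor of Foundation; a team scoring 5 everywhere reaches Compounding.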
The lifecycle framework maps the six stages of building AI-native products, from specification through compound learning, with 36 tasks and an operations stack of eight categories. It represents how the process should work when AI is integrated at every stage.
Together, these two models answer the two questions every product leader is asking: "How AI-native is our product and team?" and "How AI-native is our build process?" The intersection of both is where real transformation happens.
Why structured data matters
Each framework was expressed as structured JSON, not as a blog post or a slide deck. This was deliberate. Structured data is machine-readable, which means it can feed any surface: a marketing page, a scoring algorithm, an API response, an AI assistant's context window, an investor presentation.
The maturity framework JSON file contains over 500 lines of structured data: stage definitions with score ranges, dimension descriptions at each maturity level, observable signals organized by product, technical, and business categories, anti-patterns with descriptions, and transition triggers. This single file powers the entire scoring experience.
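A file like that might be shaped roughly as the fragment below. Every field name, score boundary, and description here is a hypothetical illustration based on the categories the article lists, not the actual file contents.

```json
{
  "stages": [
    {
      "name": "Foundation",
      "scoreRange": [27, 48],
      "description": "Illustrative placeholder for a stage definition"
    }
  ],
  "dimensions": [
    {
      "id": "strategy.example-dimension",
      "function": "Strategy",
      "levels": {
        "1": "Illustrative description at maturity level 1",
        "5": "Illustrative description at maturity level 5"
      },
      "signals": {
        "product": ["..."],
        "technical": ["..."],
        "business": ["..."]
      },
      "antiPatterns": [
        { "name": "...", "description": "..." }
      ],
      "transitionTriggers": ["..."]
    }
  ]
}
```

The point is less the exact schema than the fact that one machine-readable file can be consumed unchanged by the scoring engine, the marketing site, and the MCP server.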
The compounding effect
The decision to start with frameworks paid compounding dividends throughout the entire 100 hours. Every new feature had a clear foundation to build on. There were no debates about taxonomy, no inconsistent terminology, no features that contradicted the core model.
The frameworks also created a natural language for communicating with Claude Code. Instead of describing features in abstract terms, I could reference framework concepts. The shared vocabulary made every interaction more precise and every output more coherent.
What experience provides that AI cannot
The frameworks themselves could not have been generated by AI. I have seen people try. You can prompt a language model to create a maturity framework, and it will produce something that looks reasonable on the surface. But it will be generic. It will miss the non-obvious dimensions. It will not capture the patterns you only learn from watching dozens of teams navigate this transition in the real world.
The 27 dimensions of the maturity framework were not chosen because they sounded comprehensive. They were chosen because, across eight industries and twenty years of product work, these are the dimensions that actually predict whether an AI transformation will succeed or stall.
> When AI can generate anything, the differentiator is knowing what to generate. Start with the thinking, not the code.
Darren Card
Founder, Dacard.ai