73 files and the judgment that wrote them
A production Next.js application in days. Here's what twenty years of product experience actually did when paired with AI execution speed.
The scope
Seventy-three TypeScript files. A Next.js 14 application with the app router, server components, API routes, and middleware. AI-driven product scoring via the Claude API. User authentication through Clerk. Subscription billing through Stripe. A full monorepo with three packages, a docs site, and an MCP server. All of it production-quality, all of it coherent, all of it shipped in the span of days.
When people hear this, the first reaction is usually disbelief. The second is a question: "But is it actually good?" The answer matters because speed without quality is just technical debt. What I want to document in this piece is not the raw output count, but the architectural decisions that made the output hold together.
- 73 TypeScript Files: Full-stack Next.js 14 application, three packages, production-ready
- 4-Step Onboarding: Role, company, product, completion, with progress persistence
- 9 Settings Modules: Profile to privacy, fully configurable per plan tier
- 6 RBAC Roles: Owner through Viewer, with 30 discrete permissions
The stack choices
Every technology choice in a product sprint is also a bet. You are betting that the tool's abstractions will hold, that its documentation is accurate, that its edge cases will not surface at the worst possible moment. With twenty years of making these bets, the stack selection here was deliberate from the start.
Next.js 14 with the app router was non-negotiable. Server components solve real problems for a data-dense application: they eliminate the waterfall fetches that make dashboards feel slow, they keep sensitive API keys out of the client bundle, and they let you co-locate data loading with the component that needs it. The app router's layout system also made nested route groups practical, which matters when you have settings pages that share chrome with product pages but need different authentication boundaries.
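The waterfall problem mentioned above is easy to see in miniature. This sketch uses hypothetical stand-in fetchers, not the app's real data layer, but the latency shape is the same: sequential awaits sum their delays, while parallel fetches cost only the slowest one.

```typescript
// Hypothetical data fetchers standing in for real database or API calls.
async function fetchUser(): Promise<{ name: string }> {
  return { name: "Ada" };
}
async function fetchScores(): Promise<number[]> {
  return [82, 91];
}

// Waterfall: the second fetch cannot start until the first resolves,
// so total latency is the sum of both calls.
async function waterfall() {
  const user = await fetchUser();
  const scores = await fetchScores();
  return { user, scores };
}

// Parallel: both fetches start immediately; total latency is the max, not the sum.
async function parallel() {
  const [user, scores] = await Promise.all([fetchUser(), fetchScores()]);
  return { user, scores };
}

parallel().then((r) => console.log(r.user.name, r.scores.length)); // → Ada 2
```

Server components make the parallel form natural because data loading lives next to the component that renders it, rather than being threaded through client-side effects.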
Turso SQLite was the database choice, and it deserves more attention than it typically gets in technical stack discussions. SQLite is not a toy. It is the most widely deployed database engine in the world, and the Turso edge deployment model means you get the read performance of a local database with the durability of a managed service. For a single-tenant scoring product, it is the right call. No ORM overhead. No connection pooling complexity. Libsql driver with direct SQL.
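"Direct SQL with typed query results" can be as small as a single helper. The sketch below assumes a libsql-like `execute` interface (the real client comes from `@libsql/client` via `createClient`); the fake client, table, and `Account` type are illustrative only.

```typescript
// Minimal shape of a libsql-style client; the real one comes from @libsql/client.
interface SqlClient {
  execute(stmt: {
    sql: string;
    args?: unknown[];
  }): Promise<{ rows: Record<string, unknown>[] }>;
}

// Typed query helper: direct SQL in, typed rows out, no ORM in between.
async function query<T>(
  client: SqlClient,
  sql: string,
  args: unknown[],
  map: (row: Record<string, unknown>) => T
): Promise<T[]> {
  const result = await client.execute({ sql, args });
  return result.rows.map(map);
}

// Fake in-memory client, for illustration only.
const fakeClient: SqlClient = {
  async execute() {
    return { rows: [{ id: 1, name: "Acme" }] };
  },
};

interface Account {
  id: number;
  name: string;
}

query<Account>(fakeClient, "SELECT id, name FROM accounts", [], (r) => ({
  id: r.id as number,
  name: r.name as string,
})).then((accounts) => console.log(accounts[0].name)); // → Acme
```

The mapping function is the entire "type layer": one place per query where untyped rows become typed objects, with nothing generated and nothing to migrate.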
- Next.js 14 App Router: Server components, route groups, layout-level auth boundaries, streaming responses
- Clerk Authentication: Middleware-level route protection, org/user session management, webhook sync to Turso
- Turso SQLite: Edge-deployed, libsql driver, direct SQL with typed query results, no ORM
- Claude API Scorer: Async job pattern, status polling, structured JSON output, provider abstraction layer
- Stripe Billing: Checkout sessions, webhook processing, subscription lifecycle, credit system enforcement
- Vercel Deploy: Edge functions, cron jobs for integration sync, environment per branch, instant rollback
The collaboration model
Working with Claude Code on a full-stack application is nothing like pair programming and nothing like using a code completion tool. It is closer to working with a very fast, very knowledgeable architect who has read every piece of documentation ever written but has never shipped a product to real users.
The human provides three things that the AI cannot: architectural vision, pattern selection, and quality judgment. The AI provides two things that the human cannot match: execution speed and completeness. When these capabilities combine correctly, the output is qualitatively different from what either could produce alone. When they combine incorrectly, you get technically correct code that is architecturally incoherent.
The discipline is knowing which role to play in each moment. When Claude Code is generating a settings page or wiring up a Stripe webhook handler, the right move is to review and redirect. When it is making choices about package boundaries or async patterns, the right move is to intervene before the pattern propagates.
AI-Generated Default: What Claude Code reached for
- Synchronous scoring endpoint with immediate response
- Inline database queries in page components
- Single package with no boundary enforcement
- Environment variables accessed directly in components
- One-size-fits-all error handling at the route level
Human-Directed Architecture: What experience demanded instead
- Async job pattern with status polling and resumable state
- Data access layer in core package, components stay clean
- Three-package monorepo with explicit import boundaries
- Server-only config module, zero secrets in client bundles
- Typed error codes, Sentry capture, graceful degradation
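The last item in that list, typed error codes with graceful degradation, might look something like this. The codes and fallback values here are illustrative, not the application's real ones, and the Sentry call is reduced to a comment.

```typescript
// Illustrative error codes; the real application's set will differ.
type ErrorCode = "SCORING_TIMEOUT" | "RATE_LIMITED" | "INVALID_INPUT";

class AppError extends Error {
  constructor(public code: ErrorCode, message: string) {
    super(message);
  }
}

// Graceful degradation: known errors map to a typed fallback instead of a crash.
async function withFallback<T>(
  op: () => Promise<T>,
  fallback: T
): Promise<{ value: T; degraded: boolean }> {
  try {
    return { value: await op(), degraded: false };
  } catch (err) {
    if (err instanceof AppError) {
      // In production, this is where a Sentry capture would go.
      return { value: fallback, degraded: true };
    }
    throw err; // Unknown errors still surface loudly.
  }
}

withFallback(async () => {
  throw new AppError("SCORING_TIMEOUT", "model took too long");
}, 0).then((r) => console.log(r.degraded, r.value)); // → true 0
```

The point of the discriminated codes is that route-level handlers can branch on `err.code` instead of string-matching messages, which is what makes degradation a deliberate decision rather than an accident.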
Context engineering
The most important technical skill for this kind of development is not prompt engineering. It is context engineering. I maintained a persistent memory file, MEMORY.md, that Claude Code loaded at the start of every session. Over roughly 100 hours of development, this file grew to more than 200 lines of compressed institutional knowledge: package boundaries, established patterns, decisions already made and why, things not to do and the reasons.
The distinction between prompt engineering and context engineering matters. Prompt engineering is optimizing a single instruction. Context engineering is building a persistent knowledge base that shapes every interaction in a session. One is tactical. The other is architectural.
The MEMORY.md file tracked three categories. First, architectural decisions with their rationale, so those decisions did not get relitigated in every session. Second, established patterns with code-level examples, so new code was consistent with existing code. Third, things that had been tried and rejected, with the reasons, so the same wrong path was not explored twice. This last category is underrated. AI tools have no persistent memory across sessions, which means they will happily suggest the same wrong approach repeatedly unless you document the rejection explicitly.
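A skeleton of the three categories might look like this. The headings and entries below are a hypothetical reconstruction for illustration, not the actual file's contents:

```markdown
## Architectural decisions (do not relitigate)
- Async job pattern for scoring. Rationale: timeouts, resumability, progress UI.
- Three-package monorepo: shared / core / web. Rationale: import boundaries.

## Established patterns (match these)
- All DB access goes through the core data layer; components never query directly.
- Config is read from a server-only module, never from process.env in components.

## Tried and rejected (do not suggest again)
- Synchronous scoring endpoint: rejected, times out on slow model responses.
- Single blocking integration sync: rejected, fails on large accounts.
```

The "rejected" section is what gives the file compounding value: each entry converts a mistake that cost a session into a constraint that costs one line.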
When I redirected
The collaboration was not all smooth execution. There were patterns where Claude Code's initial output was technically correct but architecturally wrong for this product. The scoring API is the clearest example.
The initial implementation created a synchronous endpoint: call the API, wait for Claude to evaluate all 27 dimensions, return the result. This works at low load and with fast models. It breaks in three specific ways at scale: timeouts on slow model responses, no ability to resume a partial evaluation, and no way to show the user progress during a long-running operation. None of these failure modes are obvious from a requirements document. They are patterns you recognize from having watched similar systems fail.
The redirect was to an async job architecture: submit the scoring request, get back a job ID, poll for status, stream results as dimensions complete. More complex to build, dramatically more robust in production. This is the kind of decision that looks like over-engineering until the day it saves you from a complete user-facing failure.
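The submit-then-poll shape can be sketched with an in-memory job store. This is a minimal illustration of the pattern, not the production implementation: the real system persists jobs, and each "dimension" is a Claude API call rather than a timer.

```typescript
// Minimal in-memory job store sketching the submit/poll pattern.
type JobStatus = "pending" | "running" | "done";
interface Job {
  id: string;
  status: JobStatus;
  completedDimensions: number;
  result?: number;
}

const jobs = new Map<string, Job>();
let nextId = 0;

// Submit: return a job ID immediately; the evaluation runs in the background.
function submitScoring(dimensionCount: number): string {
  const id = String(++nextId);
  const job: Job = { id, status: "pending", completedDimensions: 0 };
  jobs.set(id, job);
  void runScoring(job, dimensionCount);
  return id;
}

async function runScoring(job: Job, dimensionCount: number) {
  job.status = "running";
  for (let i = 0; i < dimensionCount; i++) {
    // Each dimension would be one model call; progress is visible mid-flight.
    await new Promise((r) => setTimeout(r, 1));
    job.completedDimensions = i + 1;
  }
  job.status = "done";
  job.result = 87; // Placeholder score.
}

// Poll: clients read status and progress without holding a connection open.
function getStatus(id: string): Job | undefined {
  return jobs.get(id);
}

const id = submitScoring(3);
setTimeout(() => console.log(getStatus(id)?.status), 50);
```

Because progress is recorded per dimension, a crashed worker can resume from `completedDimensions` instead of restarting, which is exactly the failure mode the synchronous endpoint could not handle.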
Other redirects followed the same pattern. The initial multi-tenant data model co-located account and user data in ways that would have made row-level security difficult later. The initial integration sync was a single blocking call that would have timed out on large accounts. The initial settings architecture duplicated configuration logic across routes instead of centralizing it in a shared module. None of these were wrong in isolation. All of them would have created significant rework at scale.
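The integration-sync redirect follows the same logic as the scoring one: replace one unbounded call with a loop over bounded pages. The record type and paging function below are hypothetical stand-ins for whatever the real integration API exposes.

```typescript
// Hypothetical record type and page fetcher; the real integration API differs.
interface Row {
  id: number;
}
async function fetchPage(cursor: number, pageSize: number, total: number): Promise<Row[]> {
  const remaining = Math.max(0, total - cursor);
  const n = Math.min(pageSize, remaining);
  return Array.from({ length: n }, (_, i) => ({ id: cursor + i }));
}

// Batched sync: process bounded pages until exhausted, never one giant blocking call.
async function syncAccount(total: number, pageSize = 100): Promise<number> {
  let cursor = 0;
  let synced = 0;
  while (true) {
    const page = await fetchPage(cursor, pageSize, total);
    if (page.length === 0) break;
    // Each page could be committed independently, making the sync resumable
    // from `cursor` after a timeout or crash.
    synced += page.length;
    cursor += page.length;
  }
  return synced;
}

syncAccount(250).then((n) => console.log(n)); // → 250
```

Each iteration stays well under any platform timeout regardless of account size, which is what the single blocking call could not guarantee.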
The monorepo boundary discipline
One of the most consequential early decisions was the monorepo structure with enforced package boundaries. The three packages serve distinct purposes. `packages/shared` is pure TypeScript with zero server dependencies: framework definitions, scoring models, plans, RBAC configuration. `packages/core` is server business logic: the database layer, integration adapters, the scorer, email. `apps/web` is the Next.js application: pages, components, API routes.
The boundary that matters most is the one between client components and `@dacard/core`. Core pulls in native binary dependencies for database access. If a client component imports from core, webpack tries to bundle those native binaries for the browser and the build fails with cryptic errors. The rule is absolute: client components never import from core. Server components and API routes can import from both.
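A boundary like this can also be enforced mechanically rather than by review alone. One common approach is ESLint's `no-restricted-imports` rule; the config below is a sketch that assumes client components live under a `components/` directory, which may not match the actual layout. (Next.js's `server-only` package is a complementary guard: a module containing `import "server-only"` fails the build if client code ever pulls it in.)

```json
{
  "overrides": [
    {
      "files": ["apps/web/components/**/*.tsx"],
      "rules": {
        "no-restricted-imports": [
          "error",
          {
            "patterns": [
              {
                "group": ["@dacard/core", "@dacard/core/*"],
                "message": "Client components must not import from core; load data in a server component or API route instead."
              }
            ]
          }
        ]
      }
    }
  ]
}
```

Turning the rule into a lint error means the boundary no longer depends on anyone, human or AI, remembering it.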
Enforcing this boundary required redirecting Claude Code multiple times in the first two days. The natural tendency is to import what you need from wherever it lives. The discipline of thinking in package boundaries is a human-trained instinct. Once the pattern was established and documented in MEMORY.md, subsequent code generation respected it consistently.
What this means
The 73 TypeScript files are not impressive because they were generated quickly. They are impressive because they represent a coherent, scalable architecture that reflects twenty years of hard-won product engineering judgment. Speed was the tool. Judgment was the product.
Every team that adopts AI-assisted development will eventually face the same question: is the output actually good, or does it just look good? The answer depends almost entirely on whether the humans in the loop understand the difference between technically correct and architecturally sound. That distinction is not something you can prompt your way into knowing. It is the product of experience.
> Experience is not replaced by AI. It becomes the quality function. AI amplifies your judgment, and judgment is the product of years, not prompts.
Darren Card
Founder, Dacard.ai