A standing product commitment. This document is public at dacard.ai/commitments/no-ranking.


The commitment

Dacard will never rank individuals. Not by score, not by contribution, not by dimension, not by any derived metric.

Scores exist to anchor coaching conversations between a team and its own practice. They do not exist to rank teams against other teams, rank people against other people, or feed performance management systems.

This is a product boundary, not a roadmap item. It will not move.


What we will never build

We will not build, ship, enable via API, or allow as a customer-configured feature:

  • Leaderboards of individuals by any Dacard-derived metric.
  • Leaderboards of teams inside a single organization where the comparison is used for performance or compensation.
  • Exports, dashboards, or integrations whose primary purpose is to enable the above.
  • Manager-facing views that surface individual-level attribution of scores to named contributors.
  • API endpoints that return individual-level scoring, contribution shares, or ranked outputs.
  • "Anonymous" rankings where the anonymity is easily reversible through small-team context.
  • Any feature whose clearest use case, on review, is stack ranking.

If a customer asks for any of the above, the answer is no. The answer stays no under commercial pressure. The answer stays no if a larger competitor ships it first.


Why

Scoring systems become what they are used for. The moment a score anchors a performance review or a bonus, the signals feeding it start getting gamed, the score stops measuring anything real, and the coaching value collapses. This is Goodhart's law, and it applies to product operations metrics with the same force it applies everywhere else.

The unit of analysis is the practice, not the person. Cycle Time Control is a property of how a team works, not of any individual on that team. Attributing it to people is a category error. A team's product assessment score says something about the system the team operates inside, which is mostly a leadership and structure question, not an individual contribution question.

The skeptics are right about this one. Every thoughtful product ops voice (Cutler, Cagan, Perri, Torres, Doshi) has warned against vendor tools that enable Taylorist monitoring. We would rather lose the deals that demand ranking than win them and corrode the product.

The data isn't good enough to rank with even if we wanted to. Our published calibration at /science is honest about confidence intervals and dimensions where predictive power is weak. Ranking requires precision we do not have and, by design, will never claim to have.


What we build instead

  • Per-dimension profiles, not composite positions. A team sees where they are strong and where they have room, not a single number that flattens 27 dimensions into a rank.
  • Longitudinal self-comparison. Teams compare against their own past, not against other teams. The meaningful question is "are we getting better at this," not "are we better than them."
  • Cohort context, anonymized. When teams ask "is this normal for a team our size at our stage," we answer with percentile bands drawn from anonymized cohort data. We do not name the teams in the cohort. We do not rank within the band.
  • Coaching that targets systems, not people. DAC's recommendations address team-level practice, process, and structure. They never attribute performance to named individuals, and they never suggest performance actions (reviews, PIPs, promotions).
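To make the cohort-context idea above concrete, here is a minimal sketch of a percentile-band placement. Everything in it is hypothetical illustration, not Dacard's actual API: the bands are deliberately coarse (quartiles), so a team learns roughly where it sits without any within-band ordering or named neighbors being exposed.

```python
from statistics import quantiles

def cohort_band(team_score: float, cohort_scores: list[float]) -> str:
    """Place a team's score into a wide percentile band against an
    anonymized cohort. Returns a band label, never a rank position.
    Hypothetical helper for illustration only."""
    # Quartile cut points computed over the anonymized cohort.
    q1, q2, q3 = quantiles(cohort_scores, n=4)
    if team_score < q1:
        return "bottom quartile"
    if team_score < q2:
        return "second quartile"
    if team_score < q3:
        return "third quartile"
    return "top quartile"

# A team asking "is this normal for teams like us?" gets a band back,
# with no ordering inside the band and no identities in the cohort.
band = cohort_band(62.0, [40.0, 55.0, 58.0, 61.0, 70.0, 75.0, 80.0, 90.0])
```

The design choice is the point: a quartile label answers "is this normal?" while making stack ranking impossible to reconstruct from the output.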

How you can hold us to this

1. It is in the terms of service. Our TOS prohibits using Dacard outputs as inputs to performance management systems. We will refuse to provide data in response to legal process that seeks this use, to the extent the law permits us to refuse.

2. The product enforces it. Our API does not return individual-level attribution. Our exports redact individual contributor data. Our dashboards aggregate to team level and above. These are not settings; they are product constraints.
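The "constraint, not setting" distinction can be illustrated with a sketch. This is not Dacard's real export code; it is a hypothetical shape showing what it means for aggregation to live in the code path itself: the export function drops contributor identifiers and emits only (team, dimension) means, and there is no parameter that turns that off.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScoreRow:
    team_id: str
    contributor_id: str  # exists internally; never leaves this function
    dimension: str
    score: float

def export_team_rows(rows: list[ScoreRow]) -> list[dict]:
    """Hypothetical export path: aggregate to team level and discard
    individual attribution. No flag exists to include contributor_id;
    the redaction is structural, not configurable."""
    buckets: dict[tuple[str, str], list[float]] = {}
    for r in rows:
        buckets.setdefault((r.team_id, r.dimension), []).append(r.score)
    return [
        {"team_id": t, "dimension": d, "score": sum(v) / len(v), "n": len(v)}
        for (t, d), v in buckets.items()
    ]
```

Because the contributor field never appears in the output schema, a downstream dashboard or integration has nothing individual-level to rank, whatever a customer configures.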

3. The commitment is public. This document lives at dacard.ai/commitments/no-ranking and in the repo. If we ever change it, we change it publicly, with reasoning, and we give customers 90 days to leave with their data.

4. We publish violations. If a customer attempts to use Dacard for ranking and we become aware, we document the attempt (anonymized), describe how the product refused, and publish the pattern in our quarterly transparency note. If we ever fail and ship a feature that enables ranking, we document that too.


What this commitment does not prevent

  • Aggregated team-level benchmarking for coaching purposes.
  • Executive views of org-wide dimension profiles for strategy and resourcing decisions.
  • Cohort reports that show a team's percentile against similar teams, without naming those teams.
  • Customer-built internal tools that query our API, subject to the TOS restriction on performance management use.

The line is clear: we measure the practice of product work to help teams improve it. We do not measure people to rank them.


Signed on behalf of Dacard, 2026-04-24.

Revisions, if any, will be logged here with date, reason, and 90-day notice to customers.
