# DORA & Accelerate Principles
Manual chapter for measuring and improving software delivery performance with DORA.
Source: content/manual/01-dora-accelerate/index.md
DORA metrics give leaders a common language for delivery performance. They stay actionable because each metric has a crisp definition, a trusted data pipeline, and a playbook that improves outcomes. Treat this chapter as the north star for deciding what to instrument next and which improvement program deserves investment.
## Why leadership should care
- Customer value delivery: Frequent, low-risk releases keep value flowing and tighten discovery loops.
- Engineer experience: Shared dashboards end arguments about “velocity” and point directly at system constraints.
- Executive alignment: DORA metrics make investment cases obvious—slow lead time points at platform gaps, high failure rate highlights quality debt.
## Metrics at a glance
| Metric | Definition | Direction of improvement | Fastest diagnostic |
|---|---|---|---|
| Deployment frequency | How often production receives value | Higher | Release calendar annotated with change type |
| Lead time for changes | Commit to running in production | Lower | Value stream map of tooling and approvals |
| Change failure rate | % of releases that cause incidents | Lower | Incident register tied to deploy IDs |
| Mean time to restore (MTTR) | Duration from incident start to resolution | Lower | On-call paging timelines, observability alert history |
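Once deploy and incident events are linked, all four metrics reduce to simple arithmetic. A minimal sketch, assuming a hypothetical event schema (the field names below are illustrative, not a standard):

```python
from datetime import datetime

# Hypothetical event records; field names are assumptions, not a standard schema.
deploys = [
    {"id": "d1", "committed_at": datetime(2024, 5, 1, 9), "deployed_at": datetime(2024, 5, 1, 15)},
    {"id": "d2", "committed_at": datetime(2024, 5, 2, 10), "deployed_at": datetime(2024, 5, 2, 11)},
    {"id": "d3", "committed_at": datetime(2024, 5, 3, 8), "deployed_at": datetime(2024, 5, 4, 8)},
]
incidents = [
    {"deploy_id": "d3", "started_at": datetime(2024, 5, 4, 9), "resolved_at": datetime(2024, 5, 4, 12)},
]

WINDOW_DAYS = 7  # reporting window for this batch of events

# Deployment frequency: deploys per day over the window.
deployment_frequency = len(deploys) / WINDOW_DAYS

# Lead time for changes: mean hours from commit to running in production.
lead_time_hours = sum(
    (d["deployed_at"] - d["committed_at"]).total_seconds() / 3600 for d in deploys
) / len(deploys)

# Change failure rate: share of deploys linked to at least one incident.
failed_deploy_ids = {i["deploy_id"] for i in incidents}
change_failure_rate = len(failed_deploy_ids) / len(deploys)

# MTTR: mean hours from incident start to resolution.
mttr_hours = sum(
    (i["resolved_at"] - i["started_at"]).total_seconds() / 3600 for i in incidents
) / len(incidents)
```

In production rollups you would typically prefer medians or percentiles over means and bucket by service and week, but the core calculation stays this simple once the deploy-to-incident link is trustworthy.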
## Implementation roadmap
- Agree on definitions and scope. Decide what counts as a deploy, which environments matter, and how incidents are classified. Publish the contract in your delivery handbook or internal wiki.
- Instrument the value stream. Configure SCM, CI/CD, and incident tooling to emit structured events (webhooks, APIs, data exports). Prefer automation—manual spreadsheets erode trust quickly.
- Automate rollups. Use pipelines (GitHub Actions, Dagster, dbt, or your BI stack) to aggregate metrics nightly, tag services and teams, and surface anomalies.
- Expose shared dashboards. Grafana, Looker, or even Google Sheets are fine—what matters is a single source of truth with time-series trends and team filters.
- Run regular reviews. Incorporate metrics into engineering ops reviews, executive updates, and team retros. Without cadence, metrics become shelfware.
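The rollup step above is, at its core, a group-by with an anomaly threshold, whatever pipeline tool runs it. A sketch under assumed inputs (the event schema and the 15% failure-rate guardrail are illustrative, not prescribed values):

```python
from collections import defaultdict

# Hypothetical weekly events emitted by SCM/CI webhooks; schema is an assumption.
events = [
    {"team": "payments", "week": "2024-W18", "deploys": 12, "failures": 1},
    {"team": "payments", "week": "2024-W19", "deploys": 3, "failures": 2},
    {"team": "search", "week": "2024-W18", "deploys": 8, "failures": 0},
]

# Aggregate per (team, week) so dashboards can filter by either dimension.
rollup = defaultdict(lambda: {"deploys": 0, "failures": 0})
for e in events:
    key = (e["team"], e["week"])
    rollup[key]["deploys"] += e["deploys"]
    rollup[key]["failures"] += e["failures"]

# Flag weeks whose failure rate exceeds a 15% guardrail (threshold is illustrative).
anomalies = [
    key
    for key, agg in rollup.items()
    if agg["deploys"] and agg["failures"] / agg["deploys"] > 0.15
]
```

Running this nightly and annotating the flagged weeks on the shared dashboard gives reviews a concrete starting point instead of a wall of raw numbers.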
Pair this roadmap with playbooks/measure-dora-metrics/checklist.md for a tactical task list.
## Guardrails and anti-patterns
- Measure in aggregate; do not weaponize individuals. The four metrics describe system health, not personal performance.
- Avoid vanity targets (“four deploys per day”) unless paired with clear hypotheses and guardrails.
- Resist redefining metrics when numbers look bad—fix root causes instead.
- Beware noisy data: reconcile incidents without deploy links, normalize commit emails, and log schema changes that affect calculations.
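Two of the noisy-data cleanups above, normalizing commit emails and surfacing incidents without deploy links, can be sketched in a few lines. The field names are assumptions for illustration:

```python
def normalize_email(email: str) -> str:
    """Lowercase and strip plus-aliases so one author maps to one identity."""
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0]
    return f"{local}@{domain}"

# "Dev+ci@Example.com" and "dev@example.com" collapse to a single identity.
commits = [{"author": "Dev+ci@Example.com"}, {"author": "dev@example.com"}]
authors = {normalize_email(c["author"]) for c in commits}

# Incidents without a deploy link cannot feed change failure rate or MTTR
# reliably; queue them for manual reconciliation instead of silently dropping them.
incidents = [{"id": "i1", "deploy_id": "d42"}, {"id": "i2", "deploy_id": None}]
unlinked = [i["id"] for i in incidents if not i["deploy_id"]]
```

Logging how many records each cleanup touches per run is also worth the effort: a sudden jump in unlinked incidents is usually a schema change upstream, exactly the kind of event the guardrail above says to record.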
## Choosing the right playbook
| Signal | Leading diagnosis | Recommended playbook |
|---|---|---|
| Lead time trending upward | Large batch sizes, manual approvals | playbooks/shortening-lead-time/index.md |
| Deployment frequency stalled | Bottlenecked pipelines, lack of self-service | playbooks/improving-deployment-frequency/index.md |
| Change failure rate spiking | Weak automated testing, brittle rollbacks | playbooks/reducing-change-failure-rate/index.md |
| MTTR above target | Slow detection, unclear runbooks | playbooks/accelerating-mttr/index.md |
| No trustworthy metrics | Tooling gaps, unclear definitions | playbooks/measure-dora-metrics/index.md |
Each playbook includes a matching checklist to keep remediation measurable.
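Teams that integrate this table into a service catalog often encode it as data so tooling can suggest playbooks automatically. A hedged sketch; the signal names and routing logic are assumptions, not part of the playbooks themselves:

```python
# Illustrative signal-to-playbook routing mirroring the table above.
PLAYBOOKS = {
    "lead_time_rising": "playbooks/shortening-lead-time/index.md",
    "deploy_frequency_stalled": "playbooks/improving-deployment-frequency/index.md",
    "cfr_spiking": "playbooks/reducing-change-failure-rate/index.md",
    "mttr_high": "playbooks/accelerating-mttr/index.md",
    "no_trusted_metrics": "playbooks/measure-dora-metrics/index.md",
}


def recommend(signals: list[str]) -> list[str]:
    """Return playbook paths for the active signals, in table order."""
    return [path for sig, path in PLAYBOOKS.items() if sig in signals]
```

For example, `recommend(["cfr_spiking", "mttr_high"])` returns the reducing-change-failure-rate and accelerating-mttr playbooks, which a catalog page can render next to the service's metric trends.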
## Operating cadence checklist
- Weekly: Review dashboard, capture improvement experiments, and annotate anomalies.
- Monthly: Pair DORA trends with qualitative DevEx findings (see playbooks/measuring-devex/index.md).
- Quarterly: Validate definitions, audit data quality, and recalibrate targets based on business outcomes.
## Tooling and templates
- playbooks/measure-dora-metrics/checklist.md — end-to-end instrumentation tasks.
- playbooks/improving-deployment-frequency/checklist.md & playbooks/shortening-lead-time/checklist.md — cadence improvements.
- exports/dashboards — example automation pipelines that showcase how instrumentation rolls up into decision-ready views; your platform team can grant access or adapt them to your tooling.
- Maintain a shared delivery glossary (handbook, wiki) for canonical terminology.
## Reading list
- “Accelerate” by Forsgren, Humble, and Kim (primary research study).
- Latest DORA State of DevOps report for industry benchmarks.
- Charity Majors on MTTR and ownership for cultural framing.
## Related assets
- Glossary entries for health gates and release train add shared vocabulary.
- notes/2025-10-05-observability-gotchas.md feeds into MTTR remediation plans.
## Deep dive chapters
- Definitions & Data Contracts
- Instrumentation Pipeline
- Dashboards & Review Cadence
- Improvement Experiments
- Anti-Patterns & Guardrails
- Service Catalog Integration
