# DORA Dashboard

The DORA framework (DevOps Research and Assessment) is a set of metrics that help teams measure software delivery performance. Integrated into your Engineering Operations Platform, DORA metrics give engineering teams clear insights into how fast, reliable, and efficient their development practices are. This empowers teams to track progress, identify bottlenecks, and drive continuous improvement — all within the same place they manage services and deploy code. Read more about these metrics in the [dora.dev guide](https://dora.dev/guides/dora-metrics-four-keys/).

Use the DORA Dashboard in Cortex to evaluate the speed and stability of your software delivery process. Visualize your team’s performance across the delivery lifecycle and get a clear view of cycle time, deployment frequency, change failure rate, and time to resolution.

<figure><img src="https://826863033-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJW7pYRxS4dHS3Hv6wxve%2Fuploads%2Fgit-blob-edd36af8d5d09775f68e7b2d446fa8da22defde5%2Fdora-dashboard.jpg?alt=media" alt="The DORA dashboard shows graphs for the key DORA metrics."><figcaption></figcaption></figure>

{% hint style="success" %}
Looking for additional resources on enforcing DORA standards in Cortex?

* See [Solutions: DORA Metrics](https://app.gitbook.com/s/7F1UMLUuX7dkA693DijO/dora) for guidance on using Cortex features to improve your DORA metrics.
* Check out the [Cortex Academy "Operationalizing DORA Metrics" course](https://academy.cortex.io/courses/cortex-solutions-operationalizing-dora-metrics), available to all Cortex customers and POVs.
* See Cortex's "Operationalizing DORA Metrics" webinar with Google Cloud's DORA program leader, Nathen Harvey, and learn about the key takeaways [in our blog](https://www.cortex.io/post/from-insight-to-impact-key-takeaways-from-our-dora-webinar-with-nathen-harvey).
{% endhint %}

## Using the DORA Dashboard

To view the dashboard, click **Eng Intelligence > DORA Dashboard** in the main nav.

### Adjust the time range

By default, the dashboard displays data from the last month. To change the date range, click **Last month** and select a new date range.

### Apply a filter option

For each chart, you can apply filters for group, owner, repository, user label, and more. To select filters, click **Filter** in the upper right corner of a graph.

## Metric visualizations

You can select a different operation for the Cycle time and Time to resolution graphs. Options include average, sum, max, min, P95, and median. Both graphs display the average by default.

### Cycle time

Cycle time represents the time it takes for a single PR to go through the entire coding process. Shorter cycle times indicate a more agile team that is able to quickly respond to changing needs.

**Calculation**: The time from the first commit on a PR to when the PR is merged.

<div align="left"><figure><img src="https://826863033-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJW7pYRxS4dHS3Hv6wxve%2Fuploads%2Fgit-blob-16f4e881fd1b87af44de567a5e1245311fbe46de%2Fcycle-time.jpg?alt=media" alt="Cycle time represents the time it takes for a single PR to go through the entire coding process."><figcaption></figcaption></figure></div>

**Best practice**: Aim for lower cycle times to ensure a faster feedback loop and reduced context switching. Rather than benchmarking the overall cycle time, set benchmarks for the individual parts of the cycle (time to open, time to approve, time to first review, time to merge).

Note: This metric is not supported for [Azure DevOps](https://docs.cortex.io/ingesting-data-into-cortex/integrations/azuredevops) or [Bitbucket](https://docs.cortex.io/ingesting-data-into-cortex/integrations/bitbucket).
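Cortex derives cycle time from your Git provider's PR data. As a minimal sketch of the arithmetic (hypothetical timestamps, not Cortex's implementation):

```python
from datetime import datetime, timedelta

# Hypothetical PR records: timestamp of the first commit and of the merge.
prs = [
    {"first_commit_at": datetime(2024, 5, 1, 9, 0), "merged_at": datetime(2024, 5, 2, 15, 0)},
    {"first_commit_at": datetime(2024, 5, 3, 10, 0), "merged_at": datetime(2024, 5, 3, 18, 0)},
]

def cycle_time(pr: dict) -> timedelta:
    """Time from the first commit on a PR until the PR is merged."""
    return pr["merged_at"] - pr["first_commit_at"]

times = [cycle_time(pr) for pr in prs]
avg_cycle_time = sum(times, timedelta()) / len(times)
print(avg_cycle_time)  # 19:00:00 -- mean of 30h and 8h for these samples
```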

### Deployment frequency

Deployment frequency measures how often your team successfully releases code to production, serving as a key indicator of your delivery velocity and operational maturity. This DORA metric reflects your team's ability to deliver value continuously and respond quickly to market demands.

**Calculation**: The number of deployments over a given period of time.

<div align="left"><figure><img src="https://826863033-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJW7pYRxS4dHS3Hv6wxve%2Fuploads%2Fgit-blob-2d8106c50a62ff98250fca9b3a1112e58c7def4b%2Fdeployment-frequency.jpg?alt=media" alt="Deployment frequency represents how often code is deployed to a production environment"><figcaption></figcaption></figure></div>

**Best practice**: Depending on your organization, a successful benchmark could be multiple deployments per day or per week.
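Counting deployments per period is a simple grouping over deployment timestamps. A rough sketch with hypothetical dates (not Cortex's implementation), bucketing by ISO week:

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment dates.
deployments = [
    date(2024, 5, 6), date(2024, 5, 7), date(2024, 5, 9),  # same ISO week
    date(2024, 5, 14),                                      # the following week
]

# Group deployments by (ISO year, ISO week) to get a per-week frequency.
per_week = Counter(d.isocalendar()[:2] for d in deployments)

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week}: {count} deployment(s)")
```

The same grouping works per day or per month, depending on the benchmark your organization targets.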

### Change failure rate

Change failure rate measures the percentage of deployments that result in production failures, serving as a critical indicator of deployment stability and code quality. This DORA metric reveals how often your releases introduce bugs, outages, or performance issues that impact users.

**Calculation**: `Number of rollbacks / number of deployments created`.

<div align="left"><figure><img src="https://826863033-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJW7pYRxS4dHS3Hv6wxve%2Fuploads%2Fgit-blob-1dc8701b8f0dfab799dd3f22a76d5f0cf94e8e89%2Fdora-change-failure.jpg?alt=media" alt="The change failure rate graph shows the percentage of deployments that result in a failure in production."><figcaption></figcaption></figure></div>

**Best practice**: Aim for a failure rate less than 15%.
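The calculation above is a simple ratio. As a small sketch with hypothetical counts (not Cortex's implementation):

```python
def change_failure_rate(rollbacks: int, deployments: int) -> float:
    """Change failure rate = number of rollbacks / number of deployments created."""
    if deployments == 0:
        raise ValueError("no deployments in the selected period")
    return rollbacks / deployments

# Hypothetical period: 40 deployments, 3 of which were rolled back.
rate = change_failure_rate(rollbacks=3, deployments=40)
print(f"{rate:.1%}")  # 7.5% -- under the 15% benchmark
```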

### Time to resolution (MTTR)

Time to resolution measures how quickly your team recovers from production failures, reflecting your incident response capabilities and system resilience. Also known as mean time to recovery (MTTR), this metric indicates how well-prepared your team is to handle inevitable production issues.

**Calculation**: `Incident resolution time - incident opened time`.

<div align="left"><figure><img src="https://826863033-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJW7pYRxS4dHS3Hv6wxve%2Fuploads%2Fgit-blob-c1c3b1b2480816f2c86d94100445b4e6ddf23a62%2Fdora-mttr.jpg?alt=media" alt="The time to resolution graph shows how long it takes a team to recover from failure in production."><figcaption></figcaption></figure></div>

**Best practice**: This benchmark may differ depending on how critical a system is. For less critical systems, aim for a measure of less than 1 day. For critical systems, aim for under 1 hour.
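The calculation above subtracts the opened timestamp from the resolution timestamp for each incident, then averages. A minimal sketch with hypothetical incident records (not Cortex's implementation):

```python
from datetime import datetime, timedelta

# Hypothetical incident records with opened/resolved timestamps.
incidents = [
    {"opened_at": datetime(2024, 5, 1, 14, 0), "resolved_at": datetime(2024, 5, 1, 14, 45)},
    {"opened_at": datetime(2024, 5, 8, 9, 0),  "resolved_at": datetime(2024, 5, 8, 11, 15)},
]

def resolution_time(incident: dict) -> timedelta:
    """Resolution time = incident resolution time - incident opened time."""
    return incident["resolved_at"] - incident["opened_at"]

durations = [resolution_time(i) for i in incidents]
mttr = sum(durations, timedelta()) / len(durations)  # mean time to recovery
print(mttr)  # 1:30:00 -- mean of 45 min and 135 min for these samples
```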

