Metrics Explorer

Metrics Explorer enables you to analyze metric trends over time and drill into specific data points for detailed investigation. Use this tool to understand patterns in your development process and identify areas for improvement.

Accessing Metrics Explorer

To open Metrics Explorer, click Eng Intelligence > Metrics Explorer in the main navigation.

Using Metrics Explorer

Configure a graph

  1. On the Metrics Explorer page, click the metric name in the upper left corner. By default, Cycle time is displayed.

    • A modal will appear.

  2. On the left side of the modal, select a data point. On the right side, depending on which metric you choose, you can select an operation.

  3. At the bottom of the modal, click View metric.

A graph of the metric is displayed. By default, the data uses a time range of the last 7 days, but you can select a different time range.

Below the graph, see an overview of metrics that can be segmented by team, author, repository, entity, and more.

Click any of the metric points at the bottom of the page to drill in and see the data behind the metric.

Segment the metrics

Click the Group by dropdown below the graph to choose a different way to segment the metrics.

Metrics segmented by team are based on the individual users within that team. For data to appear, teams must have members and their identity mappings must be configured.

You can group by:

  • Person: Author, author team, author user label, reviewer, reviewer team, reviewer user label

  • Entity: Group, entity

  • Pull Request: Repository, label

  • Owner: Team owner, individual owner

Filter the graph

You can filter a graph by time range, time attribute, team, author, repository, and more.

Filter by time range

Click the time range in the upper right corner of the graph to open the date picker. Select a new time range and configure the dates; the graph automatically reloads as you make your selection.

To change the grouping of the time range in the graph, click Display in the upper right corner. You can choose whether to display the data grouped by day, week, or month.

Filter by time attribute

For version control and PR-related metrics, you can filter by approval date, close/merge date, first commit date, first review date, and open date.

Click the time attribute filter at the top of the graph, to the left of the date range filter.

Filter by team, author, repository, entity type, label, and more

  1. Click Filter in the upper right corner of the graph. You can configure a single filter or a combination of filters.

  2. When you are done adding filters, click Apply at the bottom of the filter modal.

Sort the columns

You can sort the data in the table below the graph. Click Sort in the upper right corner of the table, then select an option.

Share a report

After selecting a data point and applying filters, you can share the browser URL with other people who have access to your Cortex workspace. The URL query parameters include timestamps, so the shared Metrics Explorer page will reflect the same results across different timezones.
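
To see why absolute timestamps keep shared results consistent, here is a minimal Python sketch that builds a shareable URL from UTC epoch timestamps. The URL and query parameter names are placeholders for illustration only, not Cortex's actual URL scheme.

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

# Placeholder values; the real Metrics Explorer URL and parameter names may differ.
start = datetime(2024, 5, 1, tzinfo=timezone.utc)
end = datetime(2024, 5, 8, tzinfo=timezone.utc)

params = urlencode({
    "startTimestamp": int(start.timestamp()),  # epoch seconds identify an absolute instant,
    "endTimestamp": int(end.timestamp()),      # so the range is the same in every timezone
})
print(f"https://<your-workspace-url>/metrics-explorer?{params}")
```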

Metrics available in the Metrics Explorer

Note that metrics in Metrics Explorer sync on a scheduled basis, updating every 4 hours.

The sections below describe how each metric is calculated and the best practice for measuring success.

Deployment metrics

Deploy metrics are pulled from the Cortex deploys API.

Change failure rate

The percentage of deployments that cause a failure in production.

Calculation: Number of rollbacks / number of deployments created.

Best practice: Aim to reduce your change failure rate over time. A rate below 15% aligns with DORA's elite benchmarks and indicates strong software delivery performance.
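
As a rough illustration of this calculation (not Cortex's implementation), the sketch below computes the rate from rollback and deployment counts; the sample numbers are invented.

```python
def change_failure_rate(rollbacks: int, deployments: int) -> float:
    """Percentage of deployments that resulted in a rollback."""
    if deployments == 0:
        return 0.0
    return 100.0 * rollbacks / deployments

# Invented counts: 3 rollbacks across 40 deployments -> 7.5%,
# comfortably under the ~15% benchmark mentioned above.
print(change_failure_rate(rollbacks=3, deployments=40))
```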

Deployment frequency

The number of deployments over a given period of time.

Best practice: Depending on your organization, a successful benchmark could be multiple deployments per day or per week.
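
For illustration only, the sketch below counts deployments per ISO week from a list of deployment timestamps; the timestamps are invented, and the bucketing mirrors the day/week/month display options described earlier.

```python
from collections import Counter
from datetime import datetime

# Invented deployment timestamps.
deploys = [
    datetime(2024, 5, 6, 9, 30),
    datetime(2024, 5, 7, 14, 0),
    datetime(2024, 5, 13, 11, 15),
]

# Deployment frequency per ISO week: count deploys sharing a (year, week) key.
per_week = Counter(d.isocalendar()[:2] for d in deploys)
for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployment(s)")
```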

Rollback frequency

The number of rollbacks over a given period of time.

Best practice: While there isn't an explicit benchmark, you should aim to minimize rollback rates. A low rollback rate generally aligns with a low change failure rate.

Incident metrics

Incident metrics are pulled from PagerDuty.

Incident frequency

The number of incidents over a given period of time.

When you drill in to metric points below the graph, view data per incident:

  • Incident title

  • Status

  • Incident URL

  • Date triggered

  • Date resolved

  • Urgency

  • Time to resolution

Best practice: There is no universal benchmark. It is recommended to track trends and establish baselines within your organization.

Time to resolution

The amount of time it takes for an incident to be resolved.

Calculation: Incident resolution time - incident opened time.
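
As a minimal sketch of this calculation (field names and values are assumptions, not the PagerDuty schema), resolution time is the difference between the two timestamps:

```python
from datetime import datetime, timezone

# Hypothetical incident timestamps for illustration.
triggered_at = datetime(2024, 5, 6, 9, 0, tzinfo=timezone.utc)
resolved_at = datetime(2024, 5, 6, 11, 30, tzinfo=timezone.utc)

time_to_resolution = resolved_at - triggered_at
print(time_to_resolution)  # 2:30:00
```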

When you drill in to metric points below the graph, view data per incident:

  • Incident title

  • Status

  • Incident URL

  • Date triggered

  • Date resolved

  • Urgency

  • Time to resolution

Best practice: These benchmarks may differ depending on how critical a system is. For less critical systems, aim for a measure of less than 1 day. For critical systems, aim for under 1 hour.

Project management metrics

Project management metrics are pulled from Jira.

Story points completed

The sum of story points completed in a given time period.

When you drill in to metric points below the graph, view data per work item:

  • External key

  • Work item assignee

  • Assignee email

  • Work item title

  • Work item status

  • Work item type

  • Created date

  • Resolved date

  • Priority

  • Labels

  • Components

  • Story points completed

Best practice: Establish a baseline per team, as story point values can be unique to each team. Use this metric to understand capacity trends.

Work item lead time

The time it takes from when a work item is created to when the work item is completed.

Calculation: Work item resolved date – Work item created date.
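
To illustrate the idea (not the product's exact implementation), the sketch below computes lead time per work item and a median across items; the dates are invented.

```python
from datetime import datetime
from statistics import median

# Invented (created, resolved) date pairs for three work items.
work_items = [
    (datetime(2024, 5, 1), datetime(2024, 5, 3)),
    (datetime(2024, 5, 2), datetime(2024, 5, 9)),
    (datetime(2024, 5, 4), datetime(2024, 5, 6)),
]

lead_times = [resolved - created for created, resolved in work_items]
print("median lead time:", median(lead_times))  # 2 days for this sample
```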

When you drill in to metric points below the graph, view data per work item:

  • External key

  • Work item assignee

  • Assignee email

  • Work item title

  • Work item status

  • Work item type

  • Created date

  • Resolved date

  • Priority

  • Labels

  • Components

  • Status category

  • Work item lead time

Best practice: Lower lead times indicate a smoother process. Track trends to identify process inefficiencies and improve throughput.

Work items completed

The number of work items completed over a given period of time.

When you drill in to metric points below the graph, view data per work item:

  • External key

  • Work item assignee

  • Assignee email

  • Work item title

  • Work item status

  • Work item type

  • Created date

  • Resolved date

  • Priority

  • Labels

  • Components

Best practice: Review this measure alongside how many story points have been completed; this enables you to balance both quantity and effort, ensuring teams aren't favoring lower value tasks in exchange for higher numbers of items completed.

Work items created

The number of work items created over a given period of time.

When you drill in to metric points below the graph, view data per work item:

  • External key

  • Work item assignee

  • Assignee email

  • Work item title

  • Work item status

  • Work item type

  • Created date

  • Resolved date

  • Priority

  • Labels

  • Components

Best practice: Monitor this metric alongside delivery rates. If items are created faster than completed, it signals queue backlogs.
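
As a rough sketch of that comparison (the weekly counts are invented), you can subtract completed from created items per week to spot a growing backlog:

```python
# Invented weekly counts of work items created and completed.
weekly = [
    {"week": "2024-W18", "created": 25, "completed": 22},
    {"week": "2024-W19", "created": 30, "completed": 21},
    {"week": "2024-W20", "created": 28, "completed": 24},
]

# A persistently positive net value means the backlog is growing.
for row in weekly:
    net = row["created"] - row["completed"]
    print(f"{row['week']}: net backlog change {net:+d}")
```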

Version control metrics

Version control metrics are pulled from Azure DevOps, Bitbucket, GitHub, and GitLab.

Note that any changes that rewrite Git history (such as a rebase then a force push) can impact metric timestamps or calculations.

Cycle time

The time from the first commit on a PR to when the PR is merged. This represents the time it takes for a single PR to go through the entire coding process.

Note: This metric is not supported for Azure DevOps or Bitbucket.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • First commit date

  • Date closed

  • Cycle time

Best practice: Aim for lower cycle times to ensure a faster feedback loop and reduced context switching. Rather than benchmarking the overall cycle time, set benchmarks for the individual parts of the cycle (time to open, time to approve, time to first review, time to merge).
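
To make that breakdown concrete, here is a hedged sketch that derives the overall cycle time and the four sub-intervals from a single PR's timestamps; the variable names and dates are assumptions for illustration, not Cortex's data model.

```python
from datetime import datetime, timezone

# Hypothetical PR timestamps (names and values are illustrative).
first_commit = datetime(2024, 5, 6, 9, 0, tzinfo=timezone.utc)
opened = datetime(2024, 5, 6, 15, 0, tzinfo=timezone.utc)
first_review = datetime(2024, 5, 7, 10, 0, tzinfo=timezone.utc)
approved = datetime(2024, 5, 7, 16, 0, tzinfo=timezone.utc)
merged = datetime(2024, 5, 7, 17, 0, tzinfo=timezone.utc)

breakdown = {
    "time to open": opened - first_commit,          # coding
    "time to first review": first_review - opened,  # waiting for a reviewer
    "time to approve": approved - first_review,     # under review
    "time to merge": merged - approved,             # approved but not yet merged
}
cycle_time = merged - first_commit  # first commit -> merge

for name, value in breakdown.items():
    print(f"{name}: {value}")
print("cycle time:", cycle_time)
```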

Closed PRs

The number of PRs closed in a given time period.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date closed

Best practice: A high ratio of merged-to-closed PRs signals an effective review cycle.

Merged PRs

The number of PRs merged in a given time period.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date closed

Best practice: A high ratio of merged-to-closed PRs signals an effective review cycle.

Number of comments per PR

The number of comments on a pull request.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • First commit date

  • Number of comments per PR

Best practice: This measure indicates review depth and collaboration. A lower number may signal superficial reviews.

Number of unique PR authors

The number of unique PR authors.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • First commit date

  • Date opened

  • Time to open

    • This is the time between the first commit date and the date opened.

Best practice: A larger number across projects can signal distributed ownership, while a consistently low number can point to bottlenecks or team burnout.

Open PRs

The number of PRs opened in a given time period.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date closed

Best practice: Persistent backlog can indicate process inefficiencies, such as a slow review process.

PR reviews count

The number of reviews on a PR.

When you drill in to metric points below the graph, view more data:

  • PR name

  • Reviewer

  • Review date

Best practice: A higher number can indicate complex changes or low initial quality. A lower number could indicate approvals without thorough review and validation.

PR size

The number of lines of code modified in a PR.

Note: This metric is not supported for Azure DevOps or Bitbucket.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Number of lines added

  • Number of lines deleted

  • PR size

Best practice: Smaller PRs lead to faster reviews, fewer mistakes, and increased velocity. Aim for less than 400 lines, but adjust this benchmark as needed to improve review quality and velocity.

Success rate

The percentage of PRs that are opened and eventually merged in a given time frame.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date opened

  • Date closed

Best practice: Higher success rates can indicate better quality code and reviews, but note that it is also important to understand the reasoning when a PR is rejected.

Time to approve

The time from the first review to the time it’s approved. This represents how long engineers are spending reviewing code. If the first review is an approval, this time will be 0 as the timestamps will be the same.
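
A minimal sketch of that rule with assumed timestamps: when the first review is itself the approval, the two timestamps coincide and the result is zero.

```python
from datetime import datetime, timedelta, timezone

def time_to_approve(first_review: datetime, approved: datetime) -> timedelta:
    """Time from the first review to approval; zero when the first review is the approval."""
    return approved - first_review

# Hypothetical case where the first review is the approval itself.
ts = datetime(2024, 5, 7, 10, 0, tzinfo=timezone.utc)
print(time_to_approve(first_review=ts, approved=ts))  # 0:00:00
```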

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Review date

  • Approval date

  • Time to approve

    • This is the time between the review date and the approval date.

Best practice: It is recommended to keep review time under 24 hours to maintain velocity and avoid a backlog of PRs.

Time to first review

The time from when a PR is opened to when it receives its first review (comment or approval). This represents how long a PR waits idle before someone starts reviewing it.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date opened

  • First review time

  • Time to first review

    • This is the time between the open date and the first review time.

Best practice: It is recommended to target a first review within 24 hours to ensure prompt feedback and smooth throughput.

Time to merge

The time from when the PR is approved to when it’s merged.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Approval date

  • Date closed

  • Time to merge

    • This is the time between the approval date and the date closed.

Best practice: There is not an explicit benchmark for this metric, but note that reducing this time to under an hour boosts code velocity. Using a tool that enforces automated merges can cut down delays.

Time to open

The time it takes from the first commit on a PR until the PR is opened. This represents the time spent coding.

Note: This metric is not supported for Azure DevOps or Bitbucket.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • First commit date

  • Date opened

  • Time to open

    • This is the time between the first commit date and the date opened.

Best practice: There is not an explicit benchmark for this metric, but improving time to open depends on efficient triage of work; focus on minimizing idle time before work starts.
