Metrics Explorer

Metrics Explorer enables you to analyze metric trends over time and drill into specific data points for detailed investigation. Use this tool to understand patterns in your development process and identify areas for improvement.

You can save your favorite Metrics Explorer views as report modules, allowing you to revisit key metrics without needing to reapply filters or display settings. Saved report modules make it easy to monitor key metrics, like Cycle Time for a particular team or over a given time period, on a consistent basis, and they can be used to build Custom Dashboards.

Access Metrics Explorer

To view Metrics Explorer, click Eng Intelligence > Metrics Explorer in the main nav.

View saved report modules

On the left side of Metrics Explorer, see a list of all saved report modules:

Saved modules in Metrics Explorer appear on the left.

Using Metrics Explorer

Step 1: Configure a report module

  1. On the Metrics Explorer page, click the metric name in the upper left corner. By default, Cycle time is displayed.

    • A modal will appear.

  2. On the left side of the modal, select a metric. On the right side, depending on which metric you choose, you can select an operation.

  3. At the bottom of the modal, click View metric.

    • A graph of the metric is displayed. By default the data uses a time range of the last 30 days, but you can select a different time range.

    • Below the graph, see an overview of metrics that can be segmented by team, author, repository, entity, and more. Click into any of the metric points at the bottom of the page to drill in and see the data behind the metric.

The Metrics Explorer displays a graph, and metrics grouped by repository at the bottom.

Next, you can optionally segment the metrics and apply filters before saving the module.

Step 2: Segment and filter the metrics

You can segment the metrics by person, entity, PR, or owner, and you can filter a graph by time range, teams, author, and repository. You can also sort the columns.

Segment metrics

Click the Group by dropdown below the graph to choose a different way to segment the metrics.

Group the metrics by different dimensions

The metrics segmented by team are based on the individual users within that team. For data to appear, a team must have members and its identity mappings must be configured.

See the full list of segmentation options per category below.

Filter by time range

Click the time range in the upper right corner of the graph. Select a new time range and configure the dates. The graph will automatically reload as you select a time range.

Click the time range in the upper right to open the date picker modal.

To change the grouping of the time range in the graph, click Display in the upper right corner. You can choose whether to display the data grouped by day, week, or month.
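
Conceptually, the Display setting re-buckets the same underlying data points by calendar period. Below is a minimal sketch of that kind of bucketing (illustrative only, not Cortex's internal logic):

```python
from collections import Counter
from datetime import date, datetime

def bucket_key(ts: datetime, granularity: str) -> date:
    """Map a timestamp to the first day of its day, week, or month bucket."""
    if granularity == "day":
        return ts.date()
    if granularity == "week":
        iso = ts.isocalendar()  # (year, week, weekday); ISO weeks start Monday
        return date.fromisocalendar(iso[0], iso[1], 1)
    if granularity == "month":
        return ts.date().replace(day=1)
    raise ValueError(f"unknown granularity: {granularity}")

# Example: count merged PRs per week from a list of merge timestamps.
merges = [datetime(2024, 5, 6), datetime(2024, 5, 7), datetime(2024, 5, 14)]
print(Counter(bucket_key(ts, "week") for ts in merges))
# Counter({datetime.date(2024, 5, 6): 2, datetime.date(2024, 5, 13): 1})
```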

Filter by time attribute

For version control and PR-related metrics, you can filter by approval date, close/merge date, first commit date, first review date, and open date.

Click into the time attribute filter, to the left of the date range filter:

Click the time attribute field at the top of the graph, to the left of the date range.

Filter by team, author, repository, entity type, label, and more

  1. Click Filter in the upper right corner of the graph. You can configure a single filter or a combination of filters.

  2. When you are done adding filters, click Apply at the bottom of the filter modal.

See the full list of filters per metric category and their definitions below.

Sort the columns

You can sort the data below the graph. Click Sort, then select an option.

Click Sort in the upper right corner of the data table, below the graph.

Step 3: Save the report module

Once you've configured a view you'd like to revisit with a specific metric, filters, and time ranges, you can save it as a report module:

  • While viewing a module, click Save in the upper right corner of the page. Enter a name and description for the module.

    In the upper right corner, click Save to save your metric module.

Managing saved report modules

After saving, your report will appear in the module list in Metrics Explorer, where you can:

  • Add it to a Custom Dashboard

  • Reopen it at any time without reconfiguring filters

  • Rename, update metric/filter settings, and re-save as needed

  • Create a duplicate of the module: use "Save As" to copy the settings as a new starting point

  • Share a link with other users

  • Delete when no longer needed

All saved modules and changes to existing modules will be shared across all of your Eng Intel team members to encourage transparency and collaboration on the metrics that matter to your org.

Share a report module

After selecting a data point and applying filters, you can share the browser URL with other people who have access to your Cortex workspace. The URL query parameters include timestamps, so the shared Metrics Explorer page will reflect the same results across different timezones.
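
For example, pinning the window to absolute epoch timestamps, rather than a relative range like "last 30 days", is what keeps a shared view stable. A rough sketch of the idea; the parameter names and URL shape here are hypothetical, not Cortex's actual query schema:

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

# Hypothetical query parameters; Cortex's actual schema may differ.
start = datetime(2024, 5, 1, tzinfo=timezone.utc)
end = datetime(2024, 5, 31, tzinfo=timezone.utc)
params = urlencode({
    "startTime": int(start.timestamp() * 1000),  # epoch millis are timezone-independent
    "endTime": int(end.timestamp() * 1000),
})
print(f"https://<your-workspace>/metrics-explorer?{params}")
```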

Metrics available in the Metrics Explorer

Note that metrics in Metrics Explorer sync on a scheduled basis, updating every 4 hours.

Expand the tiles below to learn how the metric is calculated and the best practice for measuring success.

AI tools

AI usage metrics are pulled from GitHub Copilot.

AI tool metrics are currently only available to cloud customers in the beta program.

AI adoption rate

The percentage of licensed seats that were active users of AI coding tools in a given time period.

Calculation: Copilot active users / Copilot total seats.
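
A worked example of this calculation, with illustrative numbers:

```python
def ai_adoption_rate(active_users: int, total_seats: int) -> float:
    """Copilot active users / Copilot total seats, expressed as a percentage."""
    return 100 * active_users / total_seats if total_seats else 0.0

print(ai_adoption_rate(42, 60))  # 70.0
```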

Active AI users

The number of users who used AI coding tools in a given time period. If a user was active in the last 7 days, Cortex will automatically attach the user label "AI User." If a user has not used Copilot in the last 30 days, Cortex will automatically attach the user label "Non-AI User."

Calculation: Copilot active users.
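
A sketch of the labeling rule described above, assuming each user's last Copilot activity timestamp is available; the label strings follow the description here, and everything else is illustrative:

```python
from datetime import datetime, timedelta, timezone

def ai_user_label(last_active: datetime | None, now: datetime) -> str | None:
    """Labeling rule described above: active within 7 days vs. idle for 30+ days."""
    if last_active is not None and now - last_active <= timedelta(days=7):
        return "AI User"
    if last_active is None or now - last_active >= timedelta(days=30):
        return "Non-AI User"
    return None  # idle 7-30 days: no automatic label rule is described for this range

now = datetime.now(timezone.utc)
print(ai_user_label(now - timedelta(days=2), now))   # AI User
print(ai_user_label(now - timedelta(days=45), now))  # Non-AI User
```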

Deployment metrics

Deploy metrics are pulled from the Cortex deploys API.
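
If you are instrumenting deployments yourself, each deploy event carries the attributes these metrics rely on, such as a type (deploy vs. rollback, which feeds change failure rate below) and an environment. A hedged sketch of reporting one deploy; the endpoint path and field names here are assumptions, so check the Cortex deploys API reference for the actual schema:

```python
import requests

# The endpoint path and payload fields below are assumptions; verify them
# against the official Cortex deploys API reference before relying on this.
resp = requests.post(
    "https://api.getcortexapp.com/api/v1/catalog/my-service/deploys",
    headers={"Authorization": "Bearer <API_TOKEN>"},
    json={
        "title": "Release 2024.05.31",
        "type": "DEPLOY",             # a rollback event would use "ROLLBACK"
        "environment": "production",  # surfaced by the Environment filter
        "timestamp": "2024-05-31T12:00:00Z",
    },
    timeout=10,
)
resp.raise_for_status()
```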

Change failure rate

The percentage of deployments that cause a failure in production.

Calculation: Number of rollbacks / number of deployments created.

Best practice: Aim to reduce your change failure rate over time. A rate below 15% aligns with DORA's elite benchmarks and indicates strong software delivery performance.
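
A worked example of the calculation, with illustrative numbers:

```python
def change_failure_rate(rollbacks: int, deployments: int) -> float:
    """Number of rollbacks / number of deployments created, as a percentage."""
    return 100 * rollbacks / deployments if deployments else 0.0

print(change_failure_rate(3, 40))  # 7.5, under DORA's 15% elite threshold
```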

Deployment frequency

The number of deployments over a given period of time.

Best practice: Depending on your organization, a successful benchmark could be multiple deployments per day or per week.

Rollback frequency

The number of rollbacks over a given period of time.

Best practice: While there isn't an explicit benchmark, you should aim to minimize rollback rates. A low rollback rate generally aligns with a low change failure rate.

Incident metrics

Incident metrics are pulled from PagerDuty.

Incident frequency

The number of incidents over a given period of time.

When you drill in to metric points below the graph, view data per incident:

  • Incident title

  • Status

  • Incident URL

  • Date triggered

  • Date resolved

  • Urgency

  • Time to resolution

Best practice: There is no universal benchmark. It is recommended to track trends and establish baselines within your organization.

Time to resolution

The amount of time it takes for an incident to be resolved.

Calculation: Incident resolution time - incident opened time.

When you drill in to metric points below the graph, view data per incident:

  • Incident title

  • Status

  • Incident URL

  • Date triggered

  • Date resolved

  • Urgency

  • Time to resolution

Best practice: These benchmarks may differ depending on how critical a system is. For less critical systems, aim for a measure of less than 1 day. For critical systems, aim for under 1 hour.
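
The calculation is a straightforward difference of timestamps; for example:

```python
from datetime import datetime, timezone

triggered = datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc)
resolved = datetime(2024, 5, 1, 10, 15, tzinfo=timezone.utc)

# Incident resolution time - incident opened time.
print(resolved - triggered)  # 0:45:00, within the 1-hour target for critical systems
```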

Project management metrics

Project management metrics are pulled from Jira.

Story points completed

The sum of story points completed in a given time period.

When you drill in to metric points below the graph, view data per work item:

  • External key

  • Work item assignee

  • Assignee email

  • Work item title

  • Work item status

  • Work item type

  • Created date

  • Resolved date

  • Priority

  • Labels

  • Components

  • Story points completed

Best practice: Establish a baseline per team, as story point values can be unique to each team. Use this metric to understand capacity trends.

Work item lead time

The time it takes from when a work item is created to when the work item is completed.

Calculation: Work item resolved date - work item created date.

When you drill in to metric points below the graph, view data per work item:

  • External key

  • Work item assignee

  • Assignee email

  • Work item title

  • Work item status

  • Work item type

  • Created date

  • Resolved date

  • Priority

  • Labels

  • Components

  • Status category

  • Work item lead time

Best practice: Lower lead times indicate a smoother process. Track trends to identify process inefficiencies and improve throughput.
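
A worked example of the lead time calculation, with illustrative dates:

```python
from datetime import datetime

created = datetime(2024, 5, 1, 9, 0)
resolved = datetime(2024, 5, 6, 17, 0)

# Work item resolved date - work item created date.
lead_time = resolved - created
print(lead_time)  # 5 days, 8:00:00
```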

Work items completed

The number of work items completed over a given period of time.

When you drill in to metric points below the graph, view data per work item:

  • External key

  • Work item assignee

  • Assignee email

  • Work item title

  • Work item status

  • Work item type

  • Created date

  • Resolved date

  • Priority

  • Labels

  • Components

Best practice: Review this measure alongside how many story points have been completed; this enables you to balance both quantity and effort, ensuring teams aren't favoring lower value tasks in exchange for higher numbers of items completed.

Work items created

The number of work items created over a given period of time.

When you drill in to metric points below the graph, view data per work item:

  • External key

  • Work item assignee

  • Assignee email

  • Work item title

  • Work item status

  • Work item type

  • Created date

  • Resolved date

  • Priority

  • Labels

  • Components

Best practice: Monitor this metric alongside delivery rates. If items are created faster than completed, it signals queue backlogs.
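
One way to operationalize that comparison is a per-period net flow: created minus completed, where persistently positive values mean the queue is growing. A minimal sketch:

```python
def net_flow(created_per_week: list[int], completed_per_week: list[int]) -> list[int]:
    """Created minus completed per period; positive values mean a growing queue."""
    return [c - d for c, d in zip(created_per_week, completed_per_week)]

print(net_flow([12, 15, 14], [10, 11, 9]))  # [2, 4, 5]: the backlog is growing
```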

Version control metrics

Version control metrics are pulled from Azure DevOps, Bitbucket, GitHub, and GitLab.

Note that any changes that rewrite Git history (such as a rebase then a force push) can impact metric timestamps or calculations.

Cycle time

The time from the first commit on a PR to when the PR is merged. This represents the time it takes for a single PR to go through the entire coding process.

Note: This metric is not supported for Azure DevOps or Bitbucket.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • First commit date

  • Date closed

  • Cycle time

Best practice: Aim for lower cycle times to ensure a faster feedback loop and reduced context switching. Rather than benchmarking the overall cycle time, set benchmarks for the individual parts of the cycle (time to open, time to approve, time to first review, time to merge).
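
Those four stages sum to the overall cycle time, which is why benchmarking them individually is practical. A sketch of the decomposition using the stage definitions from the metrics below (timestamps are illustrative):

```python
from datetime import datetime

# Illustrative PR timestamps.
first_commit = datetime(2024, 5, 1, 9, 0)
opened = datetime(2024, 5, 1, 15, 0)
first_review = datetime(2024, 5, 2, 10, 0)
approved = datetime(2024, 5, 2, 14, 0)
merged = datetime(2024, 5, 2, 15, 0)

time_to_open = opened - first_commit
time_to_first_review = first_review - opened
time_to_approve = approved - first_review  # 0 when the first review is the approval
time_to_merge = merged - approved

cycle_time = merged - first_commit
assert cycle_time == time_to_open + time_to_first_review + time_to_approve + time_to_merge
print(cycle_time)  # 1 day, 6:00:00
```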

Closed PRs

The number of PRs closed in a given time period.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date closed

Best practice: A high ratio of merged-to-closed PRs signals an effective review cycle.

Merged PRs

The number of PRs merged in a given time period.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date closed

Best practice: A high ratio of merged-to-closed PRs signals an effective review cycle.

Number of comments per PR

The number of comments on a pull request.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • First commit date

  • Number of comments per PR

Best practice: This measure indicates review depth and collaboration. A lower number may signal superficial reviews.

Number of unique PR authors

The number of unique PR authors.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • First commit date

  • Date opened

  • Time to open

    • This is the time between the first commit date and the date opened.

Best practice: A larger number across projects can signal distributed ownership, while a consistently low number can point to bottlenecks or team burnout.

Open PRs

The number of PRs opened in a given time period.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date closed

Best practice: A persistent backlog of open PRs can indicate process inefficiencies, such as a slow review process.

PR reviews count

The number of reviews on a PR.

When you drill in to metric points below the graph, view more data:

  • PR name

  • Reviewer

  • Review date

Best practice: A higher number can indicate complex changes or low initial quality. A lower number could indicate approvals without thorough review and validation.

PR size

The number of lines of code modified in a PR.

Note: This metric is not supported for Azure DevOps or Bitbucket.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Number of lines added

  • Number of lines deleted

  • PR size

Best practice: Smaller PRs lead to faster reviews, fewer mistakes, and increased velocity. Aim for less than 400 lines, but adjust this benchmark as needed to improve review quality and velocity.

Success rate

The percentage of PRs that are opened and eventually merged in a given time frame.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date opened

  • Date closed

Best practice: Higher success rates can indicate better quality code and reviews, but note that it is also important to understand the reasoning when a PR is rejected.
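
A worked example of the rate, with illustrative numbers:

```python
def pr_success_rate(merged_prs: int, opened_prs: int) -> float:
    """PRs opened and eventually merged / PRs opened, as a percentage."""
    return 100 * merged_prs / opened_prs if opened_prs else 0.0

print(pr_success_rate(45, 50))  # 90.0
```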

Time to approve

The time from the first review to the time it’s approved. This represents how long engineers are spending reviewing code. If the first review is an approval, this time will be 0 as the timestamps will be the same.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Review date

  • Approval date

  • Time to approve

    • This is the time between the review date and the approval date.

Best practice: It is recommended to keep review time under 24 hours to maintain velocity and avoid a backlog of PRs.

Time to first review

The time from when a PR is opened to when it gets its first review (comment or approval). This represents how long a PR waits idle before someone starts reviewing it.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Date opened

  • First review time

  • Time to first review

    • This is the time between the open date and the first review time.

Best practice: It is recommended to target a first review within 24 hours to ensure prompt feedback and smooth throughput.

Time to merge

The time from when the PR is approved to when it’s merged.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • Approval date

  • Date closed

  • Time to merge

    • This is the time between the approval date and the date closed.

Best practice: There is not an explicit benchmark for this metric, but note that reducing this time to under an hour boosts code velocity. Using a tool that enforces automated merges can cut down delays.

Time to open

The time it takes from the first commit on a PR until the PR is opened. This represents the time spent coding.

Note: This metric is not supported for Azure DevOps or Bitbucket.

When you drill in to metric points below the graph, view data per PR:

  • PR name

  • Author

  • PR status

  • First commit date

  • Date opened

  • Time to open

    • This is the time between the first commit date and the date opened.

Best practice: There is not an explicit benchmark for this metric, but note that reducing time to open depends on efficient triage of work; focus on minimizing idle time before work starts.

Metrics Explorer reference

Metric filter definitions

The available filters differ based on the category of the metric you're viewing.

Categories appear on the left side while choosing a metric in Metrics Explorer.

Expand the tiles below to learn about each of the filters available per metric category.

Version control metric filters
  • Entity type: View data only for specific types of entities (such as services or teams).

  • Entity: View data only for a specific entity.

  • Group: View data only for entities associated with a specific group.

  • Individual owner: View data for PRs on entities owned by a specific individual.

    • Example: Service X is owned by Pat. Filtering by Pat will show you PRs on Service X even if the PRs were created by other users.

  • Label: View data only for PRs tagged with a specific label.

  • Repository: View data only for a specific repository.

  • Reviewer: View data for PRs that a specified person reviewed.

  • Reviewer team: View data for PRs that members of a specified team reviewed.

  • Reviewer user label: View data for PRs that users with a specified user label reviewed.

  • Status: View data for PRs in specified statuses.

  • Team: View data for PRs authored by members of a specified team.

  • Team owner: View data for PRs on entities owned by a specified team. Note that this does not include PRs on entities owned by individual members of the team.

    • Example: Service X is owned by Team A. Filtering by Team A will show you PRs on Service X even if the PRs were created by another team.

  • User: View data for PRs authored by a specified user.

  • User label: View data for PRs authored by users with a specific user label.

    • Note that Cortex automatically creates an "AI Usage" label for GitHub Copilot usage, which includes values "AI User (last 7 days)" and "Non-AI User (last 7 days)."

  • User label name: View data for PRs authored by users with a specific user label value.

Deployment metric filters
  • Environment: View data only related to a specific environment sent via the deploys API.

  • Status: View data for deploys in specified statuses.

  • Entity type: View data only for specific types of entities (such as services or teams).

  • Entity: View data only for a specific entity.

  • Group: View data only for entities associated with a specific group.

  • Individual owner: View data for deploys on entities owned by a specific individual.

    • Example: Service X is owned by Pat. Filtering by Pat will show you deploys on Service X even if the deploys were performed by other users.

  • Team: View data for deploys performed by members of a specified team.

  • Team owner: View data for deploys on entities owned by a specified team. Note that this does not include deploys on entities owned by individual members of the team.

    • Example: Service X is owned by Team A. Filtering by Team A will show you deploys on Service X even if the deploys were performed by another team.

  • User: View data for deploys performed by a specified user.

  • User label: View data for deploys performed by users with a specific user label.

  • User label name: View data for deploys performed by users with a specific user label value.

AI tool metric filters
  • Entity type: View data only for specific types of entities (such as services or teams).

  • Team: View data for AI usage by a specified team.

  • User: View data for AI usage by a specified user.

  • User label: View data for AI usage by users with a specific user label.

    • Note that Cortex automatically creates an "AI Usage" label for GitHub Copilot usage, which includes values "AI User (last 7 days)" and "Non-AI User (last 7 days)."

  • User label name: View data for AI usage by users with a specific user label value.

Project management metric filters

If you have configured a unique defaultJQL per entity, that configuration is not supported when filtering or segmenting data in Metrics Explorer.

  • Component: View data only for work items assigned to a specific component.

  • Entity type: View data only for specific types of entities (such as services or teams).

  • Entity: View data only for a specific entity.

  • Group: View data only for entities associated with a specific group.

  • Individual owner: View data for work items on entities owned by a specific user.

    • Example: Service X is owned by Pat. Filtering by Pat will show you work items on Service X even if the work items were created by other users.

  • Label: View data only for work items tagged with a specific label.

  • Project: View data only for work items belonging to a specific project.

  • Sprint: View data only for work items assigned to a specific sprint.

  • Status: View data for work items in specified statuses.

  • Team: View data for work items assigned to members of a specified team.

  • Team owner: View data for work items on entities owned by a specific team. Note that this does not include work items on entities owned by individual members of the team.

    • Example: Service X is owned by Team A. Filtering by Team A will show you work items on Service X even if the work items were created by another team.

  • User: View data for work items assigned to a specified user.

  • User label: View data for work items assigned to users with a specific user label.

  • User label name: View data for work items assigned to users with a specific user label value.

  • Work item type: View data for work items of a specific type.

Incident metric filters
  • Entity type: View data only for specific types of entities (such as services or teams).

  • Entity: View data only for a specific entity.

  • Group: View data only for entities associated with a specific group.

  • Individual owner: View data for incidents on entities owned by a specific user.

    • Example: Service X is owned by Pat. Filtering by Pat will show you incidents on Service X even if the incidents were created by other users.

  • Team: View data for incidents assigned to members of a specified team.

  • Team owner: View data for incidents on entities owned by a specific team. Note that this does not include incidents on entities owned by individual members of the team.

    • Example: Service X is owned by Team A. Filtering by Team A will show you incidents on Service X even if the incidents were created by another team.

  • User: View data for incidents assigned to a specified user.

  • User label: View data for incidents assigned to users with a specific user label.

  • User label name: View data for incidents assigned to users with a specific user label value.

Segment definitions

"Group by" segments

The segments available differ depending on which category of metric you're viewing.

Version Control

  • Person

    • Author team: The team of the author of the PR.

    • Reviewer team: The team of the reviewer of the PR.

    • Author: The individual author of the PR.

    • Reviewer: The individual reviewer of the PR.

    • Author user label: The user label associated with the PR author.

    • Reviewer user label: The user label associated with the PR reviewer.

  • Entity

    • Group: The group associated with the entity.

    • Entity: Data is segmented by individual entities.

  • Pull request

    • Repository: The repository associated with the PR.

    • Status: The status of the PR.

    • Label: The label associated with the PR.

  • Owner

    • Team owner: Team owners of the entity associated with the PR.

    • Individual owner: Individual owners of the entity associated with the PR.

Deployments

  • Person

    • Team: The team of the user who performed the deployment.

    • Deployer: The person who performed the deployment.

    • User label: The user label associated with the user who performed the deployment.

  • Entity

    • Group: The group associated with the entity.

    • Entity: Data is segmented by individual entities.

  • Deployment

  • Owner

    • Team owner: Team owner of the entity associated with the deployment.

    • Individual owner: Individual owner of the entity associated with the deployment.

Incidents

  • Person

    • Team: The team of the user assigned to an incident.

    • Incident assignee: The individual assigned to an incident.

    • User label: The user label associated with the user assigned to an incident.

  • Entity

    • Group: The group associated with the entity.

    • Entity: Data is segmented by individual entities.

  • Owner

    • Team owner: Team owner of the entity associated with the incident.

    • Individual owner: Individual owner of the entity associated with the incident.

Project management (work items)

If you have configured a unique defaultJQL per entity, that configuration is not supported when filtering or segmenting data in Metrics Explorer.

  • Person:

    • Assignee team: The team of the user assigned to the work item.

    • Assignee: The individual assigned to the work item.

    • Assignee user label: The user label associated with the work item assignee.

  • Entity

    • Group: The group associated with the entity.

    • Entity: Data is segmented by individual entities.

  • Project management

    • Project: The project associated with the work item.

    • Sprint: The sprint associated with the work item.

  • Owner

    • Team owner: Team owners of entities associated with the work item.

    • Individual owner: Individual owners of entities associated with the work item.

AI tools

  • Person:

    • Team: The teams of the user.

    • User: The individual user.

    • User label: The user label associated with the user.
