To view, click Eng Intelligence > All Metrics in the main nav:
Review trends in Eng Intelligence and use that knowledge to inform your Scorecards. While viewing All Metrics, click Create Scorecard in the upper right corner of the page. You will be redirected to a configurable Scorecard template that measures performance, activity, and flow metrics that impact productivity.
Using All Metrics
The All Metrics view aggregates data from your connected entities to calculate critical metrics based on your organization's priorities. The data is presented by team, group, or individual, and can be filtered by time range. Cortex provides a set of default metrics, but you can also create custom metrics to track here.
These values are recalculated every hour. For count metrics (e.g., PRs opened), 0 is displayed if no data is available. For average metrics (e.g., average PR open to close time), N/A is displayed if there is no data to calculate an average.
Apply time range and team filters
By default, All Metrics displays data from the last 7 days.
To filter by time range:
In the upper right corner of Eng Intelligence, click Last 7 days, then select a new time range for your metrics display:
To filter by team, group, or owner:
Click Filter in the upper right corner.
Click into Group, Owner, or Team, and select filter options.
Click Apply.
Group by team hierarchy
By default, each Team entity in Cortex is displayed in its own dedicated row. To group by the team hierarchies you've created, click View as hierarchy.
Group by entity type
By default, All Metrics displays Team data. In the upper left corner, click the Team dropdown to select a different entity type:
Click the Group by dropdown and select a label you want to group by. The grouping will be added as a row to the metrics table, along with separate rows for each member of the grouping.
View more details for an entity
To better understand the data behind a trend you see, click an entity to open a side panel with more information:
Under the Related activity tab, see available metrics and recent activity.
Under the Trends tab, see a historical performance graph for each metric.
In the upper right corner of the panel, you can adjust the time range for the graphs to anywhere between the last 7 days and the last 6 months. This updates both the graph view and the table, so all metrics reflect the new timeframe.
Show Scorecard view
In the upper right corner, click Display. In this drop-down, you can choose whether to display entities in their associated hierarchies and you can select a Scorecard.
When you select a Scorecard, its performance is overlaid in Eng Intelligence when grouped by team or service. This view is not available when grouping by group, user, or owner.
The icon representing the Scorecard level achieved by each entity will appear next to the entity name:
Metrics
Users with the Configure custom metrics permission can create custom metrics for All Metrics, or you can use the built-in metrics listed below, which cover deploys, Git (Azure DevOps, Bitbucket, GitHub, and GitLab), Jira, and PagerDuty.
Deploy metrics
Avg deploys/week
Calculates the average number of deploys per week over the selected time range.
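For intuition, the arithmetic is simply the number of deploys in the window divided by the number of weeks it spans. A minimal sketch in Python; the field names and seven-day week math are illustrative assumptions, not Cortex's implementation:

```python
from datetime import date

def avg_deploys_per_week(deploy_dates: list[date], start: date, end: date) -> float:
    """Average deploys per week over the window [start, end]."""
    in_range = [d for d in deploy_dates if start <= d <= end]
    weeks = max((end - start).days / 7, 1e-9)  # guard against a zero-length window
    return len(in_range) / weeks

# Example: 6 deploys over a 14-day window -> 3.0 deploys/week
deploys = [date(2024, 1, d) for d in (2, 3, 5, 8, 9, 12)]
print(avg_deploys_per_week(deploys, date(2024, 1, 1), date(2024, 1, 15)))
```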
Git metrics
Avg PR open to close time
Calculates the average time to close pull requests for each PR opened and merged during the selected time range.
Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.
This metric provides insight into how long it takes to merge a change, including time spent on builds, reviews, conversations, fixing linter issues, and so on.
If your Average PR open to close time is high, it's worth investigating which parts of the development cycle contribute the most to this time.
Average PR open to close time is related to other metrics, such as time to review and bottlenecks in average PRs reviewed each week. The key here is to examine the time and quantity of a particular activity.
Note that if some teams are using draft pull requests, their numbers may be higher.
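To make the calculation concrete, here is an illustrative sketch. The PR field names are assumptions, and returning None mirrors the N/A shown when no PRs qualify:

```python
from datetime import datetime

def avg_open_to_close_hours(prs, start, end):
    """Average open-to-merge time, in hours, for PRs opened and merged in the window."""
    durations = [
        (pr["merged_at"] - pr["opened_at"]).total_seconds() / 3600
        for pr in prs
        if pr["merged_at"] is not None
        and start <= pr["opened_at"] <= end
        and start <= pr["merged_at"] <= end
    ]
    return sum(durations) / len(durations) if durations else None  # None ~ "N/A"

prs = [
    {"opened_at": datetime(2024, 1, 2, 9), "merged_at": datetime(2024, 1, 3, 9)},   # 24h
    {"opened_at": datetime(2024, 1, 4, 9), "merged_at": datetime(2024, 1, 4, 21)},  # 12h
    {"opened_at": datetime(2024, 1, 5, 9), "merged_at": None},  # still open: excluded
]
print(avg_open_to_close_hours(prs, datetime(2024, 1, 1), datetime(2024, 1, 8)))  # 18.0
```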
Avg time to first review
Determines the average time from when a pull request is first opened to its first review, for any PR opened during the selected time range.
Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.
For a subset of pull requests, this metric can provide insight into potential inefficiencies. If this figure is high, investigate whether the cause lies in the software process or in roadblocks faced by team members.
Note that if some teams are using draft pull requests, their numbers may be higher.
Avg time to approval
Displays average time from when a pull request was first opened to when it was first approved for any PR opened during the selected time range.
Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.
Average time to approval can capture review-related bottlenecks in the PR cycle. When this figure is high, there may be opportunities to improve processes and PR sizes.
Note that if some teams are using draft pull requests, their numbers may be higher.
PRs opened
Displays a count of pull requests opened during the selected time range.
Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.
Pull requests opened is particularly useful as a throughput metric. When reviewing this data, consider the expected minimum activity for a developer.
On an individual level, evaluate how much time a team member spends building features versus supporting others. At the team level, you can compare how much time one team spends shipping code relative to other teams.
Note that while this metric provides useful insight, weekly PRs merged may be a more meaningful figure.
Weekly PRs merged
Calculates how many pull requests were opened and merged each week, averaged across the weeks in the selected time range.
Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.
This throughput metric provides insight into how many changes make it to the default branch and are closed out.
In theory, this figure should track the trend for Average PR open to close time, since you don't want too many pull requests kept open.
Avg PRs reviewed/week
Calculates the number of pull requests that were reviewed each week, averaged across the selected time frame.
Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.
This metric helps users understand bottlenecks in the review stage due to load balancing work, education gaps, onboarding, career progression, and domain mastery.
Note that this figure is deduplicated on a per-user basis: if a user reviews a pull request multiple times, it is only counted once within Eng Intelligence.
If this figure is high but the number of commits and merged pull requests is low, you may be spending too much time in the review stage, and other parts of the PR lifecycle may be at risk.
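Because of the per-user deduplication noted above, the calculation effectively counts distinct (reviewer, PR) pairs. An illustrative sketch under that assumption:

```python
def avg_prs_reviewed_per_week(reviews, weeks_in_range):
    """reviews: iterable of (reviewer, pr_id) events, possibly with repeats."""
    distinct = {(reviewer, pr) for (reviewer, pr) in reviews}  # re-reviews count once
    return len(distinct) / weeks_in_range

events = [("ana", 101), ("ana", 101), ("ben", 101), ("ana", 102)]
print(avg_prs_reviewed_per_week(events, 2))  # 3 distinct reviews / 2 weeks = 1.5
```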
Avg commits per PR
Displays the number of commits made between PR open and close, averaged across all PRs opened and merged during the selected time range.
Pulls data from Azure DevOps, GitHub, and GitLab.
This metric provides insight into activity trends by team members, as greater activity indicates more engagement.
Average commits per PR can be helpful during the onboarding process, so you can gauge how long it takes for a developer to reach the team’s baseline for activity.
Note that if some teams are using draft pull requests, their numbers may be higher.
Avg LOC changed per PR
Displays the average number of lines added plus lines deleted for pull requests that were opened and merged during the selected timeframe.
Pulls data from GitHub and GitLab. This metric is not supported for Azure DevOps.
This metric can provide information about pull request size. Ideally, developers should open consumable PRs that are easy to review and thus easy to push into production.
This figure can impact other metrics related to the PR cycle.
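As a rough illustration of the arithmetic (additions plus deletions, averaged across qualifying PRs), with assumed field names:

```python
def avg_loc_changed(prs):
    """Average (additions + deletions) across merged PRs; field names are assumed."""
    if not prs:
        return None  # shown as "N/A" when there is no data
    return sum(pr["additions"] + pr["deletions"] for pr in prs) / len(prs)

prs = [
    {"additions": 120, "deletions": 30},  # 150 LOC changed
    {"additions": 10, "deletions": 40},   # 50 LOC changed
]
print(avg_loc_changed(prs))  # 100.0
```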
Jira metrics
Issues completed
The number of issues completed in a given time period.
Calculation
Assume that you have a selected time period of 1/1/2024 - 2/1/2024.
There are 4 Jira tickets with varying resolution dates:
Ticket 1: 1/5/2024, Entity 1
Ticket 2: 12/1/2023, Entity 1
This ticket does not fall within the selected time period.
Ticket 3: 1/15/2024, Entity 2
Ticket 4: NULL, Entity 2
This ticket has no resolution date, so it is not counted.
Entity 1 has 1 ticket completed during the timeframe (Ticket 1).
Entity 2 has 1 ticket completed during the timeframe (Ticket 3).
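In code, this rule amounts to a simple filter on the resolution date. The following sketch reproduces the example above; the ticket fields are illustrative assumptions, not Cortex's implementation:

```python
from datetime import date
from collections import Counter

def issues_completed(tickets, start, end):
    """Count tickets per entity whose resolution date falls within [start, end]."""
    counts = Counter()
    for t in tickets:
        if t["resolved"] is not None and start <= t["resolved"] <= end:
            counts[t["entity"]] += 1
    return counts

tickets = [
    {"resolved": date(2024, 1, 5), "entity": "Entity 1"},
    {"resolved": date(2023, 12, 1), "entity": "Entity 1"},  # outside the window
    {"resolved": date(2024, 1, 15), "entity": "Entity 2"},
    {"resolved": None, "entity": "Entity 2"},               # unresolved
]
print(issues_completed(tickets, date(2024, 1, 1), date(2024, 2, 1)))
# Counter({'Entity 1': 1, 'Entity 2': 1})
```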
Story points completed
The number of story points completed in a given time period.
Calculation
Assume that you have a selected time period of 1/1/2024 - 2/1/2024.
There are 5 Jira tickets with varying resolution dates and story points:
Ticket 1: 1/5/2024, 3 points, Entity 1
Ticket 2: 1/17/2024, null (0) points, Entity 1
Ticket 3: 12/1/2023, 5 points, Entity 1
This ticket does not fall within the selected time period.
Ticket 4: 1/15/2024, 8 points, Entity 2
Ticket 5: Null, 2 points, Entity 2
This ticket has no resolution date, so it is not counted.
Entity 1 has 3 points. Entity 2 has 8 points.
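The same example as a sketch, treating a null story-point value as 0; the field names are assumptions for illustration:

```python
from datetime import date
from collections import Counter

def story_points_completed(tickets, start, end):
    """Sum story points per entity for tickets resolved in the window."""
    totals = Counter()
    for t in tickets:
        if t["resolved"] is not None and start <= t["resolved"] <= end:
            totals[t["entity"]] += t["points"] or 0  # null points count as 0
    return totals

tickets = [
    {"resolved": date(2024, 1, 5), "points": 3, "entity": "Entity 1"},
    {"resolved": date(2024, 1, 17), "points": None, "entity": "Entity 1"},  # 0 points
    {"resolved": date(2023, 12, 1), "points": 5, "entity": "Entity 1"},     # out of window
    {"resolved": date(2024, 1, 15), "points": 8, "entity": "Entity 2"},
    {"resolved": None, "points": 2, "entity": "Entity 2"},                  # unresolved
]
print(story_points_completed(tickets, date(2024, 1, 1), date(2024, 2, 1)))
# Counter({'Entity 2': 8, 'Entity 1': 3})
```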
Average days to complete
The average time it takes, in days, to complete an issue in a given time period.
Calculation
Assume that you have a selected time period of 1/1/2024 - 2/1/2024.
There are 5 Jira tickets with varying resolution dates. For each ticket, the day count is based on (Resolved Date) - (Created Date).
Ticket 1: 1/5/2024, 3 days, Entity 1
Ticket 2: 1/10/2024, 5 days, Entity 1
Ticket 3: 12/1/2023, 5 days, Entity 1
This ticket does not fall within the selected time period.
Ticket 4: 1/15/2024, 8 days, Entity 2
Ticket 5: Null, 2 days, Entity 2
This ticket has no resolution date, so it is not counted.
Entity 1: (3 days + 5 days) / 2 = 4 days
Entity 2: 8 days / 1 = 8 days
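A sketch reproducing this example; created dates are back-calculated from the stated day counts, and the field names are illustrative assumptions:

```python
from datetime import date
from collections import defaultdict

def avg_days_to_complete(tickets, start, end):
    """Average (resolved - created) days per entity for tickets resolved in the window."""
    days = defaultdict(list)
    for t in tickets:
        if t["resolved"] is not None and start <= t["resolved"] <= end:
            days[t["entity"]].append((t["resolved"] - t["created"]).days)
    return {entity: sum(v) / len(v) for entity, v in days.items()}

tickets = [
    {"created": date(2024, 1, 2), "resolved": date(2024, 1, 5), "entity": "Entity 1"},    # 3 days
    {"created": date(2024, 1, 5), "resolved": date(2024, 1, 10), "entity": "Entity 1"},   # 5 days
    {"created": date(2023, 11, 26), "resolved": date(2023, 12, 1), "entity": "Entity 1"}, # out of window
    {"created": date(2024, 1, 7), "resolved": date(2024, 1, 15), "entity": "Entity 2"},   # 8 days
    {"created": date(2024, 1, 20), "resolved": None, "entity": "Entity 2"},               # unresolved
]
print(avg_days_to_complete(tickets, date(2024, 1, 1), date(2024, 2, 1)))
# {'Entity 1': 4.0, 'Entity 2': 8.0}
```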
% of sprint completed
The count of completed tickets in any active sprint as a percentage of the total count of tickets in any active sprint for a given time period.
Calculation
Assume that you have selected a time period of 1/1/2024 - 2/1/2024.
There are 4 sprints with varying start and end dates:
Sprint 1: 12/01/2023 to 12/15/2023 (not active)
Sprint 2: 12/15/2023 to 1/1/2024 (active)
Sprint 3: 1/1/2024 to 1/15/2024 (active)
Sprint 4: 2/1/2024 to 2/15/2024 (active)
There are 5 Jira tickets aligned with varying sprints, with varying resolution dates:
Ticket 1: 12/14/2023, Sprint 1, Entity 1
Excluded because the sprint is inactive.
Ticket 2: 2/15/2024, Sprint 4, Entity 1
Resolution date not within the selected timeframe, but in an active sprint. This counts toward the total number of tickets.
Ticket 3: 1/14/2024, Sprint 3, Entity 1
Resolution date is within the selected timeframe and the sprint is active. This counts as a resolved ticket and toward the total number of tickets.
Ticket 4: 12/17/2023, Sprint 2, Entity 2
Resolution date occurred before the timeframe but before the end of the sprint, and the sprint is active. This counts as a resolved ticket and toward the total number of tickets.
Ticket 5: Null, Sprint 3, Entity 2
No resolution date, but in an active sprint. This counts toward the total number of tickets.
To calculate the metric, we divide the number of tickets resolved before both the end of the sprint and the end of the evaluation window by the total number of tickets during the selected timeframe:
Entity 1: 1 resolved ticket / 2 total = 50%
Entity 2: 1 resolved ticket / 2 total = 50%
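Because several rules interact here (active sprints, sprint end dates, and the evaluation window), a sketch may help. It reproduces the example above; the data structures are illustrative assumptions, not Cortex's implementation:

```python
from datetime import date

def pct_sprint_completed(tickets, sprints, start, end):
    """Completed tickets in active sprints / total tickets in active sprints, per entity."""
    # A sprint is "active" if it overlaps the selected time period.
    active = {name for name, (s, e) in sprints.items() if s <= end and e >= start}
    resolved, total = {}, {}
    for t in tickets:
        if t["sprint"] not in active:
            continue  # tickets in inactive sprints are excluded entirely
        total[t["entity"]] = total.get(t["entity"], 0) + 1
        sprint_end = sprints[t["sprint"]][1]
        # Resolved before both the end of the sprint and the end of the window.
        if t["resolved"] is not None and t["resolved"] <= sprint_end and t["resolved"] <= end:
            resolved[t["entity"]] = resolved.get(t["entity"], 0) + 1
    return {e: 100 * resolved.get(e, 0) / n for e, n in total.items()}

sprints = {
    "Sprint 1": (date(2023, 12, 1), date(2023, 12, 15)),  # inactive
    "Sprint 2": (date(2023, 12, 15), date(2024, 1, 1)),   # active
    "Sprint 3": (date(2024, 1, 1), date(2024, 1, 15)),    # active
    "Sprint 4": (date(2024, 2, 1), date(2024, 2, 15)),    # active
}
tickets = [
    {"resolved": date(2023, 12, 14), "sprint": "Sprint 1", "entity": "Entity 1"},  # excluded
    {"resolved": date(2024, 2, 15), "sprint": "Sprint 4", "entity": "Entity 1"},   # total only
    {"resolved": date(2024, 1, 14), "sprint": "Sprint 3", "entity": "Entity 1"},   # resolved
    {"resolved": date(2023, 12, 17), "sprint": "Sprint 2", "entity": "Entity 2"},  # resolved
    {"resolved": None, "sprint": "Sprint 3", "entity": "Entity 2"},                # total only
]
print(pct_sprint_completed(tickets, sprints, date(2024, 1, 1), date(2024, 2, 1)))
# {'Entity 1': 50.0, 'Entity 2': 50.0}
```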
Issues completed (custom grouping)
The number of issues completed in a given time period for a customized grouping of issues.
The issue grouping is customizable, and can be made up of a combination of label, component, and issue types. The label, component, or issue type you specify for a grouping must also exist in Jira.
For example, you could configure a “Project A Bugs” grouping in Cortex that maps to a combination of Issue type: Bug and Component: Project A. The “Project A Bugs” grouping would become a column in the table, and Eng Intelligence will display the number of tickets closed for the selected time period matching the configuration of Issue type: Bug and Component: Project A.
Entity 1:
For Grouping 1, there was 1 ticket within the time period (Ticket 1).
For Grouping 2, there was 1 ticket within the time period (Ticket 3).
Entity 2:
For Grouping 1, there were 0 tickets within the time period.
For Grouping 2, there was 1 ticket within the time period (Ticket 4).
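As an illustration, a grouping acts as a conjunction of conditions that a ticket must satisfy. A minimal sketch with assumed ticket fields; the "Project A Bugs" grouping is the hypothetical example from above:

```python
def matches_grouping(ticket, grouping):
    """True if the ticket satisfies every condition in the grouping definition."""
    return (
        ("issue_type" not in grouping or ticket["issue_type"] == grouping["issue_type"])
        and ("component" not in grouping or grouping["component"] in ticket["components"])
        and ("label" not in grouping or grouping["label"] in ticket["labels"])
    )

project_a_bugs = {"issue_type": "Bug", "component": "Project A"}
ticket = {"issue_type": "Bug", "components": ["Project A"], "labels": ["regression"]}
print(matches_grouping(ticket, project_a_bugs))  # True
```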
PagerDuty metrics
Mean time to resolve incidents
Calculates the average time from incident open to resolution over the selected time range.
Pulls data from PagerDuty.
Incidents opened
Displays the total number of incidents opened during the selected time range, based on the most recently assigned user or team for each incident.
Pulls data from PagerDuty.
Incidents opened/week
Displays the total number of incidents opened, divided by the number of weeks in the selected time range, based on the most recently assigned user or team for each incident.
Pulls data from PagerDuty.
All Metrics settings
Change All Metrics appearance
From the Eng Intelligence tab of Appearance settings, users with the Configure Eng Intelligence permission can also choose which columns to display and adjust the order of columns in the All Metrics view.
Set filtering for metric calculation
Under Settings > Eng Intelligence, in the Filters tab, users with the Configure Eng Intelligence permission can set filters for some pre-defined metrics:
Under Deploys, select the deploy environments you want to include in the calculation of deploy frequency and deploy failure rate.
If none are selected, all deploys will be included.
Under Pull requests, select the authors you want to exclude from the calculation of PR-related metrics.
If none are selected, PRs from all authors will be included.
By default, Cortex filters out pull requests opened by bots in GitHub but does not do this automatically for GitLab.
Create and manage user labels for grouping
User labels in Eng Intelligence allow you to group users into cohorts to analyze metrics based on different factors. This can be useful for benchmarking one engineer’s metrics against the average within a cohort, comparing metrics between engineers who use different tools to complete their work, and understanding metrics by different variables: location (e.g., in office or remote), engineer level (staff vs. lead engineer), tech stack (frontend vs. backend), and more.
Users who have the Configure user labels permission can create and apply labels.
The instructions below describe how to use this feature in the Cortex UI. See the Cortex API documentation for instructions on creating and managing user labels programmatically.
Click your avatar in the lower left corner, then click Settings.
Under Eng Intelligence, click User labeling.
In the upper right corner, click Create label.
Fill out the “Create label” form:
Name: Enter a descriptive name, e.g., Location.
Description: Optionally enter a description, such as "This label helps us understand metrics by location."
Values: Enter possible values for the label, e.g., New York, California, Remote.
Click Create label.
After saving, the label will appear under the Label management tab in the Eng Intelligence settings page.
View applied user labels
In the Eng Intelligence settings page under the User labeling tab, you can view a list of users and their applied labels. Note that these labels are only displayed in Eng Intelligence, and not in other pages within Cortex.
Apply labels to users
Check the boxes next to the users you want to edit. As you check names, a banner will appear at the bottom of the page showing how many users are selected. In that banner, click Edit labels.
In the bulk edit modal, enter the labels you want to add to the users, then click Set labels.
After applying labels to users, you can group by user label while viewing Eng Intelligence metrics.
Configure groupings for Jira metrics
You can add custom groupings to Jira Issues based on labels, issue types, and components. The number of tickets completed for each grouping will be calculated in Eng Intelligence using the custom name you configure for the grouping.