Spot problems early and drive action in Cortex
Eng Intelligence provides you with key metrics and high-level data to gain insights into services, bottlenecks in the pull request lifecycle, incident response, and more. These metrics can give context to cross-team activities and indicate areas that need deeper investigation, allowing you to quickly remediate and improve productivity across your teams.
Eng Intelligence features include:
Dashboards:
The DORA Dashboard gives clear insights into how fast, reliable, and efficient your development practices are.
The Velocity Dashboard visualizes your team's success across the software development lifecycle.
The AI Impact Dashboard provides insight into Copilot adoption and engagement across your engineering teams.
Custom Dashboards allow you to create shared views of the key engineering metrics that matter most to your organization.
Metrics Explorer: Analyze trends over time and drill into the underlying data for investigation. Explore metrics for version control, deployments, project management (Jira), and incident management (PagerDuty).
Custom metrics: Define your own custom time series metrics to power the analytics in Eng Intelligence, drawing from your integrations with Cortex or your organization's internal data. Custom metrics are currently available only in the All Metrics (Classic View), with support planned for the new experiences.
In practice, success means that teams are not only tracking metrics like cycle time, deployment frequency, change failure rate, and mean time to recovery (MTTR), but they are also taking action to make progress based on how the metrics are trending. The focus should be less about hitting specific benchmarks and more about creating continuous feedback loops that drive consistent improvement.
Eng Intelligence metrics provide clear, actionable insights into how your teams work, allowing you to find opportunities to improve delivery speed, reliability, and quality.
Before setting goals, we recommend establishing a baseline on your top-priority metrics.
Leading indicators show that teams are learning from Eng Intelligence metrics and effectively making changes in their day-to-day work. For example:
Faster feedback loops: Time to first PR review, Time to PR approval, and Work item lead time steadily decrease over a given period.
Smaller, more frequent PRs: PR size trends downward while merged PR count increases.
Balanced workloads: More unique PR authors over time, showing distributed contribution. For project management metrics, a balance of Work items created and Work items completed shows healthy workload management.
Lagging indicators can measure whether your organization is seeing tangible improvements in delivery and reliability. For example:
Cycle time improves from baseline.
Deployment frequency increases without increased failure rates.
MTTR decreases after incidents.
Higher engineering satisfaction is reported in internal surveys as bottlenecks are resolved.
To make your data more meaningful, we recommend the following best practices:
Segment by teams and services to avoid skewed organization-wide averages.
Look for trends rather than snapshots. Success is measured over time; a short-term fluctuation can be misleading.
When you identify process gaps with Eng Intelligence, take action to drive adoption of standards: Use Scorecards or Initiatives to encourage your teams to get their owned services aligned with standards. Use Workflows to streamline repeatable tasks for engineers so they can focus on other work.
The following are common ways to confirm successful use of Eng Intelligence features:
Leadership uses the dashboards to make strategic decisions (e.g., during planning cycles).
Teams are actively using the dashboards to spot bottlenecks.
Metrics drive measurable changes in process, tooling, and culture.
Improvements are sustained and repeatable, not just short-term spikes.
To get started, click Eng Intelligence from the main nav in Cortex.
See the docs for more information on each part of Eng Intelligence:
Proactive incident management: Change failure rates decrease as deploy insights are integrated.
The backlog of work items created stabilizes or decreases while work items are steadily being completed, indicating the team is successfully matching capacity to demand.
Bitbucket data in Eng Intelligence is in private beta. Please contact your Cortex Customer Success Manager for access.
Because of rate limits, Bitbucket ingestion in Eng Intelligence is limited to repositories that are mapped to an entity in Cortex.
When using Bitbucket in Eng Intelligence, it is highly recommended to use the workspace token configuration.


The DORA framework (DevOps Research and Assessment) is a set of metrics that help teams measure software delivery performance. Integrated into your internal developer portal, DORA metrics give engineering teams clear insights into how fast, reliable, and efficient their development practices are. This empowers teams to track progress, identify bottlenecks, and drive continuous improvement — all within the same place they manage services and deploy code. Read more about these metrics in the dora.dev guide.
Use the DORA Dashboard in Cortex to evaluate the speed and stability of your software delivery process. Visualize your team's success across the software delivery process and get a clear view of cycle time, deployment frequency, change failure rate, and time to resolution.
Looking for additional resources on enforcing DORA standards in Cortex?
See the guidance on using Cortex features to improve your DORA metrics.
Check out the resources available to all Cortex customers and POVs.
See Cortex's "Operationalizing DORA Metrics" webinar with Google Cloud's DORA program leader, Nathen Harvey, and learn about the key takeaways.
To view the dashboard, click Eng Intelligence > DORA Dashboard in the main nav.
By default, the dashboard displays data from the last month. To change the date range, click Last month and select a new date range.
For each chart, you can apply filters for group, owner, repository, user label, and more. To select filters, click Filter in the upper right corner of a graph.
You can select a different operation for the Cycle time and Time to resolution graphs. Options include average, sum, max, min, P95, and median. Both graphs display the average by default.
Cycle time represents the time it takes for a single PR to go through the entire coding process. Shorter cycle times indicate a more agile team that is able to quickly respond to changing needs.
Calculation: The time between the first commit on a PR to when the PR is merged.
Best practice: Aim for lower cycle times to ensure a faster feedback loop and reduced context switching. Rather than benchmarking the overall cycle time, set benchmarks for the individual parts of the cycle (time to open, time to approve, time to first review, time to merge).
Note: This metric is not supported for Azure DevOps or Bitbucket.
Deployment frequency measures how often your team successfully releases code to production, serving as a key indicator of your delivery velocity and operational maturity. This DORA metric reflects your team's ability to deliver value continuously and respond quickly to market demands.
Calculation: The number of deployments over a given period of time.
Best practice: Depending on your organization, a successful benchmark could be multiple deployments per day or per week.
Change failure rate measures the percentage of deployments that result in production failures, serving as a critical indicator of deployment stability and code quality. This DORA metric reveals how often your releases introduce bugs, outages, or performance issues that impact users.
Calculation: Number of rollbacks / number of deployments created.
Best practice: Aim for a failure rate less than 15%.
Time to resolution measures how quickly your team recovers from production failures, reflecting your incident response capabilities and system resilience. Also known as mean time to recovery (MTTR), this metric indicates how well-prepared your team is to handle inevitable production issues.
Calculation: Incident resolution time - incident opened time.
Best practice: This benchmark may differ depending on how critical a system is. For less critical systems, aim for a measure of less than 1 day. For critical systems, aim for under 1 hour.
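To make these four calculations concrete, here is a minimal sketch in Python that applies the same formulas to made-up sample data (it does not call the Cortex API; every PR timestamp, deploy count, and incident time below is hypothetical).

from datetime import datetime, timedelta

# Hypothetical sample data; in Cortex these values come from your version
# control, deploy, and incident integrations.
prs = [  # (first commit, merged)
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 3, 15, 0)),
    (datetime(2024, 1, 5, 10, 0), datetime(2024, 1, 5, 18, 0)),
]
deployments = 25      # deployments created in the selected period
rollbacks = 3         # rollbacks in the same period
period_days = 30
incidents = [  # (opened, resolved)
    (datetime(2024, 1, 10, 8, 0), datetime(2024, 1, 10, 9, 30)),
    (datetime(2024, 1, 20, 22, 0), datetime(2024, 1, 21, 1, 0)),
]

# Cycle time: first commit -> merge, averaged across PRs.
cycle_times = [merged - first_commit for first_commit, merged in prs]
avg_cycle_time = sum(cycle_times, timedelta()) / len(cycle_times)

# Deployment frequency: deployments per day over the period.
deploy_frequency = deployments / period_days

# Change failure rate: rollbacks / deployments created.
change_failure_rate = rollbacks / deployments

# Time to resolution (MTTR): resolved - opened, averaged across incidents.
resolution_times = [resolved - opened for opened, resolved in incidents]
mttr = sum(resolution_times, timedelta()) / len(resolution_times)

print(avg_cycle_time, f"{deploy_frequency:.2f}/day",
      f"{change_failure_rate:.0%}", mttr)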





Eng Intelligence Dashboards provide engineering leaders and teams with real-time visualizations into key metrics, velocity and productivity trends, incident and quality insights, and historical organization-wide rollups. Dashboards are natively integrated with your Cortex catalog, Scorecards, and Initiatives, allowing you to move from insight to action.
See the documentation linked below to learn more about each dashboard:
DORA Dashboard: Visualize the four key DORA metrics: Cycle time, deployment frequency, change failure rate, and MTTR.
Velocity Dashboard: Monitor version control metrics such as PR cycle time, PR size, and PR success rate, giving insight into your process efficiency and areas to target for improvement.
Custom Dashboards: Build tailored dashboards by combining metric visualizations into a unified, shareable view.
Combine metrics from multiple sources into a unified view
Custom Dashboards make it easy to create shared views of the key engineering metrics that matter most to your organization. They allow you to:
Aggregate and display multiple metrics: Combine different saved report modules into a single dashboard.
Personalize views: Create dashboards tailored to specific teams or use cases, enabling focused monitoring.
For example, you might want to track specific incident response metrics, or you could create a comprehensive view of delivery by combining metrics across deployment, version control, and project management systems.
Share insights: Share dashboards within your organization, ensuring your stakeholders have access to relevant information without manual reporting.
Drive action: By surfacing the trends that matter to you, dashboards help you identify bottlenecks, track progress toward goals, and make data-driven decisions.
Before getting started:
Make sure you have configured metric modules and saved them in Metrics Explorer.
You must have the Configure Eng Intelligence permission.
Navigate to Eng Intelligence > Dashboards. In the upper right corner, click Create Dashboard.
Enter a name and description and choose an icon, then click Create Dashboard.
Select an existing metric module to add to your dashboard.
Alternatively, create a new module to add to the dashboard.
Dashboards are automatically visible within your organization to users with the View Eng Intelligence permission, so everyone can view the same insights and stay aligned.
To create a share link to send to another user, click the 3 dots icon in the upper right, then click Share.
While viewing the dashboard, click Edit in the upper right corner. Go through the steps of adding or updating modules, then click Save.
See examples of Custom Dashboards below:
The Velocity Dashboard provides a one-stop shop for your software development lifecycle, helping you and your teams understand any bottlenecks in your process. Get a clear view of PR cycle time to identify bottlenecks and spot inefficiencies, compare pull request size against commit activity, and visualize your team's success rate across the entire review process — all in one place.
To view the dashboard, click Eng Intelligence > Velocity Dashboard in the main nav.
After selecting a module, you are redirected back to the dashboard. To add more modules, click the + icon on the right or below the module you added.





By default, the dashboard displays data from the last 7 days. To change the date range, click Last 7 days and select a new date.
You can apply filters for team, owner, repository, user label, and more to narrow down the set of data you are reviewing. To select filters, click Filter in the upper right corner of a graph.
For each metric, click on the card to drill down into the data for further analysis. After clicking into the card, you can segment the data and drill down further to view a list of data points:
Cycle time measures the complete journey of a pull request from first commit to merge into your main branch. Understanding this end-to-end process is crucial for identifying bottlenecks and optimizing your development workflow.
Note: Cycle Time and Time to Open metrics are not supported for Azure DevOps or Bitbucket.
The four stages of cycle time:
Time to open (development phase): First commit to PR creation
Best practice: There is not an explicit industry benchmark for this metric, but improving time to open depends on efficient triage of work; focus on minimizing idle time before work starts.
Time to approve (review process): First review to final approval
Best practice: It is recommended to keep review time under 24 hours to maintain velocity and avoid a backlog of PRs.
Time to first review (waiting period): PR opened to initial reviewer engagement
Best practice: It is recommended to target first review within 24 hours to ensure prompt feedback and smooth throughput.
Time to merge (merge process): Approval to successful merge
Best practice: There is not an explicit benchmark for this metric, but note that reducing this time to under an hour boosts code velocity. Using a tool that enforces automated merges can cut down delays.
Most teams discover that one of these four areas makes up 50-60% of their overall cycle time, providing a clear picture of a focus area to improve overall velocity.
Set SLA targets for each stage (e.g., first review within 1 business day)
Implement automated code review assignments to eliminate reviewer uncertainty
Use draft PRs for work-in-progress to separate development time from review time
Track P95 metrics to identify and address outlier PRs that skew team performance
Choose from multiple statistical views (average, sum, max, min, P95, median) to analyze your data. The default average view provides a balanced perspective, while P95 highlights outliers that might indicate systemic issues.
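As a rough illustration of finding the dominant stage, the sketch below (plain Python with hypothetical PR timestamps, not a Cortex API call) splits one PR's cycle time into the four stages described above and reports which stage accounts for the largest share.

from datetime import datetime, timedelta

# Hypothetical timestamps for a single PR.
first_commit = datetime(2024, 3, 1, 9, 0)
pr_opened    = datetime(2024, 3, 3, 14, 0)
first_review = datetime(2024, 3, 4, 16, 0)
approved     = datetime(2024, 3, 4, 17, 0)
merged       = datetime(2024, 3, 4, 17, 30)

stages = {
    "time to open":         pr_opened - first_commit,
    "time to first review": first_review - pr_opened,
    "time to approve":      approved - first_review,
    "time to merge":        merged - approved,
}

total = sum(stages.values(), timedelta())
dominant = max(stages, key=stages.get)
share = stages[dominant] / total  # dividing two timedeltas yields a float

print(f"{dominant} is {share:.0%} of a {total} total cycle time")

If one stage consistently dominates across PRs, that is usually the first place to set an SLA target.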
Pull request size directly impacts development velocity. Larger PRs consistently correlate with longer cycle times across every stage of the development process - from initial review to final merge. This visualization helps you understand how PR size affects efficiency throughout your development lifecycle.
Compare your pull request size to:
Time to open
Time to first review
Time to approve
Time to merge
Overall cycle time
Smaller PRs move faster. They're easier to review, less likely to introduce conflicts, and require fewer revision cycles. Teams that maintain smaller, focused PRs typically see 2-3x faster cycle times and higher code quality.
Aim for PRs under 400 lines of code. Break larger features into smaller, logical chunks that can be reviewed and merged independently.
Outliers are excluded by default. To include them, toggle on the Show outliers option.
PR success rate measures the percentage of pull requests that successfully make it from creation to merge, providing critical insight into development efficiency and process waste. A high success rate indicates focused development work, while a low rate suggests potential issues with planning, scope creep, or unclear requirements.
A sudden drop in success rate often correlates with rushed feature development, unclear requirements, or experimental work that should happen in separate branches. High-performing teams maintain consistent success rates even during periods of increased PR volume.
Target a PR success rate above 80% for optimal efficiency
Review closed (unmerged) PRs weekly to identify patterns and root causes
Implement clearer definition-of-done criteria before starting development
Use draft PRs for experimental work to separate exploration from production-ready code
Track success rate alongside cycle time - both metrics together reveal process health
Consider feature flags for risky changes instead of abandoning PRs
Switch between viewing open, closed or merged PRs against success rate.
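For reference, here is a minimal sketch of the success-rate arithmetic, assuming the rate is merged PRs divided by all PRs that reached a terminal state (merged plus closed without merging); Cortex's exact definition may differ, and the counts below are made up.

# Hypothetical PR counts for one reporting period.
merged_prs = 42
closed_unmerged_prs = 6   # closed without being merged

# Assumed formula: merged / (merged + closed-unmerged).
success_rate = merged_prs / (merged_prs + closed_unmerged_prs)
print(f"PR success rate: {success_rate:.0%}")   # 88%, above the 80% target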

The AI Impact Dashboard provides insight into Copilot adoption and engagement across your engineering teams, including visualizations for:
Impact by team AI adoption rate
Correlate team adoption rate to delivery and reliability metrics. Adoption rates are provided daily by comparing active users against Copilot seats.
Impact of Copilot users vs. non-Copilot users
Compare delivery and reliability side-by-side between users who leveraged AI tools within the last 7 days, and those who did not. Understand whether recent AI usage affects engineering performance.
AI adoption trends
View the overall trends for AI adoption across your organization.
Use these insights to identify issues and take action to drive improvements.
Before getting started:
You must have a GitHub integration configured. See instructions below for each integration method:
Cortex GitHub app
If configured before October 14, 2025, you must update the app to accept two new permissions.
Custom GitHub App
Navigate to Eng Intelligence > Dashboards to see the full list of Cortex-built and custom dashboards available in your workspace. Click the Copilot Dashboard.
This Dashboard contains multiple charts that you can filter and compare with key engineering metrics:
To overlay a data point, click the dropdown in the upper left corner of a chart:
Apply filters to further configure your dashboard by:
Time range: In the upper right corner of the page, click the time range filter to apply a different time range. By default, the dashboard shows data from the last 30 days.
Display: In the upper right corner of the page, click Display and choose whether to view the graphs by day, week, or month.
Operation: In the upper right corner of a graph, click Average to open the dropdown menu for operations. You can choose from average, max, median, min, P95, and sum.
Use the AI Impact Dashboard to track and correlate key AI adoption metrics with engineering performance metrics. These insights allow you to identify issues in your processes and take action to drive improvements.
Example scenario: You view your dashboard and notice that teams who have higher rates of AI adoption have a lower average cycle time (the time it takes for a single PR to go through the entire coding process). However, you also see a spike in incident frequency for those teams.
Identify the issue: You conclude that their process is more efficient, but as a tradeoff, they're shipping code that causes more incidents. You brainstorm with the affected engineering teams to learn how their processes have changed since adopting AI. You learn that SonarQube code coverage is lower than it has been previously.
Take action: You already have a Scorecard launched, and you see that the rule "Test coverage minimum met" is failing. You can create an Initiative on that Scorecard to ask developers to meet a particular rule by a specified deadline. In this case, you create an Initiative that asks the developers to pass the failing rule (i.e., they need to ensure higher than 80% test coverage) by the end of the month.
Initiatives and Scorecard tasks appear as to-do items on a developer's homepage in Cortex.
Define your own custom time series metrics to power the analytics in your Eng Intelligence dashboard, drawing from your integrations with Cortex or your organization's internal data. In addition to seeing these in Eng Intelligence, you’ll also be able to view these in the entity pages and use them in Scorecards.
After defining a custom metric, the metric data can be provided via the following methods:
API: Post custom metric data to Cortex via the Cortex API.
CQL: Compute data based on a CQL query that is evaluated by Cortex every 12 hours.




Add the following permission to your custom GitHub App: admin:org read
Personal Access Token
Add the following permission to your PAT: admin:org read
In Cortex, you must have the View Eng Intelligence permission to view the dashboard, and you must have the Configure Eng Intelligence permission to configure it.
Review the dashboard: As developers work toward meeting the requirements, check back in on the dashboard for a real-time update into their progress.





Custom metrics are currently available for use in the All Metrics (Classic View) only. Support for displaying custom metrics in the new Metrics Explorer and Dashboard experiences is on the roadmap.
Some examples of custom metrics you might want to surface in Eng Intelligence include:
ServiceNow incident data
Example CQL: custom("servicenow-incidents").length
Custom-computed SLO metrics
Homegrown tools that generate metrics through custom data
Code coverage or vulnerability metrics from existing integrations
Example CQL: codecov.codeCoverage() or sonarqube.metric("coverage")
If you are adding static or slowly changing metadata to entities, consider adding custom data instead of custom metrics. Learn more about the differences in Adding custom data.
Before configuring custom metrics, your user must have the following permissions set in Cortex:
Configure Eng Intelligence Custom Metrics
This permission allows you to create, edit, and delete a custom metric definition. The fields that can be edited are the name, filter, and CQL expression. This permission also includes the ability to publish the custom metric.
Manage Eng Intelligence Custom Metric data
This permission is only required if you are managing custom metrics via the API. It allows you to hit the public API to add and delete data points for an API Custom Metric.
See the tabs below for instructions on creating custom metrics with CQL or via the API.
The data retention period for custom metric data is 24 months.
In Cortex, navigate to the Eng Intelligence custom metrics settings.
In the upper right side of the list of custom metrics, click Add metric.
Fill in the "Add metric" form:
Name: Enter the name of the custom metric that will appear in Eng Intelligence.
Key: Enter a unique identifier for the custom metric made up of letters, digits, and hyphens, e.g., my-custom-metric.
Selection and Entity types: Choose whether to include or exclude specific entity types.
Ingestion method: Choose CQL.
CQL expression: Enter a CQL expression. The result of the expression must be a number, otherwise it will fail validation.
Draft: Toggle this setting on to save the custom metric in draft state. Draft custom metrics are only visible to users with permission to configure custom metrics and do not display in the All Metrics view. Toggle this setting off to immediately enable the metric within All Metrics.
Click Add metric.
Cortex will evaluate the CQL expression every 12 hours to check for new metric data.
In Cortex, navigate to the Eng Intelligence custom metrics settings.
In the upper right side of the list of custom metrics, click Add metric.
Fill in the "Add metric" form:
Name: Enter the name of the custom metric that will appear in Eng Intelligence.
Note that you cannot update the key or the type, but you can edit the name, entity type filter, CQL expression, and whether the metric is published. If you need to change the key or the type, you will need to archive the current metric and re-create it with a new key.
If you edit a CQL custom metric definition, the older values of the metric will no longer be accessible.
To edit a custom metric:
In Cortex, navigate to the Eng Intelligence custom metrics settings.
In the list of metrics, locate the one you want to edit. Click the pen icon on the right side of the metric.
Make any necessary changes, then click Save metric.
In Cortex, navigate to the Eng Intelligence custom metrics settings.
From the main nav of Cortex, click Eng Intelligence > All Metrics. The custom metrics will appear alongside the other Eng Intelligence key metrics in the table.
When custom metric data points are added, Cortex displays results only through the end of the previous day.
View custom metric data on an entity page
While viewing an entity details page, click Custom metrics from the sidebar to view metrics for that entity:
You can customize your view by reordering the columns or hiding columns.
In Cortex, navigate to the Eng Intelligence appearance settings.
Rearrange columns: In the list of metrics, click and drag each tile into your preferred order.
Hide a column: Click the trash icon on the right side of a metric tile.
To add a column back into the view, select it from the Columns drop-down. In the dropdown, the columns with a checkmark have been added already. The columns without a checkmark have been hidden.
When you are done reordering or hiding columns, click Save changes at the bottom of the page.
Key: Enter a unique identifier for the custom metric made up of letters, digits, and hyphens, e.g., my-custom-metric.
Selection and Entity types: Choose whether to include or exclude specific entity types.
Ingestion method: Choose API.
Publish immediately: Toggle this setting on to make this data immediately visible to all users with access to Eng Intelligence.
Click Save metric.
After defining the metric, you can post metric data to it via the Cortex API. The API endpoints for adding custom metric data points default to the current day's date and time, but note that Cortex will not display metrics until the end of the previous day.
Bulk-add metric data via API
When using the API, it is possible to backfill custom metric data up to two years.
Note that bulk creation of metric data via the API is subject to rate limits and cardinality limits.
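As a rough sketch of what posting a data point could look like, the snippet below uses Python's requests library with a Bearer token. The endpoint path, payload fields, and metric key are illustrative assumptions rather than the documented contract, so check the Cortex API reference for the real endpoint and schema.

import requests
from datetime import datetime, timezone

CORTEX_API_TOKEN = "<your API token>"

# NOTE: illustrative placeholder path and payload -- consult the Cortex API
# reference for the actual custom metrics endpoint and request schema.
url = "https://api.getcortexapp.com/api/v1/eng-intel/custom-metrics/my-custom-metric/data"

payload = {
    "entityTag": "my-service",   # the Cortex entity the data point belongs to (assumed field)
    "value": 42.0,               # custom metric values must be numeric
    "timestamp": datetime.now(timezone.utc).isoformat(),  # defaults to now if omitted
}

resp = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {CORTEX_API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()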





In addition to the Metrics Explorer and out-of-the-box dashboards, the classic table view of Eng Intelligence is available for reviewing key metrics.
In the All Metrics page, view metrics pulled in from the Cortex deploys API, version control integrations (Azure DevOps, Bitbucket, GitHub, and GitLab), Jira, and PagerDuty.
To view, click Eng Intelligence > All Metrics in the main nav:
Review trends in Eng Intelligence and use that knowledge to inform your Scorecards. While viewing All Metrics, in the upper right corner of the page click Create Scorecard. You will be redirected to a configurable Scorecard template that measures performance, activity, and flow metrics that impact productivity.
The All Metrics view aggregates data from your connected entities to calculate critical metrics based on your organization's priorities. The data is presented by team, group, or individual, and can be filtered by time range. Cortex provides a set of built-in metrics, but you can also create custom metrics to track here.
These values are recalculated every hour. For count metrics (e.g., PRs opened), 0 is displayed if no data is available. For average metrics (e.g., average PR open to close time), N/A is displayed if no data is available to calculate averages.
By default, All Metrics displays data from the last 7 days.
To filter by time range: In the upper right corner of Eng Intelligence, click Last 7 days, then select a new time range for your metrics display:
To filter by team, group, or owner:
Click Filter in the upper right corner.
Click into Group, Owner, or Team, and select filter options.
Click Apply.
By default, each Team entity in Cortex is displayed in its own dedicated row. To group by the team hierarchies you've created, click View as hierarchy.
By default, All Metrics displays Team data. In the upper left corner, click the Team dropdown to select a different entity type:
After you have applied user labels, you can group by labels.
Click the Group by dropdown and select a label you want to group by. The grouping will be added as a row to the metrics table, along with separate rows for each member of the grouping.
To better understand the data behind a trend you see, click an entity to open a side panel with more information:
Under the Related activity tab, see available metrics and recent activity.
Under the Trends tab, see a historical performance graph for each metric.
In the upper right corner of the panel, you can adjust the time range for the graphs to be anywhere between the last 7 days and 6 months. This updates the graph view and also applies to the table, so all metrics reflect the new timeframe.
In the upper right corner, click Display. In this drop-down, you can choose whether to display entities in their associated hierarchies and you can select a Scorecard.
When you select a Scorecard, Scorecard performance is overlaid in Eng Intelligence when grouped by team or service. This view is not available when grouping by group, user, or owner. The icon representing the Scorecard level achieved by each entity will appear next to the entity name:
Users with the Configure custom metrics permission can create custom metrics for All Metrics, or you can use the built-in metrics listed below.
These metrics are pulled from the Cortex deploys API, version control integrations (Azure DevOps, Bitbucket, GitHub, and GitLab), Jira, and PagerDuty.
From the Eng Intelligence appearance settings, users with the Configure Eng Intelligence permission can also choose which columns to display and adjust the order of columns in the All Metrics view.
Under Settings > Eng Intelligence, users with the Configure Eng Intelligence permission can set filters for some pre-defined metrics:
Under Deploys, select the deploy environments you want to include in the calculation of deploy frequency and deploy failure rate. If none are selected, all deploys will be included.
Under Pull requests, select the authors you want to exclude from the calculation of PR-related metrics.
If none are selected, PRs from all authors will be included.
User labels in Eng Intelligence allow you to group users into cohorts to analyze metrics based on different factors. This can be useful for benchmarking one engineer’s metrics against the average within a cohort, comparing metrics between engineers who use different tools to complete their work, and understanding metrics by different variables: location (e.g., in office or remote), engineer level (staff vs. lead engineer), tech stack (frontend vs. backend), and more.
Users who have the Configure user labels permission can create and apply labels.
The instructions below describe how to use this feature in the Cortex UI. See the Cortex API documentation for instructions on creating and managing user labels programmatically.
Create a user label
In Cortex, navigate to the Eng Intelligence user labeling settings:
Click your avatar in the lower left then click Settings.
Under Eng Intelligence, click User labeling.
In the upper right corner, click Create label
After saving, the label will appear under the Label management tab in the Eng Intelligence settings page.
View applied user labels
In the Eng Intelligence settings, under the User labeling tab, you can view a list of users and their applied labels. Note that these labels are only displayed in Eng Intelligence, and not in other pages within Cortex.
Assign a user label to a user
In Cortex, navigate to the Eng Intelligence user labeling settings.
In the list, locate the user you want to add a label to. Under the "Labels" column for that user, click the pencil icon.
In the side panel, click into the dropdown to select a pre-existing label. To create a new label, type in a name then click +Create in the dropdown.
Assign user labels in bulk
In Cortex, navigate to the Eng Intelligence user labeling settings.
Check the boxes next to the users you want to edit. As you check names, a banner will appear at the bottom of the page showing how many users are selected. In that banner, click Edit labels.
In the bulk edit modal, enter the labels you want to add to the users, then click Set labels.
After applying labels to users, you can while viewing Eng Intelligence metrics.
You can add custom groupings to Jira Issues based on labels, issue types, and components. The number of tickets completed for each grouping will be calculated in Eng Intelligence using the custom name you configure for the grouping.
Navigate to the Eng Intelligence settings page and open the Jira issue grouping settings.
On the right side of the page, click Add issue grouping.
In the modal, configure the issue grouping:
Name: Enter a name for the grouping.
Average PR open to close time is related to other metrics, such as time to review and bottlenecks in average PRs reviewed each week. The key here is to examine the time and quantity of a particular activity.
Note that if some teams are using draft pull requests, their numbers may be higher.
Note that while this metric provides useful insight, weekly PRs merged may be a more meaningful figure.
You may be spending too much time in the review stage if this figure is high, but you have a low number of commits and a low number of merged pull requests. If this is the case, other parts of the PR lifecycle may be at risk.
Note that if some teams are using draft pull requests, their numbers may be higher.
Ticket 2: 12/1/2023, Entity 1
Ticket 3: 1/15/2024, Entity 2
Ticket 4: NULL, Entity 2
Entity 1 has 1 ticket completed during the timeframe (Ticket 1). Entity 2 has 1 ticket completed during the timeframe (Ticket 3).
Ticket 2: 1/17/2024, null (0) points, Entity 1
Ticket 3: 12/1/2023, 5 points, Entity 1
This ticket does not fall within the selected time period.
Ticket 4: 1/15/2024, 8 points, Entity 2
Ticket 5: Null, 2 points, Entity 2
This ticket does not fall within the selected time period.
Entity 1 has 3 points. Entity 2 has 8 points.
Ticket 2: 1/10/2024, 5 days, Entity 1
Ticket 3: 12/1/2023, 5 days, Entity 1
Ticket 4, 1/15/2024, 8 days, Entity 2
Ticket 5: Null, 2 days, Entity 2
Entity 1: (3 days + 5 days) / 2 = 4 days. Entity 2: 8 days / 1 = 8 days.
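Here is a small sketch of the same averaging logic in Python, using made-up tickets: unresolved tickets and tickets resolved outside the selected window are excluded before averaging per entity.

from datetime import date

window_start, window_end = date(2024, 1, 1), date(2024, 2, 1)

# (resolution date or None, days to resolve, entity) -- made-up data.
tickets = [
    (date(2024, 1, 10), 5, "Entity 1"),
    (date(2023, 12, 1), 5, "Entity 1"),   # resolved outside the window: excluded
    (date(2024, 1, 15), 8, "Entity 2"),
    (None, 2, "Entity 2"),                # unresolved: excluded
]

per_entity = {}
for resolved, days, entity in tickets:
    if resolved is not None and window_start <= resolved <= window_end:
        per_entity.setdefault(entity, []).append(days)

for entity, values in per_entity.items():
    print(entity, sum(values) / len(values), "days on average")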
Sprint 2: 12/15/2023 to 1/1/2024 (active)
Sprint 3: 1/1/2024 to 1/15/2024 (active)
Sprint 4: 2/1/2024 to 2/15/2024 (active)
There are 5 Jira tickets aligned with varying sprints, with varying resolution dates:
Ticket 1: 12/14/2023, Sprint 1, Entity 1
Excluded because of inactive sprint
Ticket 2: 2/15/2024, Sprint 4, Entity 1
Resolution date not within selected timeframe, but in active sprint. This counts toward the total number of tickets.
Ticket 3: 1/14/2024, Sprint 3, Entity 1
Resolution date is within selected timeframe and sprint is active. This counts as a resolved ticket and toward the total number of tickets.
Ticket 4: 12/17/2023, Sprint 2, Entity 2
Resolution date occurred before the timeframe, and in active sprint. This counts as a resolved ticket and toward the total number of tickets.
Ticket 5: Null resolution date, Sprint 2, Entity 2
Resolution date not within selected timeframe, but in active sprint. This counts toward the total number of tickets.
To calculate the metric, we look at the number of tickets resolved before both the end of the sprint and the end of the evaluation window, divided by the total number of tickets during the selected timeframe: Entity 1: 1 resolved ticket / 2 total = 50%. Entity 2: 1 resolved ticket / 2 total = 50%.
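The sketch below walks through the same rule in Python with tickets modeled on the example (details the example omits, such as the Sprint 1 dates, are made up): tickets in inactive sprints are dropped, every remaining ticket counts toward the total, and a ticket counts as resolved only if its resolution date falls before both the end of its sprint and the end of the evaluation window.

from datetime import date

window_end = date(2024, 2, 1)   # end of the evaluation window

# (sprint end, sprint active?, resolution date or None, entity) -- made-up data.
tickets = [
    (date(2023, 12, 14), False, date(2023, 12, 14), "Entity 1"),  # inactive sprint: excluded
    (date(2024, 2, 15), True, date(2024, 2, 15), "Entity 1"),
    (date(2024, 1, 15), True, date(2024, 1, 14), "Entity 1"),
    (date(2024, 1, 1), True, date(2023, 12, 17), "Entity 2"),
    (date(2024, 1, 1), True, None, "Entity 2"),
]

totals, resolved = {}, {}
for sprint_end, active, resolution, entity in tickets:
    if not active:
        continue  # tickets in inactive sprints are excluded entirely
    totals[entity] = totals.get(entity, 0) + 1
    if resolution and resolution <= sprint_end and resolution <= window_end:
        resolved[entity] = resolved.get(entity, 0) + 1

for entity, total in totals.items():
    print(entity, f"{resolved.get(entity, 0) / total:.0%}")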
Calculation
Assume that you have selected a time period of 1/1/2024 - 2/1/2024.
There are two groupings configured:
Grouping 1: Label: Engineering and Issue type: Bug.
Grouping 2: Component: Backend
There are 5 Jira tickets with varying resolution dates:
Ticket 1: 1/5/2024, Label: Engineering, Issue type: Story, Entity 1
Ticket 2: 1/10/2024, Label: Engineering, Issue type: Bug, Entity 1
Ticket 3: 1/20/2024, Component: Backend, Entity 1
Ticket 4: 1/15/2024, Component: Backend, Entity 2
Ticket 5: NULL, Label: Engineering, Issue type: Bug, Entity 2
Entity 1:
For Grouping 1, there was 1 ticket within the time period (Ticket 2).
For Grouping 2, there was 1 ticket within the time period (Ticket 3).
Entity 2:
For Grouping 1, there were 0 tickets within the time period.
For Grouping 2, there was 1 ticket within the time period (Ticket 4)
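Here is a minimal sketch of the grouping count above in Python, with made-up ticket fields; it assumes that when a grouping specifies both a label and an issue type, a ticket must match both. Tickets with no resolution date or a date outside the window are skipped, and each remaining ticket is counted once per grouping it matches.

from datetime import date

window_start, window_end = date(2024, 1, 1), date(2024, 2, 1)

# Grouping predicates mirroring the configuration above (assumed AND semantics).
groupings = {
    "Grouping 1": lambda t: "Engineering" in t["labels"] and t["type"] == "Bug",
    "Grouping 2": lambda t: "Backend" in t["components"],
}

tickets = [  # made-up tickets echoing the example
    {"resolved": date(2024, 1, 5),  "labels": ["Engineering"], "type": "Story", "components": [], "entity": "Entity 1"},
    {"resolved": date(2024, 1, 10), "labels": ["Engineering"], "type": "Bug",   "components": [], "entity": "Entity 1"},
    {"resolved": date(2024, 1, 20), "labels": [], "type": "Task", "components": ["Backend"], "entity": "Entity 1"},
    {"resolved": date(2024, 1, 15), "labels": [], "type": "Task", "components": ["Backend"], "entity": "Entity 2"},
    {"resolved": None, "labels": ["Engineering"], "type": "Bug", "components": [], "entity": "Entity 2"},
]

counts = {}
for t in tickets:
    if t["resolved"] is None or not (window_start <= t["resolved"] <= window_end):
        continue  # unresolved tickets and tickets outside the window are skipped
    for name, matches in groupings.items():
        if matches(t):
            counts[(t["entity"], name)] = counts.get((t["entity"], name), 0) + 1

print(counts)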
Fill out the “Create label” form:
Name: Enter a descriptive name, e.g., Location.
Description: Optionally enter a description, such as "This label helps us understand metrics by location."
Values: Enter possible values for the label, e.g., New York, California, Remote.
Click Create label.
Type: In the dropdown, select at least one issue type you want to track.
Component: Enter the name of the Jira component you want to track.
Label: Enter the name of the Jira label you want to track.
Click Add issue grouping.










Metrics Explorer enables you to analyze metric trends over time and drill into specific data points for detailed investigation. Use this tool to understand patterns in your development process and identify areas for improvement.
You can save your favorite Metrics Explorer views as report modules, allowing you to revisit key metrics without needing to reapply filters or display settings. Saved report modules make it easy to monitor key metrics, like Cycle Time for a particular team or over a given time period, on a consistent basis, and they can be used to build Custom Dashboards.
To view, click Eng Intelligence > Metrics Explorer from the main nav.
On the left side of Metrics Explorer, see a list of all saved metric modules:
On the page, click the metric name in the upper left corner. By default, Cycle time is displayed.
A modal will appear.
On the left side of the modal, select a metric. On the right side, depending on which metric you choose, you can select an operation.
Next, you can optionally segment the metrics and apply filters before viewing the metric.
You can segment the metrics by person, entity, PR, or owner, and you can filter a graph by time range, teams, author, and repository. You can also sort the columns.
Once you've configured a view you'd like to revisit with a specific metric, filters, and time ranges, you can save it as a report module:
While viewing a module, click Save in the upper right corner of the page. Enter a name and description for the module.
After saving, your report will appear in the module list in Metrics Explorer, where you can:
Add it to a Custom Dashboard
Reopen it at any time without reconfiguring filters
Rename, update metric/filter settings, and re-save as needed
Create a duplicate of the module: "Save As" a new module to create a copy of the settings as a new starting point
After selecting a data point and applying filters, you can share the browser URL with other people who have access to your Cortex workspace. The URL query parameters include timestamps, so the shared Metrics Explorer page will reflect the same results across different timezones.
Expand the tiles below to learn how the metric is calculated and the best practice for measuring success.
AI usage metrics are pulled from GitHub Copilot.
Deploy metrics are pulled from the Cortex deploys API.
Incident metrics are pulled from PagerDuty.
Project management metrics are pulled from Jira.
Version control metrics are pulled from Azure DevOps, Bitbucket, GitHub, and GitLab.
The available filters differ based on the category of the metric you're viewing.
Expand the tiles below to learn about each of the filters available per metric category.
See the full list of available metrics below.
At the bottom of the modal, click View metric.
A graph of the metric is displayed. By default the data uses a time range of the last 30 days, but you can select a different time range.
Below the graph, see an overview of metrics that can be segmented by team, author, repository, entity, and more. Click into any of the metric points at the bottom of the page to drill in, seeing the data behind the metric.
Share a link with other users
Delete when no longer needed
Incident URL
Date triggered
Date resolved
Urgency
Time to resolution
Best practice: There is no universal benchmark. It is recommended to track trends and establish baselines within your organization.
Status
Incident URL
Date triggered
Date resolved
Urgency
Time to resolution
Best practice: These benchmarks may differ depending on how critical a system is. For less critical systems, aim for a measure of less than 1 day. For critical systems, aim for under 1 hour.
Assignee email
Work item title
Work item status
Work item type
Created date
Resolved date
Priority
Labels
Components
Story points completed
Best practice: Establish a baseline per team, as story point values can be unique to each team. Use this metric to understand capacity trends.
Work item assignee
Assignee email
Work item title
Work item status
Work item type
Created date
Resolved date
Priority
Labels
Components
Status category
Work item lead time
Best practice: Lower lead times indicate a smoother process. Track trends to identify process inefficiencies and improve throughput.
Assignee email
Work item title
Work item status
Work item type
Created date
Resolved date
Priority
Labels
Components
Best practice: Review this measure alongside how many story points have been completed; this enables you to balance both quantity and effort, ensuring teams aren't favoring lower value tasks in exchange for higher numbers of items completed.
Assignee email
Work item title
Work item status
Work item type
Created date
Resolved date
Priority
Labels
Components
Best practice: Monitor this metric alongside delivery rates. If items are created faster than completed, it signals queue backlogs.
PR status
First commit date
Date closed
Cycle time
Best practice: Aim for lower cycle times to ensure a faster feedback loop and reduced context switching. Rather than benchmarking the overall cycle time, set benchmarks for the individual parts of the cycle (time to open, time to approve, time to first review, time to merge).
PR status
Date closed
Best practice: A high ratio of merged-to-closed PRs signals an effective review cycle.
PR status
Date closed
Best practice: A high ratio of merged-to-closed PRs signals an effective review cycle.
PR status
First commit date
Number of comments per PR
Best practice: This measure indicates review depth and collaboration. A lower number may signal superficial reviews.
PR status
First commit date
Date opened
Time to open
This is the time between the first commit date and the date opened.
Best practice: A larger number across projects can signal distributed ownership, while a consistently low number can point to bottlenecks or team burnout.
PR status
Date closed
Best practice: Persistent backlog can indicate process inefficiencies, such as a slow review process.
Review date
Best practice: A higher number can indicate complex changes or low initial quality. A lower number could indicate approvals without thorough review and validation.
PR status
Number of lines added
Number of lines deleted
PR size
Best practice: Smaller PRs lead to faster reviews, fewer mistakes, and increased velocity. Aim for less than 400 lines, but adjust this benchmark as needed to improve review quality and velocity.
PR status
Date opened
Date closed
Best practice: Higher success rates can indicate better quality code and reviews, but note that it is also important to understand the reasoning when a PR is rejected.
PR status
Review date
Approval date
Time to approve
This is the time between the review date and the approval date.
Best practice: It is recommended to keep review time under 24 hours to maintain velocity and avoid a backlog of PRs.
PR status
Date opened
First review time
Time to first review
This is the time between the open date and the first review time.
Best practice: It is recommended to target first review within 24 hours to ensure prompt feedback and smooth throughput.
PR status
Approval date
Date closed
Time to merge
This is the time between the approval date and the date closed.
Best practice: There is not an explicit benchmark for this metric, but note that reducing this time to under an hour boosts code velocity. Using a tool that enforces automated merges can cut down delays.
PR status
First commit date
Date opened
Time to open
This is the time between the first commit date and the date opened.
Best practice: There is not an explicit benchmark for this metric, but improving time to open depends on efficient triage of work; focus on minimizing idle time before work starts.
Group: View data only for entities associated with a specific group.
Individual owner: View data for PRs on entities owned by a specific individual.
Example: Service X is owned by Pat. Filtering by Pat will show you PRs on Service X even if the PRs were created by other users.
Label: View data only for PRs tagged with a specific label.
Repository: View data only for a specific repository.
Reviewer: View data for PRs that a specified person reviewed.
Reviewer team: View data for PRs that members of a specified team reviewed.
Reviewer user label: View data for PRs that users with a specified user label reviewed.
Status: View data for PRs in specified statuses.
Team: View data for PRs authored by members of a specified team.
Team owner: View data for PRs on entities owned by a specified team. Note that this does not include PRs on entities owned by individual members of the team.
Example: Service X is owned by Team A. Filtering by Team A will show you PRs on Service X even if the PRs were created by another team.
User: View data for PRs authored by a specified user.
User label: View data for PRs authored by users with a specific user label.
Note that Cortex automatically creates an "AI Usage" label for GitHub Copilot usage, which includes values "AI User (last 7 days)" and "Non-AI User (last 7 days)."
User label name: View data for PRs authored by users with a specific user label value.
Entity: View data only for a specific entity.
Group: View data only for entities associated with a specific group.
Individual owner: View data for deploys on entities owned by a specific individual.
Example: Service X is owned by Pat. Filtering by Pat will show you deploys on Service X even if the deploys were performed by other users.
Team: View data for deploys performed by members of a specified team.
Team owner: View data for deploys on entities owned by a specified team. Note that this does not include deploys on entities owned by individual members of the team.
Example: Service X is owned by Team A. Filtering by Team A will show you deploys on Service X even if the deploys were performed by another team.
User: View data for deploys performed by a specified user.
User label: View data for deploys performed by users with a specific user label.
User label name: View data for deploys performed by users with a specific user label value.
User: View data for AI usage by a specified user.
User label: View data for AI usage by users with a specific user label.
Note that Cortex automatically creates an "AI Usage" label for GitHub Copilot usage, which includes values "AI User (last 7 days)" and "Non-AI User (last 7 days)."
User label name: View data for AI usage by users with a specific user label value.
Entity: View data only for a specific entity.
Group: View data only for entities associated with a specific group.
Individual owner: View data for work items on entities owned by a specific user.
Example: Service X is owned by Pat. Filtering by Pat will show you work items on Service X even if the work items were created by other users.
Label: View data only for work items tagged with a specific label.
Project: View data only for work items belonging to a specific project.
Sprint: View data only for work items assigned to a specific sprint.
Status: View data for work items in specified statuses.
Team: View data for work items assigned to members of a specified team.
Team owner: View data for work items on entities owned by a specific team. Note that this does not include work items on entities owned by individual members of the team.
Example: Service X is owned by Team A. Filtering by Team A will show you work items on Service X even if the work items were created by another team.
User: View data for work items assigned to a specified user.
User label: View data for work items assigned to users with a specific user label.
User label name: View data for work items assigned to users with a specific user label value.
Work item type: View data for work items of a specific type.
Group: View data only for entities associated with a specific group.
Individual owner: View data for incidents on entities owned by a specific user.
Example: Service X is owned by Pat. Filtering by Pat will show you incidents on Service X even if the incidents were created by other users.
Team: View data for incidents assigned to members of a specified team.
Team owner: View data for incidents on entities owned by a specific team. Note that this does not include incidents on entities owned by individual members of the team.
Example: Service X is owned by Team A. Filtering by Team A will show you incidents on Service X even if the incidents were created by another team.
User: View data for incidents assigned to a specified user.
User label: View data for incidents assigned to users with a specific user label.
User label name: View data for incidents assigned to users with a specific user label value.
Reviewer team: The team of the reviewer of the PR.
Author: The individual author of the PR.
Reviewer: The individual reviewer of the PR.
Author user label: The user label associated with the PR author.
Reviewer user label: The user label associated with the PR reviewer.
Entity
Group: The group associated with the entity.
Entity: Data is segmented by individual entities.
Pull request
Repository: The repository associated with the PR.
Status: The status of the PR.
Label: The label associated with the PR.
Owner
Team owner: Team owners of the entity associated with the PR.
Individual owner: Individual owners of the entity associated with the PR.
Deployments
Person
Team: The team of the user who performed the deployment.
Deployer: The person who performed the deployment.
User label: The user label associated with the user who performed the deployment.
Entity
Group: The group associated with the entity.
Entity: Data is segmented by individual entities.
Deployment
Environment: The environment sent via the Cortex deploys API.
Owner
Team owner: Team owner of the entity associated with the deployment.
Individual owner: Individual owner of the entity associated with the deployment.
Incidents
Person
Team: The team of the user assigned to an incident.
Incident assignee: The individual assigned to an incident.
User label: The user label associated with the user assigned to an incident.
Entity
Group: The group associated with the entity.
Entity: Data is segmented by individual entities.
Owner
Team owner: Team owner of the entity associated with the incident.
Individual owner: Individual owner of the entity associated with the incident.
Project management (work items)
If you have configured a unique defaultJQL per entity, this is not supported in filtering or segmenting data in Metrics Explorer.
Person:
Assignee team: The team of the user assigned to the work item.
Assignee: The individual assigned to the work item.
Assignee user label: The user label associated with the work item assignee.
Entity
Group: The group associated with the entity.
Entity: Data is segmented by individual entities.
Project management
Project: The projects associated with the work item.
Sprint: The sprint associated with the work item.
Owner
Team owner: Team owners of entities associated with the work item.
Individual owner: Individual owners of entities associated with the work item.
AI tools
Person:
Team: The teams of the user.
User: The individual user.
User label: The user label associated with the user.









