
Eng Intelligence

Spot problems early and drive action in Cortex

Eng Intelligence provides you with key metrics and high-level data to gain insights into services, bottlenecks in the pull request lifecycle, incident response, and more. These metrics can give context to cross-team activities and indicate areas that need deeper investigation, allowing you to quickly remediate and improve productivity across your teams.

Eng Intelligence features include:

  • Dashboards:

    • The DORA Dashboard gives clear insights into how fast, reliable, and efficient your development practices are.

    • The Velocity Dashboard visualizes your team's success across the software development lifecycle.

    • The Copilot Dashboard provides insight into Copilot adoption and engagement across your engineering teams.

    • Custom Dashboards allow you to create shared views of the key engineering metrics that matter most to your organization.

  • Metrics Explorer: Analyze trends over time and drill into the underlying data for investigation. Explore metrics for deploys, version control, project management (Jira), and incident management (PagerDuty).

  • Custom metrics: The ability to define your own custom time series metrics to power the analytics in Eng Intelligence, drawing from your integrations with Cortex or your organization's internal data. Currently, custom metrics are only available in the All Metrics (Classic View), but they are planned for the new experiences.

If you do not have Eng Intelligence in your Cortex instance, please contact your Cortex Customer Success Manager.

Learn about measuring success with Eng Intelligence below.

Eng Intelligence overview video

Measuring success with Eng Intelligence in Cortex

In practice, success means that teams are not only tracking metrics like cycle time, deployment frequency, change failure rate, and mean time to recovery (MTTR), but they are also taking action to make progress based on how the metrics are trending. The focus should be less about hitting specific benchmarks and more about creating continuous feedback loops that drive consistent improvement.

Eng Intelligence goals

Eng Intelligence metrics provide clear, actionable insights into how your teams work, allowing you to find opportunities to improve delivery speed, reliability, and quality.

Before setting goals, we recommend establishing a baseline on your top-priority metrics.

Early signs of improvement

Leading indicators show that teams are learning from Eng Intelligence metrics and effectively making changes in their day-to-day work. For example:

  • Faster feedback loops: Time to first PR review, Time to PR approval, and Work item lead time steadily decrease over a given period.

  • Smaller, more frequent PRs: PR size trends downward while merged PR count increases.

  • Balanced workloads: More unique PR authors over time, showing distributed contribution. For project management metrics, a balance of Work items created and Work items completed shows healthy workload management. The backlog of work items created stabilizes or decreases while work items are steadily being completed, indicating the team is successfully matching capacity to demand.

  • Proactive incident management: Change failure rates decrease as deploy insights are integrated.

Outcomes of improvement

Lagging indicators can measure whether your organization is seeing tangible improvements in delivery and reliability. For example:

  • Cycle time improves from baseline.

  • Deployment frequency increases without increased failure rates.

  • MTTR decreases after incidents.

  • Higher engineering satisfaction is reported in internal surveys as bottlenecks are resolved.

Best practices for reviewing Eng Intelligence metric trends

To make your data more meaningful, we recommend the following best practices:

  • Segment by teams and services to avoid skewed organization-wide averages.

  • Look for trends rather than snapshots. Success is measured over time; a short-term fluctuation can be misleading.

  • When you identify process gaps with Eng Intelligence, take action to drive adoption of standards: Use Scorecards or Initiatives to encourage your teams to get their owned services aligned with standards. For example, in response to a long time to first review, enforce a lower SLA for reviews (such as 24 hours). Use Workflows to streamline repeatable tasks for engineers so they can focus on other work.

Defining success

The following are common ways to confirm successful use of Eng Intelligence features:

  • Leadership uses the dashboards to make strategic decisions (e.g., during planning cycles).

  • Teams are actively using the dashboards to spot bottlenecks.

  • Metrics drive measurable changes in process, tooling, and culture.

  • Improvements are sustained and repeatable, not just short-term spikes.

Accessing Eng Intelligence

Prerequisites to using Eng Intelligence features

Cortex users with the View Eng Intelligence permission can access Eng Intelligence. Users with the Configure Eng Intelligence permission can configure Eng Intelligence settings.

Before using Eng Intelligence, make sure you have configured your version control providers, PagerDuty, and Jira with the proper permissions. See each integration's documentation page for required permissions and configuration instructions:

  • Azure DevOps

  • Bitbucket

    • Bitbucket data in Eng Intelligence is in private beta. Please contact your Cortex Customer Success Manager for access.

    • Because of rate limits, Bitbucket ingestion in Eng Intelligence is limited to repositories that are mapped to an entity in Cortex.

    • When using Bitbucket in Eng Intelligence, it is highly recommended to use the workspace token configuration.

  • Deploys

  • GitHub

  • GitLab

  • Jira

  • PagerDuty

To get started, click Eng Intelligence from the main nav in Cortex.

See the docs for more information on each part of Eng Intelligence:

  • DORA Dashboard

  • Velocity Dashboard

  • Metrics Explorer

  • All Metrics (Classic View)

  • Custom Metrics

    DORA Dashboard

    The DORA framework (DevOps Research and Assessment) is a set of metrics that help teams measure software delivery performance. Integrated into your internal developer portal, DORA metrics give engineering teams clear insights into how fast, reliable, and efficient their development practices are. This empowers teams to track progress, identify bottlenecks, and drive continuous improvement — all within the same place they manage services and deploy code. Read more about these metrics in the dora.dev guide.

Use the DORA Dashboard in Cortex to evaluate the speed and stability of your software delivery process. Visualize your team’s success across the software delivery lifecycle and get a clear view of cycle time, deployment frequency, change failure rate, and time to resolution.

    Looking for additional resources on enforcing DORA standards in Cortex?

    • See Solutions: DORA Metrics for guidance on using Cortex features to improve your DORA metrics.

    • Check out the Cortex Academy "Operationalizing DORA Metrics" course, available to all Cortex customers and POVs.

    • See Cortex's "Operationalizing DORA Metrics" webinar with Google Cloud's DORA program leader, Nathen Harvey, and learn about the key takeaways in our blog.

    Using the DORA Dashboard

    To view the dashboard, click Eng Intelligence > DORA Dashboard in the main nav.

    Adjust the time range

    By default, the dashboard displays data from the last month. To change the date range, click Last month and select a new date range.

    Apply a filter option

    For each chart, you can apply filters for group, owner, repository, user label, and more. To select filters, click Filter in the upper right corner of a graph.

    Metric visualizations

    You can select a different operation for the Cycle time and Time to resolution graphs. Options include average, sum, max, min, P95, and median. Both graphs display the average by default.

    Cycle time

    Cycle time represents the time it takes for a single PR to go through the entire coding process. Shorter cycle times indicate a more agile team that is able to quickly respond to changing needs.

    Calculation: The time from the first commit on a PR to when the PR is merged.

    Best practice: Aim for lower cycle times to ensure a faster feedback loop and reduced context switching. Rather than benchmarking the overall cycle time, set benchmarks for the individual parts of the cycle (time to open, time to approve, time to first review, time to merge).

    Note: This metric is not supported for Azure DevOps or Bitbucket.

    Deployment frequency

    Deployment frequency measures how often your team successfully releases code to production, serving as a key indicator of your delivery velocity and operational maturity. This DORA metric reflects your team's ability to deliver value continuously and respond quickly to market demands.

    Calculation: The number of deployments over a given period of time.

    Best practice: Depending on your organization, a successful benchmark could be multiple deployments per day or per week.

    Change failure rate

    Change failure rate measures the percentage of deployments that result in production failures, serving as a critical indicator of deployment stability and code quality. This DORA metric reveals how often your releases introduce bugs, outages, or performance issues that impact users.

    Calculation: Number of rollbacks / number of deployments created.

    Best practice: Aim for a failure rate less than 15%.

    Time to resolution (MTTR)

    Time to resolution measures how quickly your team recovers from production failures, reflecting your incident response capabilities and system resilience. Also known as mean time to recovery (MTTR), this metric indicates how well-prepared your team is to handle inevitable production issues.

    Calculation: Incident resolution time - incident opened time.

    Best practice: This benchmark may differ depending on how critical a system is. For less critical systems, aim for a measure of less than 1 day. For critical systems, aim for under 1 hour.
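
    To make these calculations concrete, here is a minimal sketch of how the four metrics above could be derived from raw deploy and incident records. The record shapes are hypothetical stand-ins for illustration, not Cortex's data model; Cortex computes these metrics for you from your integrations.

```python
# Illustrative only: hypothetical deploy/incident records, not Cortex's schema.
from datetime import datetime

deploys = [
    {"type": "DEPLOY", "at": datetime(2024, 1, 2)},
    {"type": "DEPLOY", "at": datetime(2024, 1, 9)},
    {"type": "ROLLBACK", "at": datetime(2024, 1, 10)},
]
incidents = [
    {"opened": datetime(2024, 1, 10, 9, 0), "resolved": datetime(2024, 1, 10, 9, 45)},
]

deploy_count = sum(1 for d in deploys if d["type"] == "DEPLOY")
rollbacks = sum(1 for d in deploys if d["type"] == "ROLLBACK")

# Deployment frequency: deployments over the selected period (here, per week).
weeks = 2  # length of the selected time range
deployment_frequency = deploy_count / weeks

# Change failure rate: number of rollbacks / number of deployments created.
change_failure_rate = rollbacks / deploy_count

# Time to resolution (MTTR): incident resolution time - incident opened time, averaged.
mttr_minutes = sum(
    (i["resolved"] - i["opened"]).total_seconds() for i in incidents
) / len(incidents) / 60

print(deployment_frequency, change_failure_rate, mttr_minutes)  # 1.0 0.5 45.0
```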


    Dashboards

    Eng Intelligence Dashboards provide engineering leaders and teams with real-time visualizations into key metrics, velocity and productivity trends, incident and quality insights, and historical organization-wide rollups. Dashboards are natively integrated with your Cortex catalog, Scorecards, and Initiatives, allowing you to move from insight to action.

    See the documentation linked below to learn more about each dashboard:

    • DORA Dashboard: Visualize the four key DORA metrics: Cycle time, deployment frequency, change failure rate, and MTTR.

    • Velocity Dashboard: Monitor version control metrics such as PR cycle time, PR size, and PR success rate, giving insight into your process efficiency and areas to target for improvement.

    • Custom Dashboards: Build tailored dashboards by combining metric visualizations into a unified, shareable view.

    Custom Dashboards (Public Beta)

    Combine metrics from multiple sources into a unified view

    Custom Dashboards is available to all customers as a Public Beta feature. You can provide us feedback here.

    Custom Dashboards make it easy to create shared views of the key engineering metrics that matter most to your organization. They allow you to:

    • Aggregate and display multiple metrics: Combine different saved report modules into a single dashboard.

    • Personalize views: Create dashboards tailored to specific teams or use cases, enabling focused monitoring.

      • For example, you might want to track specific incident response metrics, or you could create a comprehensive view of delivery by combining metrics across deployment, version control, and project management systems.

    • Share insights: Share dashboards within your organization, ensuring your stakeholders have access to relevant information without manual reporting.

    • Drive action: By surfacing the trends that matter to you, dashboards help you identify bottlenecks, track progress toward goals, and make data-driven decisions.

    Create a Custom Dashboard

    Prerequisites

    Before getting started:

    • Make sure you have configured metric modules and saved them in Metrics Explorer.

    • You must have the Configure Eng Intelligence permission.

    Add modules to the dashboard

    1. Navigate to Eng Intelligence > Dashboards. In the upper right corner, click Create Dashboard.

    2. Enter a name and description and choose an icon, then click Create Dashboard.

    3. Select an existing metric module to add to your dashboard.

      • To create a new module, click Create module. You will be redirected to the Metrics Explorer where you can configure a metric then save it as a module. You will need to navigate back to the Dashboards page to create a Custom Dashboard that contains your newly-created module.

    4. After selecting a module, you are redirected back to the dashboard. To add more modules, click the + icon on the right or below the module you added.

    Managing Custom Dashboards

    Share a Custom Dashboard

    Dashboards are automatically visible within your organization to users with the View Eng Intelligence permission, so everyone can view the same insights and stay aligned.

    To create a share link to send to another user, click the 3 dots icon in the upper right, then click Share.

    Edit a Custom Dashboard

    While viewing the dashboard, click Edit in the upper right corner. Go through the steps of adding modules to the dashboard, then click Save.

    Custom Dashboard examples

    See examples of Custom Dashboards below:

    Delivery Health Dashboard

    Create a dashboard for visibility into how work is flowing across teams. Combine deployment frequency with Jira-based metrics like work items completed, story points delivered, and cycle time to measure delivery throughput and identify bottlenecks.

    Incident Response Dashboard

    Create a dashboard to measure the effectiveness of your incident response process. Monitor time to resolution and incident frequency, while correlating with contributing factors such as deployment frequency and PR size to uncover root causes.

    Reliability Dashboard

    Create a dashboard to track the health of your services beyond MTTR. Monitor resolution time, incident frequency, change failure rate, and rollback frequency to understand trends of stability and resilience in your systems.

    Velocity Dashboard

    The Velocity Dashboard provides a one-stop shop for your software development lifecycle, helping you and your teams understand any bottlenecks in your process. Get a clear view of PR cycle time to identify bottlenecks and spot inefficiencies, compare pull request size against commit activity, and visualize your team’s success rate across the entire review process, all in one place.

    Using the Velocity Dashboard

    To view the dashboard, click Eng Intelligence > Velocity Dashboard in the main nav.

    Adjust the time range

    By default, the dashboard displays data from the last 7 days. To change the date range, click Last 7 days and select a new date range.

    Apply a filter option

    You can apply filters for team, owner, repository, user label, and more to narrow down the set of data you are reviewing. To select filters, click Filter in the upper right corner of a graph.

    Drill down

    For each metric, click on the card to drill down into the data for further analysis. After clicking into the card, you can segment the data and drill down further to view a list of data points:

    Cycle time breakdown

    Cycle time measures the complete journey of a pull request from first commit to merge into your main branch. Understanding this end-to-end process is crucial for identifying bottlenecks and optimizing your development workflow.

    Note: Cycle Time and Time to Open metrics are not supported for Azure DevOps or Bitbucket.

    The four stages of cycle time (a sketch of this breakdown follows the list):

    • Time to open: Development phase: First commit to PR creation

      • Best practice: There is not an explicit industry benchmark for this metric, but note that reducing the time to open depends on an efficient triage of work; focus on minimizing idle time before work starts.

    • Time to first review: Waiting period: PR opened to initial reviewer engagement

      • Best practice: It is recommended to target first review within 24 hours to ensure prompt feedback and smooth throughput.

    • Time to approve: Review process: First review to final approval

      • Best practice: It is recommended to keep review time under 24 hours to maintain velocity and avoid a backlog of PRs.

    • Time to merge: Merge process: Approval to successful merge

      • Best practice: There is not an explicit benchmark for this metric, but note that reducing this time to under an hour boosts code velocity. Using a tool that enforces automated merges can cut down delays.
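
    As an illustration, the following sketch computes the four stages for a single PR from its lifecycle timestamps. The PR structure is a hypothetical shape used for illustration, not a Cortex or version control API object.

```python
# Illustrative only: stage breakdown for one PR from hypothetical timestamps.
from datetime import datetime

pr = {
    "first_commit": datetime(2024, 3, 1, 9, 0),
    "opened":       datetime(2024, 3, 1, 15, 0),
    "first_review": datetime(2024, 3, 2, 10, 0),
    "approved":     datetime(2024, 3, 2, 16, 0),
    "merged":       datetime(2024, 3, 2, 16, 30),
}

def hours(start: str, end: str) -> float:
    return (pr[end] - pr[start]).total_seconds() / 3600

stages = {
    "time_to_open":         hours("first_commit", "opened"),
    "time_to_first_review": hours("opened", "first_review"),
    "time_to_approve":      hours("first_review", "approved"),
    "time_to_merge":        hours("approved", "merged"),
}
cycle_time = hours("first_commit", "merged")  # first commit to merge

# Which stage dominates? Most teams find one stage is 50-60% of the total.
for stage, h in stages.items():
    print(f"{stage}: {h:.1f}h ({h / cycle_time:.0%})")
```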

    Key insight

    Most teams discover that one of these four areas makes up 50-60% of their overall cycle time, providing a clear picture of a focus area to improve overall velocity.

    Cycle time best practices

    • Set SLA targets for each stage (e.g., first review within 1 business day)

    • Implement automated code review assignments to eliminate reviewer uncertainty

    • Use draft PRs for work-in-progress to separate development time from review time

    • Track P95 metrics to identify and address outlier PRs that skew team performance

    Visualization options

    Choose from multiple statistical views (average, sum, max, min, P95, median) to analyze your data. The default average view provides a balanced perspective, while P95 highlights outliers that might indicate systemic issues.

    Cycle time x PR size

    Pull request size directly impacts development velocity. Larger PRs consistently correlate with longer cycle times across every stage of the development process - from initial review to final merge. This visualization helps you understand how PR size affects efficiency throughout your development lifecycle.

    Compare your pull request size to:

    • Time to open

    • Time to first review

    • Time to approve

    • Time to merge

    • Overall cycle time

    Key insight

    Smaller PRs move faster. They're easier to review, less likely to introduce conflicts, and require fewer revision cycles. Teams that maintain smaller, focused PRs typically see 2-3x faster cycle times and higher code quality.

    PR size best practices

    Aim for PRs under 400 lines of code. Break larger features into smaller, logical chunks that can be reviewed and merged independently.

    Visualization options

    Outliers are excluded by default. To include them, toggle on the Show outliers option.

    PR success rate

    PR success rate measures the percentage of pull requests that successfully make it from creation to merge, providing critical insight into development efficiency and process waste. A high success rate indicates focused development work, while a low rate suggests potential issues with planning, scope creep, or unclear requirements.

    Key insights

    A sudden drop in success rate often correlates with rushed feature development, unclear requirements, or experimental work that should happen in separate branches. High-performing teams maintain consistent success rates even during periods of increased PR volume.

    PR success best practices

    • Target a PR success rate above 80% for optimal efficiency

    • Review closed (unmerged) PRs weekly to identify patterns and root causes

    • Implement clearer definition-of-done criteria before starting development

    • Use draft PRs for experimental work to separate exploration from production-ready code

    • Track success rate alongside cycle time - both metrics together reveal process health

    • Consider feature flags for risky changes instead of abandoning PRs

    Visualization options

    Switch between viewing open, closed, or merged PRs against success rate.

    AI Impact Dashboard for Copilot (Public Beta)

    The AI Impact Dashboard for Copilot is available to all cloud customers as a Public Beta feature. You can provide us feedback here.

    The AI Impact Dashboard provides insight into Copilot adoption and engagement across your engineering teams, including visualizations for:

    • Impact by team AI adoption rate

      • Correlate team adoption rate to delivery and reliability metrics. Adoption rates are provided daily by comparing active users against Copilot seats.

    • Impact of Copilot users vs. non-Copilot users

      • Compare delivery and reliability side-by-side between users who leveraged AI tools within the last 7 days, and those who did not. Understand whether recent AI usage affects engineering performance.

    • AI adoption trends

      • View the overall trends for AI adoption across your organization.

    Use these insights to identify issues and drive continuous improvement.

    Using the AI Impact Dashboard

    Prerequisites

    Before getting started:

    • You must have configured the GitHub integration. See instructions below for each integration method:

      • Cortex GitHub app

        • If configured before October 14, 2025, you must re-authorize your Cortex GitHub app to accept two new permissions.

      • Custom GitHub App

        • Add the following permission to your custom GitHub App: admin:org read

      • Personal Access Token

        • Add the following permission to your PAT: admin:org read

    • In Cortex, you must have the View Eng Intelligence permission to view the dashboard, and you must have the Configure Eng Intelligence permission to configure it.

    View the Dashboard

    Navigate to Eng Intelligence > Dashboards to see the full list of Cortex-built and Custom Dashboards available in your workspace. Click the Copilot Dashboard.

    This Dashboard contains multiple charts that you can filter and compare with key engineering metrics:

    Impact by team AI adoption rate

    View the impact that a team's AI adoption rate has on their cycle time, deployment frequency, incident frequency, merged PRs, PR size, time to resolution, and work items completed.

    Hover over the data in the graph to see more information about a team and its metrics:

    Eng Intelligence compares active users against Copilot seats once daily.

    Copilot users vs. Non-Copilot users

    See the impact that using AI, or not using AI, has on cycle time, merged PRs, PR size, time to resolution, and work items completed. Cortex automatically creates an "AI Usage" user label that is attached to any team member who was active in Copilot within the last 7 days.

    Hover over the data in the graph to see more information about the metrics:

    Copilot adoption over time

    View your overall Copilot adoption metrics over time:

    Filter and configure the Dashboard

    Overlay a data point

    To overlay a data point, click the dropdown in the upper left corner of a chart:

    Filter the chart

    Apply filters to further configure your dashboard by:

    • Time range: In the upper right corner of the page, click the time range filter to apply a different time range. By default, the dashboard shows data from the last 30 days.

    • Display: In the upper right corner of the page, click Display and choose whether to view the graphs by day, week, or month.

    • Operation: In the upper right corner of a graph, click Average to open the dropdown menu for operations. You can choose from average, max, median, min, P95, and sum.

    • Other filters: In the upper right corner of a graph, click Filter to apply filters for entity type, entity, group, owner, repository, reviewer, reviewer user label, status, team, user, and user label.

    Driving continuous improvement

    Use the AI Impact Dashboard to track and correlate key AI adoption metrics with engineering performance metrics. These insights allow you to identify issues in your processes so you can take action and drive improvements.

    Example scenario: You view your dashboard and notice that teams who have higher rates of AI adoption have a lower average cycle time (the time it takes for a single PR to go through the entire coding process). However, you also see a spike in incident frequency for those teams.

    • Identify the issue: You conclude that their process is more efficient, but as a tradeoff, they're shipping code that causes more incidents. You brainstorm with the affected engineering teams to learn how their processes have changed since adopting AI. You learn that SonarQube code coverage is lower than it has been previously.

    • Take action: You already have an AI Readiness Scorecard launched, and you see that the rule "Test coverage minimum met" is failing. You can create an Initiative on that Scorecard to ask developers to meet a particular rule by a specified deadline. In this case, you create an Initiative that asks the developers to pass the failing rule (i.e., they need to ensure higher than 80% test coverage) by the end of the month.

      • Initiatives and Scorecard tasks appear as to-do items on a developer's homepage in Cortex.

    • Review the dashboard: As developers work toward meeting the requirements, check back in on the dashboard for a real-time update into their progress.

    Custom Metrics

    Define your own custom time series metrics to power the analytics in your Eng Intelligence dashboard, drawing from your integrations with Cortex or your organization's internal data. In addition to seeing these in Eng Intelligence, you’ll also be able to view these in the entity pages and use them in Scorecards.

    After defining a custom metric, the metric data can be provided via the following methods:

    • API: Post custom metric data to Cortex via the Cortex API (a sketch follows this list).

    • CQL: Compute data based on a CQL query that is evaluated by Cortex every 12 hours.
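
    For example, an API-ingested metric could be updated from a script like the sketch below. The endpoint path, payload fields, and auth scheme shown here are assumptions for illustration only; consult the Cortex API reference for the actual contract.

```python
# Illustrative only: pushing one data point to an API-type custom metric.
# The URL path and payload shape below are hypothetical, not documented API.
import requests
from datetime import datetime, timezone

CORTEX_API = "https://api.getcortexapp.com"  # hypothetical base URL
API_TOKEN = "..."                            # a Cortex API token
METRIC_KEY = "my-custom-metric"              # the key defined in settings
ENTITY_TAG = "my-service"                    # the entity receiving the point

resp = requests.post(
    f"{CORTEX_API}/api/v1/eng-intel/custom-metrics/{METRIC_KEY}/data",  # hypothetical path
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "entityTag": ENTITY_TAG,
        "value": 42.0,  # metric values must be numeric
        # Defaults to the current date/time if omitted; Cortex displays
        # points only up to the end of the previous day.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
)
resp.raise_for_status()
```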


    Custom metrics are currently available for use in the All Metrics (Classic View) only. Support for displaying custom metrics in the new Metrics Explorer and Dashboard experiences is on the roadmap.

    Use cases

    Some examples of custom metrics you might want to surface in Eng Intelligence include:

    • ServiceNow incident data

      • Example CQL: custom("servicenow-incidents").length

    • Custom-computed SLO metrics

    • Homegrown tools that generate metrics through custom data

    • Code coverage or vulnerability metrics from existing integrations

      • Example CQL: codecov.codeCoverage() or sonarqube.metric("coverage")

    If you are adding static or slowly changing metadata to entities, consider adding custom data instead of custom metrics. Learn more about the differences in Adding custom data.

    Managing custom metrics

    Prerequisites

    Before configuring custom metrics, your user must have the following permissions set in Cortex:

    • Configure Eng Intelligence Custom Metrics

      • This permission allows you to create, edit, and delete a custom metric definition. The fields that can be edited are the name, filter, and CQL expression. This permission also includes the ability to publish the custom metric.

    • Manage Eng Intelligence Custom Metric data

      • This permission is only required if you are managing custom metrics via the API. It allows you to hit the public API to add and delete data points for an API Custom Metric.

    Define custom metrics and add metric data

    See the tabs below for instructions on creating custom metrics with CQL or via the API.

    The data retention period for custom metric data is 24 months.

    When custom metric data points are added, Cortex lags results until the end of the previous day.

    1. In Cortex, navigate to the Eng Intelligence custom metrics settings.

    2. In the upper right side of the list of custom metrics, click Add metric.

    3. Fill in the "Add metric" form:

      • Name: Enter the name of the custom metric that will appear in Eng Intelligence.

      • Key: Enter a unique identifier for the custom metric made up of letters, digits, and hyphens, e.g., my-custom-metric.

      • Selection and Entity types: Choose whether to include or exclude specific entity types.

      • Ingestion method: Choose CQL.

      • CQL expression: Enter a CQL expression. The result of the expression must be a number, otherwise it will fail validation.

      • Draft: Toggle this setting on to save the custom metric in draft state. Draft custom metrics are only visible to users with permission to configure custom metrics and do not display in the All Metrics view. Toggle this setting off to immediately enable the metric within All Metrics.

    4. Click Add metric.

    Cortex will evaluate the CQL expression every 12 hours to check for new metric data.

    1. In Cortex, navigate to the Eng Intelligence custom metrics settings.

    2. In the upper right side of the list of custom metrics, click Add metric.

    3. Fill in the "Add metric" form:

      • Name: Enter the name of the custom metric that will appear in Eng Intelligence.

      • Key: Enter a unique identifier for the custom metric made up of letters, digits, and hyphens, e.g., my-custom-metric.

      • Selection and Entity types: Choose whether to include or exclude specific entity types.

      • Ingestion method: Choose API.

      • Publish immediately: Toggle this setting on to make this data immediately visible to all users with access to Eng Intelligence.

    4. Click Save metric.

    After defining the metric, you can post metric data to it via the Cortex API. The API endpoints for adding custom metric data points default to the current day's date and time, but note that Cortex will not display metrics until the end of the previous day.

    Bulk-add metric data via API

    Bulk-add multiple metric points to an entity using the API.

    When using the API, it is possible to backfill custom metric data up to two years.

    Note that bulk creation of metric data via the API is subject to rate limits and cardinality limits.

    Edit custom metric definition

    Note that you cannot update the key or the type, but you can edit the name, entity type filter, CQL expression, and whether the metric is published. If you need to change the key or the type, you will need to archive the current metric and re-create it with a new key.

    If you edit a CQL custom metric definition, the older values of the metric will no longer be accessible.

    To edit a custom metric:

    1. In Cortex, navigate to the Eng Intelligence custom metrics settings.

    2. In the list of metrics, locate the one you want to edit. Click the pen icon on the right side of the metric.

    3. Make any necessary changes, then click Save metric.

    Viewing custom metrics

    View custom metric definitions

    In Cortex, navigate to the Eng Intelligence custom metrics settings.

    View custom metric data

    From the main nav of Cortex, click Eng Intelligence > All Metrics. The custom metrics will appear alongside the other Eng Intelligence key metrics in the table.

    When custom metric data points are added, Cortex lags results until the end of the previous day.

    View custom metric data on an entity page

    While viewing an entity details page, click Custom metrics from the sidebar to view metrics for that entity:

    Customize the appearance of custom metrics

    You can customize your view by reordering the columns or hiding columns.

    1. In Cortex, navigate to the Eng Intelligence appearance settings.

      • Rearrange columns: In the list of metrics, click and drag each tile into your preferred order.

      • Hide a column: Click the trash icon on the right side of a metric tile.

        • To add a column back into the view, select it from the Columns drop-down. In the dropdown, the columns with a checkmark have been added already. The columns without a checkmark have been hidden.

    2. When you are done reordering or hiding columns, click Save changes at the bottom of the page.


    All Metrics (Classic View)

    In addition to the Metrics Explorer and out-of-the-box dashboards, the classic table view of Eng Intelligence is available for reviewing key metrics.

    In the All Metrics page, view metrics pulled in from the Cortex deploys API, version control integrations (Azure DevOps, Bitbucket, GitHub, and GitLab), Jira, and PagerDuty.

    Accessing All Metrics

    To view, click Eng Intelligence > All Metrics in the main nav:

    Review trends in Eng Intelligence and use that knowledge to inform your Scorecards. While viewing All Metrics, in the upper right corner of the page click Create Scorecard. You will be redirected to a configurable Scorecard template that measures performance, activity, and flow metrics that impact productivity.

    Using All Metrics

    The All Metrics view aggregates data from your connected entities to calculate critical metrics based on your organization's priorities. The data is presented by team, group, or individual, and can be filtered by time range. Cortex provides a set of default metrics, but you can also create custom metrics to track here.

    These values are recalculated every hour. For count metrics (e.g., PRs opened), 0 is displayed if no data is available. For average metrics (e.g., average PR open to close time), N/A is displayed if no data is available to calculate averages.

    Apply time range and team filters

    By default, All Metrics displays data from the last 7 days.

    To filter by time range: In the upper right corner of Eng Intelligence, click Last 7 days, then select a new time range for your metrics display:

    To filter by team, group, or owner:

    1. Click Filter in the upper right corner.

    2. Click into Group, Owner, or Team, and select filter options.

    3. Click Apply.

    Group by team hierarchy

    By default, each Team entity in Cortex is displayed in its own dedicated row. To group by the team hierarchies you've created, click View as hierarchy.

    Group by entity type

    By default, All Metrics displays Team data. In the upper left corner, click the Team dropdown to select a different entity type:

    Group by user label

    After you have set up user labels, you can group by labels.

    Click the Group by dropdown and select a label you want to group by. The grouping will be added as a row to the metrics table, along with separate rows for each member of the grouping.

    View more details for an entity

    To better understand the data behind a trend you see, click an entity to open a side panel with more information:

    • Under the Related activity tab, see available metrics and recent activity.

    • Under the Trends tab, see a historical performance graph for each metric.

    In the upper right corner of the panel, you can adjust the time range for the graphs to anywhere between the last 7 days and 6 months. This updates both the graph view and the table, so all metrics will reflect the new timeframe.

    Show Scorecard view

    In the upper right corner, click Display. In this drop-down, you can choose whether to display entities in their associated hierarchies and you can select a Scorecard.

    When you select a Scorecard, Scorecard performance is overlaid in Eng Intelligence when grouped by team or service. This view is not available when grouping by group, user, or owner. The icon representing the Scorecard level achieved by each entity will appear next to the entity name:

    Metrics

    Users with the Configure custom metrics permission can create custom metrics for All Metrics, or you can use the built-in metrics listed below.

    These metrics are pulled from the Cortex deploys API, version control integrations (Azure DevOps, Bitbucket, GitHub, and GitLab), Jira, and PagerDuty.

    Deploy metrics

    Avg deploys/week

    Calculates the average number of deploys per week over the selected time range.

    Pulls deploy data added via the Cortex deploys API.

    Deploy change failure rate

    Displays the number of rollbacks divided by number of deploys in a given time range.

    Pulls deploy data added via the Cortex deploys API.

    Version control metrics

    Avg PR open to close time
    • Calculates the average time to close pull requests for each PR opened and merged during the selected time range.

    • Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.

    This metric provides insight into how long it takes to merge something, such as build time, reviews, conversations, fixing linter issues, etc.

    If your Average PR open to close time is high, it’s worth investigating to identify the parts of the development cycle that contribute the most to this time. Average PR open to close time is related to other metrics, such as time to review and bottlenecks in average PRs reviewed each week. The key here is to examine the time and quantity of a particular activity.

    Note that if some teams are using draft pull requests, their numbers may be higher.

    Avg time to first review
    • Determines average time from first open to first review of a pull request for any PR that has been opened during the selected time range.

    • Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.

    For a subset of pull requests, this metric can provide insight into potential inefficiencies. For high figures, investigate whether this is due to the software process or roadblocks faced by team members.

    Note that if some teams are using draft pull requests, their numbers may be higher.

    Avg time to approval
    • Displays average time from when a pull request was first opened to when it was first approved for any PR opened during the selected time range.

    • Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.

    Average time to approval can capture review-related bottlenecks in the PR cycle. When this figure is high, there may be opportunities to improve processes and PR sizes.

    Note that if some teams are using draft pull requests, their numbers may be higher.

    PRs opened
    • Displays a count of pull requests opened during the selected time range.

    • Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.

    Pull requests opened is particularly useful as a throughput metric. When reviewing this data, consider the expected minimum activity for a developer.

    On an individual level, evaluate how much time a team member spends building features versus supporting others. You can also assess how much time a team is spending shipping code versus other teams.

    Note that while this metric provides useful insight, weekly PRs merged may be a more meaningful figure.

    Weekly PRs merged
    • Calculates how many pull requests were opened and merged each week, averaged across the weeks in the selected time range.

    • Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.

    This throughput metric provides insight into how many things make it to the default branch and are closed out.

    In theory, this figure should match the trend for Average PR open to close time, since you don’t want too many pull requests kept open.

    Avg PRs reviewed/week
    • Calculates the number of pull requests that were reviewed each week, averaged across the selected time frame.

    • Pulls data from Azure DevOps, Bitbucket, GitHub, and GitLab.

    This metric helps users understand bottlenecks in the review stage due to load balancing work, education gaps, onboarding, career progression, and domain mastery.

    Note that this figure has been deduplicated on a per-user basis, so if a user reviews a pull request multiple times, it will only display once within Eng Intelligence.

    You may be spending too much time in the review stage if this figure is high, but you have a low number of commits and a low number of merged pull requests. If this is the case, other parts of the PR lifecycle may be at risk.

    Avg commits per PR
    • Displays the number of commits required from PR open to close, and averaged across all PRs, for any PR opened and merged during the selected time range.

    • Pulls data from Azure DevOps, GitHub, and GitLab.

    This metric provides insight into activity trends by team members, as greater activity indicates more engagement.

    Average commits per PR can be helpful during the onboarding process, so you can gauge how long it takes for a developer to reach the team’s baseline for activity.

    Note that if some teams are using draft pull requests, their numbers may be higher.

    Avg LOC changed per PR
    • Displays the average number of lines added plus lines deleted for pull requests that were opened/merged during the selected timeframe.

    • Pulls data from GitHub and GitLab. This metric is not supported for Azure DevOps.

    This metric can provide information about pull request size. Ideally, developers should open consumable PRs that are easy to review, and thus are easy to push into production.

    This figure can impact other metrics related to the PR cycle.

    Jira metrics

    Issues completed

    The number of issues completed in a given time period.

    Calculation

    Assume that you have a selected time period of 1/1/2024 - 2/1/2024.

    There are 4 Jira tickets with varying resolution dates:

    • Ticket 1: 1/5/2024, Entity 1

    • Ticket 2: 12/1/2023, Entity 1

    • Ticket 3: 1/15/2024, Entity 2

    • Ticket 4: NULL, Entity 2

    Entity 1 has 1 ticket completed during the timeframe (Ticket 1). Entity 2 has 1 ticket completed during the timeframe (Ticket 3).

    Story points completed

    The number of story points completed in a given time period.

    Calculation

    Assume that you have a selected time period of 1/1/2024 - 2/1/2024.

    There are 5 Jira tickets with varying resolution dates and story points:

    • Ticket 1: 1/5/2024, 3 points, Entity 1

    • Ticket 2: 1/17/2024, null (0) points, Entity 1

    • Ticket 3: 12/1/2023, 5 points, Entity 1

      • This ticket does not fall within the selected time period.

    • Ticket 4: 1/15/2024, 8 points, Entity 2

    • Ticket 5: Null, 2 points, Entity 2

      • This ticket does not fall within the selected time period.

    Entity 1 has 3 points. Entity 2 has 8 points.

    Average days to complete

    The average time it takes, in days, to complete an issue in a given time period.

    Calculation

    Assume that you have a selected time period of 1/1/2024 - 2/1/2024.

    There are 5 Jira tickets with varying resolution dates. For each ticket, the day count is based on (Resolved Date) - (Created Date).

    • Ticket 1: 1/5/2024, 3 days, Entity 1

    • Ticket 2: 1/10/2024, 5 days, Entity 1

    • Ticket 3: 12/1/2023, 5 days, Entity 1

    • Ticket 4: 1/15/2024, 8 days, Entity 2

    • Ticket 5: Null, 2 days, Entity 2

    Entity 1: (3 days + 5 days) / 2 = 4 days. Entity 2: 8 days / 1 = 8 days.
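
    The exclusion and averaging logic in the example above can be expressed as a short sketch. The dates and day counts mirror the example; the ticket shape is illustrative, not Jira's schema.

```python
# Illustrative only: tickets with a null resolution date, or resolved outside
# the selected period, are excluded from the average.
from datetime import date

PERIOD = (date(2024, 1, 1), date(2024, 2, 1))
tickets = [
    {"resolved": date(2024, 1, 5),  "days": 3, "entity": "Entity 1"},
    {"resolved": date(2024, 1, 10), "days": 5, "entity": "Entity 1"},
    {"resolved": date(2023, 12, 1), "days": 5, "entity": "Entity 1"},  # outside period
    {"resolved": date(2024, 1, 15), "days": 8, "entity": "Entity 2"},
    {"resolved": None,              "days": 2, "entity": "Entity 2"},  # unresolved
]

for entity in ("Entity 1", "Entity 2"):
    in_period = [
        t["days"] for t in tickets
        if t["entity"] == entity
        and t["resolved"] is not None
        and PERIOD[0] <= t["resolved"] <= PERIOD[1]
    ]
    print(entity, sum(in_period) / len(in_period))  # Entity 1: 4.0, Entity 2: 8.0
```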

    % of sprint completed

    The count of completed tickets in any active sprint as a percentage of the total count of tickets in any active sprint for a given time period.

    Calculation

    Assume that you have selected a time period of 1/1/2024 - 2/1/2024.

    There are 4 sprints with varying start and end dates:

    • Sprint 1: 12/01/2023 to 12/15/2023 (not active)

    • Sprint 2: 12/15/2023 to 1/1/2024 (active)

    • Sprint 3: 1/1/2024 to 1/15/2024 (active)

    • Sprint 4: 2/1/2024 to 2/15/2024 (active)

    There are 5 Jira tickets aligned with varying sprints, with varying resolution dates:

    • Ticket 1: 12/14/2023, Sprint 1, Entity 1

      • Excluded because of inactive sprint

    • Ticket 2: 2/15/2024, Sprint 4, Entity 1

      • Resolution date not within selected timeframe, but in active sprint. This counts toward the total number of tickets.

    • Ticket 3: 1/14/2024, Sprint 3, Entity 1

      • Resolution date is within selected timeframe and sprint is active. This counts as a resolved ticket and toward the total number of tickets.

    • Ticket 4: 12/17/2023, Sprint 2, Entity 2

      • Resolution date occurred before the timeframe, and in active sprint. This counts as a resolved ticket and toward the total number of tickets.

    • Ticket 5: Null resolution date, Sprint 2, Entity 2

      • Resolution date not within selected timeframe, but in active sprint. This counts toward the total number of tickets.

    To calculate the metric, we look at the number of tickets resolved before the end of the sprint AND the end of the evaluation window, divided by the total number of tickets during the selected timeframe: Entity 1: 1 resolved ticket / 2 total = 50%. Entity 2: 1 resolved ticket / 2 total = 50%.
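
    The same walkthrough, expressed as a simplified sketch. It keys only off sprint activity and the end of the evaluation window, mirroring the example above; the full rule also checks that a ticket resolved before its sprint ended.

```python
# Illustrative only: simplified % of sprint completed. A ticket counts toward
# the total if its sprint is active; it counts as resolved if it also resolved
# on or before the end of the evaluation window.
from datetime import date

WINDOW_END = date(2024, 2, 1)
tickets = [
    # (resolved, sprint_active, entity)
    (date(2023, 12, 14), False, "Entity 1"),  # Sprint 1 inactive: excluded
    (date(2024, 2, 15),  True,  "Entity 1"),  # resolves after the window
    (date(2024, 1, 14),  True,  "Entity 1"),  # resolved within the window
    (date(2023, 12, 17), True,  "Entity 2"),  # resolved before the window
    (None,               True,  "Entity 2"),  # unresolved
]

for entity in ("Entity 1", "Entity 2"):
    in_sprint = [t for t in tickets if t[2] == entity and t[1]]
    resolved = [t for t in in_sprint if t[0] is not None and t[0] <= WINDOW_END]
    print(entity, f"{len(resolved) / len(in_sprint):.0%}")  # 50% for both
```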

    Issues completed (custom grouping)

    The number of issues completed in a given time period for a customized grouping of issues.

    The issue grouping is customizable, and can be made up of a combination of label, component, and issue types. The label, component, or issue type you specify for a grouping must also exist in Jira.

    For example, you could configure a “Project A Bugs” grouping in Cortex that maps to a combination of Issue type: Bug and Component: Project A. The “Project A Bugs” grouping would become a column in the table, and Eng Intelligence will display the number of tickets closed for the selected time period matching the configuration of Issue type: Bug and Component: Project A.

    See Configure an issue grouping below for instructions on configuring groupings.

    Calculation

    Assume that you have selected a time period of 1/1/2024 - 2/1/2024.

    There are two groupings configured:

    • Grouping 1: Label: Engineering and Issue type: Bug

    • Grouping 2: Component: Backend

    There are 5 Jira tickets with varying resolution dates:

    • Ticket 1: 1/5/2024, Label: Engineering, Issue type: Story, Entity 1

    • Ticket 2: 1/10/2024, Label: Engineering, Issue type: Bug, Entity 1

    • Ticket 3: 1/20/2024, Component: Backend, Entity 1

    • Ticket 4: 1/15/2024, Component: Backend, Entity 2

    • Ticket 5: NULL, Label: Engineering, Issue type: Bug, Entity 2

    Entity 1:

    • For Grouping 1, there was 1 ticket within the time period (Ticket 2).

    • For Grouping 2, there was 1 ticket within the time period (Ticket 3).

    Entity 2:

    • For Grouping 1, there were 0 tickets within the time period.

    • For Grouping 2, there was 1 ticket within the time period (Ticket 4).

    PagerDuty metrics

    Mean time to resolve incidents
    • Calculates average amount of time from incident open to resolution in the selected time range.

    • Pulls data from PagerDuty.

    Incidents opened
    • Displays sum of incidents opened for that time range; based on the most recently assigned user/team for each incident.

    • Pulls data from PagerDuty.

    Incidents opened/week
    • Displays sum of incidents opened, divided by the number of weeks in the selected time range; based on the most recently assigned user/team for each incident.

    • Pulls data from PagerDuty.

    All Metrics settings

    Change All Metrics appearance

    From the Eng Intelligence tab of Appearance settings, users with the Configure Eng Intelligence permission can also choose which columns to display and adjust the order of columns in the All Metrics view.

    Set filtering for metric calculation

    Under Settings > Eng Intelligence, in the Filters tab, users with the Configure Eng Intelligence permission can set filters for some pre-defined metrics:

    • Under Deploys, select the deploy environments you want to include in the calculation of deploy frequency and deploy failure rate.

      • If none are selected, all deploys will be included.

    • Under Pull requests, select the authors you want to exclude from the calculation of PR-related metrics.

      • If none are selected, PRs from all authors will be included.

      • By default, Cortex filters out pull requests opened by bots in GitHub but does not do this automatically for GitLab.

    Create and manage user labels for grouping

    User labels in Eng Intelligence allow you to group users into cohorts to analyze metrics based on different factors. This can be useful for benchmarking one engineer’s metrics against the average within a cohort, comparing metrics between engineers who use different tools to complete their work, and understanding metrics by different variables: location (e.g., in office or remote), engineer level (staff vs. lead engineer), tech stack (frontend vs. backend), and more.

    Users who have the Configure user labels permission can create and apply labels.

    The instructions below describe how to use this feature in the Cortex UI. See the Cortex API documentation for instructions on creating and managing user labels programmatically.

    Create a user label

    1. In Cortex, navigate to the User labeling settings.

      1. Click your avatar in the lower left then click Settings.

      2. Under Eng Intelligence, click User labeling.

    2. In the upper right corner, click Create label.

    3. Fill out the “Create label” form:

      • Name: Enter a descriptive name, e.g., Location.

      • Description: Optionally enter a description, such as "This label helps us understand metrics by location."

      • Values: Enter possible values for the label, e.g., New York, California, Remote.

    4. Click Create label.

    After saving, the label will appear under the Label management tab in the Eng Intelligence settings page.

    View applied user labels

    In the Eng Intelligence settings page, under the User labeling tab, you can view a list of users and their applied labels. Note that these labels are only displayed in Eng Intelligence, and not in other pages within Cortex.

    Assign a user label to a user

    1. In Cortex, navigate to the User labeling settings.

    2. In the list, locate the user you want to add a label to. Under the "Labels" column for that user, click the pencil icon.

    3. In the side panel, click into the dropdown to select a pre-existing label. To create a new label, type in a name then click +Create in the dropdown.

    4. At the bottom of the side panel, click Set labels.

    Assign user labels in bulk

    1. In Cortex, navigate to the User labeling settings.

    2. Check the boxes next to the users you want to edit. As you check names, a banner will appear at the bottom of the page showing how many users are selected. In that banner, click Edit labels.

    3. In the bulk edit modal, enter the labels you want to add to the users, then click Set labels.

    After applying labels to users, you can group by user label while viewing Eng Intelligence metrics.

    Configuring Groupings for Jira Metrics

    You can add custom groupings to Jira Issues based on labels, issue types, and components. The number of tickets completed for each grouping will be calculated in Eng Intelligence using the custom name you configure for the grouping.

    1. Navigate to the Eng Intelligence settings page and click the Issue tracking tab.

    2. On the right side of the page, click Add issue grouping.

    3. In the modal, configure the issue grouping:

      • Name: Enter a name for the grouping.

      • Type: In the dropdown, select at least one issue type you want to track.

      • Component: Enter the name of the Jira component you want to track.

      • Label: Enter the name of the Jira label you want to track.

    4. Click Add issue grouping.


    Metrics Explorer

    Metrics Explorer enables you to analyze metric trends over time and drill into specific data points for detailed investigation. Use this tool to understand patterns in your development process and identify areas for improvement.

    You can save your favorite Metrics Explorer views as report modules, allowing you to revisit key metrics without needing to reapply filters or display settings. Saved report modules make it easy to monitor key metrics, like Cycle Time for a particular team or over a given time period, on a consistent basis, and they can be used to build Custom Dashboards.

    Access Metrics Explorer

    To view, click Eng Intelligence > Metrics Explorer from the main nav.

    View saved report modules

    On the left side of Metrics Explorer, see a list of all saved metric modules:

    Using Metrics Explorer

    Step 1: Configure a report module

    1. On the page, click the metric name in the upper left corner. By default, Cycle time is displayed.

      • A modal will appear.

    2. On the left side of the modal, select a metric. On the right side, depending on which metric you choose, you can select an operation.

    Next, you can optionally segment the metrics and apply filters before saving the report module.

    Step 2: Segment and filter the metrics

    You can segment the metrics by person, entity, PR, or owner, and you can filter a graph by time range, teams, author, and repository. You can also sort the columns.

    Segment metrics

    Click the Group by dropdown below the graph to choose a different way to segment the metrics.

    The metrics segmented by team are based on the individual users within that team. In order to have data appear, the teams must have members configured.

    Filter by time range

    Click the time range in the upper right corner of the graph. Select a new time range and configure the dates. The graph will automatically reload as you select a time range.

    To change the grouping of the time range in the graph, click Display in the upper right corner. You can choose whether to display the data grouped by day, week, or month.

    Filter by time attribute

    For version control and PR-related metrics, you can filter by approval date, close/merge date, first commit date, first review date, and open date.

    Click into the time attribute filter, to the left of the date range filter:

    Filter by team, author, repository, entity type, label, and more

    1. Click Filter in the upper right corner of the graph. You can configure a single filter or a combination of filters.

    2. When you are done adding filters, click Apply at the bottom of the filter modal.

    Sort the columns

    You can sort the data table below the graph. Click Sort in the upper right corner of the table, then select an option.

    Step 3: Save the report module

    Once you've configured a view you'd like to revisit with a specific metric, filters, and time ranges, you can save it as a report module:

    • While viewing a module, click Save in the upper right corner of the page. Enter a name and description for the module.

    Managing saved report modules

    After saving, your report will appear in the module list in Metrics Explorer, where you can:

    • Add it to a Custom Dashboard

    • Reopen it at any time without reconfiguring filters

    • Rename it, or update its metric and filter settings and re-save as needed

    • Create a duplicate of the module: use "Save As" to copy its settings as a new starting point

    • Share a link with other users

    • Delete it when no longer needed

    All saved modules, and any changes to existing modules, are shared across all of your Eng Intelligence team members to encourage transparency and collaboration on the metrics that matter to your org.

    Share a report module

    After selecting a data point and applying filters, you can share the browser URL with other people who have access to your Cortex workspace. The URL query parameters include timestamps, so the shared Metrics Explorer page will reflect the same results across different timezones.
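
    To illustrate why timestamps make shared links timezone-stable, consider encoding the range as absolute epoch instants rather than relative values like "last 30 days." This is a conceptual sketch only; the parameter names ("startTs", "endTs") are hypothetical, not Cortex's actual URL schema:

```python
# Sketch of why absolute timestamps make a shared link timezone-stable.
# Parameter names ("startTs", "endTs") are hypothetical, not Cortex's schema.
from datetime import datetime, timezone
from urllib.parse import parse_qs, urlencode, urlparse

start = datetime(2024, 5, 1, tzinfo=timezone.utc)
end = datetime(2024, 5, 31, tzinfo=timezone.utc)

# Encode the range as epoch milliseconds: absolute instants, not local dates
query = urlencode({
    "metric": "cycle_time",
    "startTs": int(start.timestamp() * 1000),
    "endTs": int(end.timestamp() * 1000),
})
url = f"https://example.invalid/eng-intelligence/metrics-explorer?{query}"
print(url)

# A recipient in any timezone decodes exactly the same two instants
params = parse_qs(urlparse(url).query)
print(datetime.fromtimestamp(int(params["startTs"][0]) / 1000, tz=timezone.utc))
```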

    Metrics available in the Metrics Explorer

    Note that metrics in Metrics Explorer sync on a scheduled basis, updating every 4 hours.

    Expand the tiles below to learn how each metric is calculated and the best practices for measuring success.

    AI tools

    AI usage metrics are pulled from GitHub Copilot.

    AI tool metrics are currently only available to cloud customers in the beta program.

    AI adoption rate

    The percentage of licensed seats that were active users of AI coding tools in a given time period.

    Calculation: Copilot active users / Copilot total seats.
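
    As a quick illustration of this calculation with made-up numbers:

```python
# Quick illustration of the adoption-rate calculation with made-up numbers.
copilot_active_users = 42  # seats that were active in the selected period
copilot_total_seats = 60   # licensed seats

adoption_rate = copilot_active_users / copilot_total_seats
print(f"AI adoption rate: {adoption_rate:.0%}")  # -> AI adoption rate: 70%
```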

    Active AI users

    The number of users who used AI coding tools in a given time period. If a user was active in the last 7 days, Cortex will automatically attach the user label "AI User." If a user has not used Copilot in the last 30 days, Cortex will automatically attach the user label "Non-AI User."

    Calculation: Copilot active users.

    Deployment metrics

    Deploy metrics are pulled from the Cortex deploys API.

    Change failure rate

    The percentage of deployments that cause a failure in production.

    Calculation: Number of rollbacks / number of deployments created.

    Best practice: Aim to reduce your change failure rate over time. A rate below 15% aligns with DORA's elite benchmarks and indicates strong software delivery performance.
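
    To make the calculation concrete, here is a minimal sketch over a hypothetical list of deploy events. The event shape (a "type" of DEPLOY or ROLLBACK) is an assumption made for this example, not the deploys API schema:

```python
# Illustrative sketch of the change-failure-rate calculation above. The event
# shape is an assumption for this example, not the deploys API schema.
deploy_events = [
    {"type": "DEPLOY"},
    {"type": "DEPLOY"},
    {"type": "ROLLBACK"},
    {"type": "DEPLOY"},
    {"type": "DEPLOY"},
]

deployments_created = sum(1 for e in deploy_events if e["type"] == "DEPLOY")
rollbacks = sum(1 for e in deploy_events if e["type"] == "ROLLBACK")

change_failure_rate = rollbacks / deployments_created
print(f"Change failure rate: {change_failure_rate:.0%}")  # 1 / 4 -> 25%
```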

    Deployment frequency

    The number of deployments over a given period of time.

    Best practice: Depending on your organization, a successful benchmark could be multiple deployments per day or per week.

    Rollback frequency

    The number of rollbacks over a given period of time.

    Best practice: While there isn't an explicit benchmark, you should aim to minimize rollback rates. A low rollback rate generally aligns with a low change failure rate.

    Incident metrics

    Incident metrics are pulled from PagerDuty.

    Incident frequency

    The number of incidents over a given period of time.

    When you drill in to metric points below the graph, view data per incident:

    • Incident title

    • Status

    • Incident URL

    • Date triggered

    • Date resolved

    • Urgency

    • Time to resolution

    Best practice: There is no universal benchmark. It is recommended to track trends and establish baselines within your organization.

    Time to resolution

    The amount of time it takes for an incident to be resolved.

    Calculation: Incident resolution time - incident opened time.
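
    For example, the same subtraction averaged across a set of hypothetical incidents yields a mean time to resolution (MTTR); all timestamps below are made up:

```python
# Illustrative sketch: the resolution-time subtraction above, averaged across
# hypothetical incidents to produce a mean time to resolution (MTTR).
from datetime import datetime, timedelta

incidents = [
    {"opened": datetime(2024, 5, 1, 9, 0), "resolved": datetime(2024, 5, 1, 9, 45)},
    {"opened": datetime(2024, 5, 3, 14, 0), "resolved": datetime(2024, 5, 3, 17, 30)},
]

# Time to resolution per incident: resolved timestamp minus opened timestamp
durations = [i["resolved"] - i["opened"] for i in incidents]
mttr = sum(durations, timedelta()) / len(durations)
print(f"MTTR: {mttr}")  # -> MTTR: 2:07:30
```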

    When you drill in to metric points below the graph, view data per incident:

    • Incident title

    • Status

    • Incident URL

    • Date triggered

    • Date resolved

    • Urgency

    • Time to resolution

    Best practice: These benchmarks may differ depending on how critical a system is. For less critical systems, aim for a measure of less than 1 day. For critical systems, aim for under 1 hour.

    Project management metrics

    Project management metrics are pulled from Jira.

    Story points completed

    The sum of story points completed in a given time period.

    When you drill in to metric points below the graph, view data per work item:

    • External key

    • Work item assignee

    • Assignee email

    • Work item title

    • Work item status

    • Work item type

    • Created date

    • Resolved date

    • Priority

    • Labels

    • Components

    • Story points completed

    Best practice: Establish a baseline per team, as story point values can be unique to each team. Use this metric to understand capacity trends.

    Work item lead time

    The time it takes from when a work item is created to when the work item is completed.

    Calculation: Work item resolved date – Work item created date.
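
    As a sketch, here is the same calculation across a few hypothetical work items (keys and dates are made up); a median often tracks trends more robustly than a mean:

```python
# Illustrative sketch of the lead-time calculation above over hypothetical
# work items. A median often tracks trends more robustly than a mean.
from datetime import date
from statistics import median

items = [
    {"key": "ENG-101", "created": date(2024, 5, 1), "resolved": date(2024, 5, 6)},
    {"key": "ENG-102", "created": date(2024, 5, 2), "resolved": date(2024, 5, 12)},
    {"key": "ENG-103", "created": date(2024, 5, 8), "resolved": date(2024, 5, 9)},
]

lead_times = [(item["resolved"] - item["created"]).days for item in items]
print(f"Median lead time: {median(lead_times)} days")  # -> 5 days
```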

    When you drill in to metric points below the graph, view data per work item:

    • External key

    • Work item assignee

    • Assignee email

    • Work item title

    • Work item status

    • Work item type

    • Created date

    • Resolved date

    • Priority

    • Labels

    • Components

    • Status category

    • Work item lead time

    Best practice: Lower lead times indicate a smoother process. Track trends to identify process inefficiencies and improve throughput.

    Work items completed

    The number of work items completed over a given period of time.

    When you drill in to metric points below the graph, view data per work item:

    • External key

    • Work item assignee

    • Assignee email

    • Work item title

    • Work item status

    • Work item type

    • Created date

    • Resolved date

    • Priority

    • Labels

    • Components

    Best practice: Review this measure alongside how many story points have been completed; this enables you to balance both quantity and effort, ensuring teams aren't favoring lower-value tasks in exchange for a higher number of items completed.

    Work items created

    The number of work items created over a given period of time.

    When you drill in to metric points below the graph, view data per work item:

    • External key

    • Work item assignee

    • Assignee email

    • Work item title

    • Work item status

    • Work item type

    • Created date

    • Resolved date

    • Priority

    • Labels

    • Components

    Best practice: Monitor this metric alongside delivery rates. If items are created faster than they are completed, it signals queue backlogs.

    Version control metrics

    Version control metrics are pulled from Azure DevOps, Bitbucket, GitHub, and GitLab.

    Note that any changes that rewrite Git history (such as a rebase followed by a force push) can impact metric timestamps or calculations.

    Note that pull request–related metrics include PRs from any branch.

    Cycle time

    The time from the first commit on a PR to when the PR is merged. This represents the time it takes for a single PR to go through the entire coding process.

    Note: This metric is not supported for Azure DevOps or Bitbucket.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • First commit date

    • Date closed

    • Cycle time

    Best practice: Aim for lower cycle times to ensure a faster feedback loop and reduced context switching. Rather than benchmarking the overall cycle time, set benchmarks for the individual parts of the cycle (time to open, time to approve, time to first review, time to merge).
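
    Following the best practice above of benchmarking the individual parts of the cycle, here is an illustrative sketch that splits one hypothetical PR's cycle time into the stage metrics defined on this page; all timestamps are made up:

```python
# Illustrative sketch: splitting one hypothetical PR's cycle time into the
# stage metrics defined on this page. All timestamps are made up.
from datetime import datetime

pr = {
    "first_commit": datetime(2024, 5, 1, 9, 0),
    "opened":       datetime(2024, 5, 1, 15, 0),
    "first_review": datetime(2024, 5, 2, 10, 0),
    "approved":     datetime(2024, 5, 2, 11, 30),
    "merged":       datetime(2024, 5, 2, 12, 0),
}

stages = {
    "Time to open":         pr["opened"] - pr["first_commit"],
    "Time to first review": pr["first_review"] - pr["opened"],
    "Time to approve":      pr["approved"] - pr["first_review"],
    "Time to merge":        pr["merged"] - pr["approved"],
}

for name, duration in stages.items():
    print(f"{name}: {duration}")

# Cycle time is the end-to-end span: first commit through merge
print("Cycle time:", pr["merged"] - pr["first_commit"])
```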

    Closed PRs

    The number of PRs closed in a given time period.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • Date closed

    Best practice: A high ratio of merged-to-closed PRs signals an effective review cycle.

    Merged PRs

    The number of PRs merged in a given time period.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • Date closed

    Best practice: A high ratio of merged-to-closed PRs signals an effective review cycle.

    Number of comments per PR

    The number of comments on a pull request.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • First commit date

    • Number of comments per PR

    Best practice: This measure indicates review depth and collaboration. A lower number may signal superficial reviews.

    Number of unique PR authors

    The number of unique PR authors in a given time period.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • First commit date

    • Date opened

    • Time to open

      • This is the time between the first commit date and the date opened.

    Best practice: A larger number across projects can signal distributed ownership, while a consistently low number can point to bottlenecks or team burnout.

    Open PRs

    The number of PRs opened in a given time period.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • Date closed

    Best practice: Persistent backlog can indicate process inefficiencies, such as a slow review process.

    PR reviews count

    The number of reviews on a PR.

    When you drill in to metric points below the graph, view more data:

    • PR name

    • Reviewer

    • Review date

    Best practice: A higher number can indicate complex changes or low initial quality. A lower number could indicate approvals without thorough review and validation.

    PR size

    The number of lines of code modified in a PR.

    Note: This metric is not supported for Azure DevOps or Bitbucket.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • Number of lines added

    • Number of lines deleted

    • PR size

    Best practice: Smaller PRs lead to faster reviews, fewer mistakes, and increased velocity. Aim for less than 400 lines, but adjust this benchmark as needed to improve review quality and velocity.

    Success rate

    The percentage of PRs that are opened and eventually merged in a given time frame.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • Date opened

    • Date closed

    Best practice: Higher success rates can indicate better quality code and reviews, but note that it is also important to understand the reasoning when a PR is rejected.

    Time to approve

    The time from the first review on a PR to when it is approved. This represents how long engineers spend reviewing code. If the first review is an approval, this time will be 0, as the timestamps will be the same.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • Review date

    • Approval date

    • Time to approve

      • This is the time between the review date and the approval date.

    Best practice: It is recommended to keep review time under 24 hours to maintain velocity and avoid a backlog of PRs.

    Time to first review

    The time from when a PR is opened to when it gets its first review (comment or approval). This represents how long a PR waits idle before someone starts reviewing it.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • Date opened

    • First review time

    • Time to first review

      • This is the time between the open date and the first review time.

    Best practice: It is recommended to target a first review within 24 hours to ensure prompt feedback and smooth throughput.

    Time to merge

    The time from when the PR is approved to when it’s merged.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • Approval date

    • Date closed

    • Time to merge

      • This is the time between the approval date and the date closed.

    Best practice: There is not an explicit benchmark for this metric, but note that reducing this time to under an hour boosts code velocity. Using a tool that enforces automated merges can cut down delays.

    Time to open

    The time it takes from the first commit on a PR until the PR is opened. This represents the time spent coding.

    Note: This metric is not supported for Azure DevOps or Bitbucket.

    When you drill in to metric points below the graph, view data per PR:

    • PR name

    • Author

    • PR status

    • First commit date

    • Date opened

    • Time to open

      • This is the time between the first commit date and the date opened.

    Best practice: There is not an explicit benchmark for this metric, but reducing the time to open depends on efficient triage of work; focus on minimizing idle time before work starts.

    Metrics Explorer reference

    Metric filter definitions

    The available filters differ based on the category of the metric you're viewing.

    Expand the tiles below to learn about each of the filters available per metric category.

    Version control metric filters
    • Entity type: View data only for specific types of entities (such as services or teams).

    • Entity: View data only for a specific entity.

    • Group: View data only for entities associated with a specific group.

    • Individual owner: View data for PRs on entities owned by a specific individual.

      • Example: Service X is owned by Pat. Filtering by Pat will show you PRs on Service X even if the PRs were created by other users.

    • Label: View data only for PRs tagged with a specific label.

    • Repository: View data only for a specific repository.

    • Reviewer: View data for PRs that a specified person reviewed.

    • Reviewer team: View data for PRs that members of a specified team reviewed.

    • Reviewer user label: View data for PRs that users with a specified user label reviewed.

    • Status: View data for PRs in specified statuses.

    • Team: View data for PRs authored by members of a specified team.

    • Team owner: View data for PRs on entities owned by a specified team. Note that this does not include PRs on entities owned by individual members of the team.

      • Example: Service X is owned by Team A. Filtering by Team A will show you PRs on Service X even if the PRs were created by another team.

    • User: View data for PRs authored by a specified user.

    • User label: View data for PRs authored by users with a specific user label.

      • Note that Cortex automatically creates an "AI Usage" label for GitHub Copilot usage, which includes values "AI User (last 7 days)" and "Non-AI User (last 7 days)."

    • User label name: View data for PRs authored by users with a specific user label value.

    Deployment metric filters
    • Entity type: View data only for specific types of entities (such as services or teams).

    • Entity: View data only for a specific entity.

    • Environment: View data only related to a specific environment sent via the Cortex deploys API.

    • Group: View data only for entities associated with a specific group.

    • Individual owner: View data for deploys on entities owned by a specific individual.

      • Example: Service X is owned by Pat. Filtering by Pat will show you deploys on Service X even if the deploys were performed by other users.

    • Team: View data for deploys performed by members of a specified team.

    • Team owner: View data for deploys on entities owned by a specified team. Note that this does not include deploys on entities owned by individual members of the team.

      • Example: Service X is owned by Team A. Filtering by Team A will show you deploys on Service X even if the deploys were performed by another team.

    • User: View data for deploys performed by a specified user.

    • User label: View data for deploys performed by users with a specific user label.

    • User label name: View data for deploys performed by users with a specific user label value.

    AI tool metric filters
    • Entity type: View data only for specific types of entities (such as services or teams).

    • Team: View data for AI usage by a specified team.

    • User: View data for AI usage by a specified user.

    • User label: View data for AI usage by users with a specific user label.

      • Note that Cortex automatically creates an "AI Usage" label for GitHub Copilot usage, which includes values "AI User (last 7 days)" and "Non-AI User (last 7 days)."

    • User label name: View data for AI usage by users with a specific user label value.

    Project management metric filters

    If you have configured a unique defaultJQL per entity, it is not supported when filtering or segmenting data in Metrics Explorer.

    • Component: View data only for work items assigned to a specific component.

    • Entity type: View data only for specific types of entities (such as services or teams).

    • Entity: View data only for a specific entity.

    • Group: View data only for entities associated with a specific group.

    • Individual owner: View data for work items on entities owned by a specific user.

      • Example: Service X is owned by Pat. Filtering by Pat will show you work items on Service X even if the work items were created by other users.

    • Label: View data only for work items tagged with a specific label.

    • Project: View data only for work items belonging to a specific project.

    • Sprint: View data only for work items assigned to a specific sprint.

    • Status: View data for work items in specified statuses.

    • Team: View data for work items assigned to members of a specified team.

    • Team owner: View data for work items on entities owned by a specific team. Note that this does not include work items on entities owned by individual members of the team.

      • Example: Service X is owned by Team A. Filtering by Team A will show you work items on Service X even if the work items were created by another team.

    • User: View data for work items assigned to a specified user.

    • User label: View data for work items assigned to users with a specific user label.

    • User label name: View data for work items assigned to users with a specific user label value.

    • Work item type: View data for work items of a specific type.

    Incident metric filters
    • Entity type: View data only for specific types of entities (such as services or teams).

    • Entity: View data only for a specific entity.

    • Group: View data only for entities associated with a specific group.

    • Individual owner: View data for incidents on entities owned by a specific user.

      • Example: Service X is owned by Pat. Filtering by Pat will show you incidents on Service X even if the incidents were created by other users.

    • Team: View data for incidents assigned to members of a specified team.

    • Team owner: View data for incidents on entities owned by a specific team. Note that this does not include incidents on entities owned by individual members of the team.

      • Example: Service X is owned by Team A. Filtering by Team A will show you incidents on Service X even if the incidents were created by another team.

    • User: View data for incidents assigned to a specified user.

    • User label: View data for incidents assigned to users with a specific user label.

    • User label name: View data for incidents assigned to users with a specific user label value.

    Segment definitions

    "Group by" segments

    The segments available differ depending on which category of metric you're viewing.

    Version Control

    • Person

      • Author team: The team of the author of the PR.

      • Reviewer team: The team of the reviewer of the PR.

      • Author: The individual author of the PR.

      • Reviewer: The individual reviewer of the PR.

      • Author user label: The user label associated with the PR author.

      • Reviewer user label: The user label associated with the PR reviewer.

    • Entity

      • Group: The group associated with the entity.

      • Entity: Data is segmented by individual entities.

    • Pull request

      • Repository: The repository associated with the PR.

      • Status: The status of the PR.

      • Label: The label associated with the PR.

    • Owner

      • Team owner: Team owners of the entity associated with the PR.

      • Individual owner: Individual owners of the entity associated with the PR.

    Deployments

    • Person

      • Team: The team of the user who performed the deployment.

      • Deployer: The person who performed the deployment.

      • User label: The user label associated with the user who performed the deployment.

    • Entity

      • Group: The group associated with the entity.

      • Entity: Data is segmented by individual entities.

    • Deployment

      • Environment: The environment sent via the Cortex deploys API.

    • Owner

      • Team owner: Team owner of the entity associated with the deployment.

      • Individual owner: Individual owner of the entity associated with the deployment.

    Incidents

    • Person

      • Team: The team of the user assigned to an incident.

      • Incident assignee: The individual assigned to an incident.

      • User label: The user label associated with the user assigned to an incident.

    • Entity

      • Group: The group associated with the entity.

      • Entity: Data is segmented by individual entities.

    • Owner

      • Team owner: Team owner of the entity associated with the incident.

      • Individual owner: Individual owner of the entity associated with the incident.

    Project management (work items)

    If you have configured a unique defaultJQL per entity, it is not supported when filtering or segmenting data in Metrics Explorer.

    • Person:

      • Assignee team: The team of the user assigned to the work item.

      • Assignee: The individual assigned to the work item.

      • Assignee user label: The user label associated with the work item assignee.

    • Entity

      • Group: The group associated with the entity.

      • Entity: Data is segmented by individual entities.

    • Project management

      • Project: The project associated with the work item.

      • Sprint: The sprint associated with the work item.

    • Owner

      • Team owner: Team owners of entities associated with the work item.

      • Individual owner: Individual owners of entities associated with the work item.

    AI tools

    • Person:

      • Team: The teams of the user.

      • User: The individual user.

      • User label: The user label associated with the user.
