Scorecard examples
The Scorecard use cases and examples on this page are based on engineering teams across a wide spectrum of sizes and maturity levels.
Learn about Scorecard use cases in Cortex Academy
Learn more about leveraging Cortex to achieve operational readiness in the Cortex Academy course, "Cortex Solutions: Production Readiness"
Learn more about planning and managing migrations using Cortex in the Cortex Academy course, "Cortex Solutions: Migrations and Modernizations"
Starting with aspirational goals
Scorecards are often aspirational. For example, an SRE team may define a Production Readiness Scorecard with 20+ criteria that they think their services should meet to be considered "ready" for SRE support.
The engineering team may not be resourced to actually meet those goals, but setting objective targets helps drive organization-wide cultural shifts and sets a baseline for conversations around tech debt, infrastructure investment, and service quality.
DORA metrics example
See a DORA Metrics Scorecard as an example:

Assume the following for an entity:
Its last commit was within 24 hours
There were zero rollbacks in the last 7 days
The ratios of incidents and rollbacks to deploys in the last 7 days are both zero
It hasn't averaged at least one deploy per day in the last week
In this case, the entity would have No Level.
It's passing all of the rules in the Bronze level, but failing a rule in the Steel level, the first level an entity can achieve. Because levels build on one another, an entity cannot achieve a higher level while failing a rule in a lower one, so it has no level at all.
Once the entity averages at least one deploy per day over the last week, it will automatically achieve the Bronze level, since it's already passing the other four rules.
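The five rules in this example could be written as CQL expressions along the following lines. These are illustrative sketches, not the actual rule definitions: the expression names (deploys, incidents, git.lastCommit) and parameters are hypothetical, and the real syntax depends on the integrations configured in your workspace.

```
// Steel level (hypothetical expression names, for illustration only)
deploys(lookback=duration("P7D")).count >= 7        // at least one deploy per day, on average

// Bronze level
git.lastCommit.freshness <= duration("P1D")         // last commit within 24 hours
deploys(types=["ROLLBACK"], lookback=duration("P7D")).count == 0   // zero rollbacks in the last 7 days
incidents(lookback=duration("P7D")).count == 0      // incidents-to-deploys ratio is zero
// rollbacks-to-deploys ratio is zero (follows from the zero-rollbacks rule above)
```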
This kind of gamification motivates developers to not only progress through the levels, but to maintain the quality of their entities over time.
We recommend making each level of the Scorecard achievable, even if challenging, to keep developers motivated.
You can add as many levels as you want to a Scorecard. You can also add as many rules to each level as makes sense for the Scorecard, but keep in mind that an entity must pass all rules in a given level in order to progress to the next one.
Using CQL captures to display the cause of rule failures
CQL captures let you extract specific values from entity data when a rule fails, making it easier for engineers to understand what went wrong. A typical use of captures is to surface key quality metrics, such as code coverage, or detailed data, such as vulnerabilities, in Scorecard rule failure messages so engineers can quickly see why a rule is failing for an entity.
Example 1: Capturing code coverage from SonarQube
You can use captures to surface quality metrics such as code coverage from third-party integrations like SonarQube.
Decide what data to show in the failure message. In this example, we want to display the code coverage metric reported by SonarQube.
While configuring a Scorecard, add a rule using a CQL capture. Use a CQL expression to capture the code coverage metric and apply a threshold:
captures("code-cov", sonarqube.metric("coverage")) > 50
Customize the rule’s Failure message field to include the captured value. This message will appear when the rule fails (i.e., when coverage is 50% or lower):
This entity's code coverage metric from SonarQube is:
{{context.evaluation.captures.code-cov}}%
After evaluation, view the Scorecard details. Navigate to the entity that failed the rule. Expand the failure message to view the captured code coverage value.

Example 2: Surfacing security vulnerability information from custom data
You can configure custom data to track any information you want to surface, or you can choose to track information pulled in from third-party integrations.
Determine what vulnerability data to display.
In this case, custom data is configured to include details like alert name, score, severity, and detection date.
Example custom data under the key security-data:
{
  "alerts": [
    {
      "vulnName": "CV-2844",
      "alertName": "CVA-2844",
      "vulnScore": 5.2,
      "alertStatus": "ACTIVE",
      "productName": "AssetManager",
      "vulnSeverity": "MEDIUM",
      "alertDetected": "2025-05-08T10:49:07Z"
    }
  ]
}
While configuring a Scorecard, add a rule that uses captures to pull in the data you want to make more visible:
captures("security", custom('security-data')).get("alerts").length == 0
In the rule's Failure message field, configure captures to pull in the relevant information. The following example captures vulnerability information from the custom security data into a table:
# Your entity is failing because of an unresolved vulnerability
## Table of data
| Product Name | Alert Name | Score | Severity | Date Detected |
| :---: | :---: | :---: | :---: | :---: |
{{#context.evaluation.captures.security.alerts}}
| {{productName}} | {{alertName}} | {{vulnScore}} | {{vulnSeverity}} | {{alertDetected}} |
{{/context.evaluation.captures.security.alerts}}
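With the sample custom data above, the template iterates over the single entry in the alerts array, so the rendered failure message would look like:

```markdown
# Your entity is failing because of an unresolved vulnerability
## Table of data
| Product Name | Alert Name | Score | Severity | Date Detected |
| :---: | :---: | :---: | :---: | :---: |
| AssetManager | CVA-2844 | 5.2 | MEDIUM | 2025-05-08T10:49:07Z |
```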
After evaluation, inspect the failing rule in the Scorecard. Click into the affected entity and expand the rule to see the vulnerability details in a structured format.

Common Scorecard use cases and example rules
Cortex users commonly define Scorecards across several categories:
Development Maturity: Ensure services and resources conform to basic development best practices, such as established code coverage, checking in lockfiles, READMEs, package versions, and ownership.
Operational Readiness: Determine whether services and resources are ready to be deployed to production, checking for runbooks, dashboards, logs, on-call escalation policies, monitoring/alerting, and accountable owners.
Operational Maturity: Monitor whether services are meeting SLOs, on-call metrics look healthy, and post-mortem tickets are closed promptly, and gauge whether there are too many customer-facing incidents.
Security: Mitigate security vulnerabilities, achieve security compliance across services, and measure code coverage.
Best Practices: Define organization-wide best practices, such as infrastructure + platform, SRE, and security. For example, the Scorecard might help you ensure the correct platform library version is being used.
Best practices are unique to every organization and every application, so make sure to work across teams to develop a Scorecard measuring your organization's standards.
The following example uses JavaScript best practices:
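For instance, a JavaScript best-practices Scorecard might include rules like these. The expressions below are hypothetical sketches (git.fileExists and ownership.allOwners are assumed helper names, not confirmed CQL); adapt them to the data sources available in your workspace.

```
// A dependency lockfile is checked in (hypothetical git.fileExists helper)
git.fileExists("package-lock.json") or git.fileExists("yarn.lock")

// A README exists at the repository root
git.fileExists("README.md")

// The service has at least one accountable owner
ownership.allOwners().length > 0
```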