Scorecard examples
The Scorecard use cases and examples in this section are based on engineering teams across a wide spectrum of sizes and maturity levels. Learn how to set goals and what common use cases look like.
See additional detailed examples in the following sub-pages:
Learn about Scorecard use cases in Cortex Academy
Learn more about leveraging Cortex to achieve operational readiness in the Cortex Academy course, "Cortex Solutions: Production Readiness"
Learn more about planning and managing migrations using Cortex in the Cortex Academy course, "Cortex Solutions: Migrations and Modernizations"
Setting goals in Scorecards
Start with aspirational goals
Scorecards are often aspirational. For example, an SRE team may define a Production Readiness Scorecard with 20+ criteria that they think their services should meet to be considered "ready" for SRE support.
The engineering team may not be resourced to actually meet those goals, but setting objective targets helps drive organization-wide cultural shifts and sets a baseline for conversations around tech debt, infrastructure investment, and service quality.
You can add as many levels as you want to a Scorecard. You can also add as many rules to each level as makes sense for the Scorecard, but keep in mind that an entity must pass all rules in a given level in order to progress to the next one.
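The "pass every rule in a level before progressing" behavior can be sketched as a small evaluation loop. This is an illustrative model only, assuming each level is a named list of pass/fail rule functions; the names (`Level`, `highest_level_achieved`) are not part of any Cortex API.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

# A rule is any predicate over an entity's attributes.
Rule = Callable[[Any], bool]

@dataclass
class Level:
    name: str
    rules: list[Rule]

def highest_level_achieved(entity: Any, levels: list[Level]) -> Optional[str]:
    """Return the name of the highest level whose rules ALL pass.

    An entity must pass every rule in a level to progress to the next
    one, so evaluation stops at the first level with a failing rule.
    """
    achieved = None
    for level in levels:
        if all(rule(entity) for rule in level.rules):
            achieved = level.name
        else:
            break  # progression stops here; later levels are not reached
    return achieved
```

For example, an entity with a README but only 40% code coverage would achieve a hypothetical "Bronze" level (README required) but not "Silver" (80% coverage required), even if it passed every rule in a later level.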
Motivate developers with early wins
When creating rules, we recommend starting with smaller, achievable goals within the Scorecard's first level. Early wins help teams build confidence and momentum. Introducing too many complex rules upfront can feel overwhelming and discouraging.
Focusing on clear, incremental improvements will create a culture of steady progress. Teams will see tangible improvements in reliability, documentation, or readiness without needing a massive investment of time. Once the basic benchmarks are in place and widely adopted, you can expand to more advanced rules, ensuring the process feels supportive and achievable rather than punitive.
Remediate failed rules
Sometimes, an entity or team will fail a rule in a Scorecard. This gives you the opportunity to identify issues and take action on them, leading to incremental improvements over time. Learn more about evaluating Scorecards and remediating failed rules in Review and evaluate Scorecards.
Common Scorecard use cases and example rules
Cortex users commonly define Scorecards across several categories:
Development Maturity: Ensure services and resources conform to basic development best practices, such as meeting code coverage targets, checking in lockfiles, and maintaining READMEs, pinned package versions, and ownership information.
Operational Readiness: Determine whether services and resources are ready to be deployed to production, checking for runbooks, dashboards, logs, on-call escalation policies, monitoring/alerting, and accountable owners.
Operational Maturity: Monitor whether services are meeting SLOs, on-call metrics look healthy, and post-mortem tickets are closed promptly, and gauge whether there are too many customer-facing incidents.
Security: Mitigate security vulnerabilities, achieve security compliance across services, and measure code coverage.
Best Practices: Define organization-wide best practices across areas such as infrastructure and platform, SRE, and security. For example, a Scorecard might help you ensure the correct platform library version is being used.
Best practices are unique to every organization and every application, so make sure to work across teams to develop a Scorecard measuring your organization's standards.
The following example uses JavaScript best practices:
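As a hedged sketch of what such rules might check, the functions below encode a few common JavaScript best practices as plain predicates over a repository's file listing and manifest. The specific rule names and checks are illustrative assumptions, not a Cortex-defined schema.

```python
import json

def has_lockfile(files: set) -> bool:
    """A lockfile should be checked in for reproducible installs."""
    return bool({"package-lock.json", "yarn.lock", "pnpm-lock.yaml"} & files)

def has_readme(files: set) -> bool:
    """Every service repository should carry a README."""
    return "README.md" in files

def pins_node_version(package_json: str) -> bool:
    """package.json should pin a Node.js version via the `engines` field."""
    manifest = json.loads(package_json)
    return "node" in manifest.get("engines", {})
```

Rules like these could map onto a Scorecard's first level, giving teams the kind of early wins described above before stricter checks are layered on.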