
Scorecards allow your team to define standards like production readiness and development quality, and enforce them without building scripts and maintaining spreadsheets.


Scorecards can be managed through the UI by default; see Help Desk | Creating and Editing Scorecards for a detailed walkthrough.


If you would like to manage Scorecards through your GitOps workflow, you can disable UI editing under Settings | Workspace | Scorecards | UI Editor.

In general, the best place to put Scorecards is in their own repository, separate from catalog entities, under .cortex/scorecards at the repository root. Putting Scorecard definitions in a service repository is not recommended, because Scorecards are not meant to be 1:1 with catalog entities. For example, a simple repository might have the structure:

├── .cortex
│   └── scorecards
│       ├── dora.yml
│       └── performance.yml
└── src
    ├── index.js
    └── ...

Any file found within the .cortex/scorecards directory will be automatically picked up and parsed as a Scorecard.
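As a sketch, the smallest valid Scorecard only needs a name, a tag, and at least one rule (the ladder, filter, and other fields described below are optional). The filename, tag, and 30-day lookback here are illustrative, adapted from the expressions used later in this page:

```yaml
# .cortex/scorecards/freshness.yml — hypothetical minimal Scorecard
name: Repo Freshness
tag: repo-freshness   # unique slug: alphanumeric characters and dashes only
rules:
  - title: Last commit was within 30 days
    expression: git.lastCommit().freshness <= duration("P30D")
    weight: 100
    failureMessage: No commits in the last 30 days
```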


The dora-metrics-scorecard.yaml descriptor file might look something like this:

name: DORA Metrics
tag: dora-metrics
description: >-
  DORA metrics are used by DevOps teams to measure their performance.

  The 4 key metrics are Lead Time for Changes, Deployment Frequency, Mean Time to Recovery, and Change Failure Rate.
draft: false
notifications:
  enabled: true
exemptions:
  enabled: true
  autoApprove: false
ladder:
  levels:
    - name: Bronze
      rank: 1
      description: Pretty good
      color: "#c38b5f"
    - name: Silver
      rank: 2
      description: Very good
      color: "#8c9298"
    - name: Gold
      rank: 3
      description: Excellent
      color: "#cda400"
rules:
  - title: Ratio of rollbacks to deploys in the last 7 days
    expression: >+
      (deploys(lookback=duration("P7D"),types=["ROLLBACK"]).length /
      deploys(lookback=duration("P7D"),types=["DEPLOY"]).length) > 0.05
    identifier: 3c42fa96-b422-30a4-b75a-f8b1cc233408
    description: Change Failure Rate
    weight: 25
    failureMessage: Less than 95% of deployments in the last 7 days were successful
    level: Gold
  - title: Incident was ack'ed within 5 minutes
    expression: oncall.analysis(lookback = duration("P7D")).meanSecondsToFirstAck <= 300
    identifier: 8713f2c0-f161-3688-9f99-bcfaab476b63
    description: MTTA (Mean time to acknowledge)
    weight: 25
    failureMessage: Incidents in the last 7 days were not ack'ed
    level: Silver
  - title: Last commit was within 24 hours
    expression: git.lastCommit().freshness <= duration("P1D")
    identifier: efbd8c51-7643-33cc-8fe7-1b46b2765dc9
    description: Lead Time for Changes
    weight: 25
    failureMessage: No commits in the last 24 hours
    level: Bronze
  - title: Averaging at least one deploy a day in the last 7 days
    expression: deploys(lookback=duration("P7D"),types=["DEPLOY"]).length >= 7
    identifier: a16b7eeb-545b-359e-81a7-3946baacdd4b
    description: Deployment Frequency
    weight: 25
    failureMessage: No deployments in the last 7 days
filter:
  kind: GENERIC
  types:
    include:
      - service
  groups:
    include:
      - production
evaluation:
  window: 4


Scorecard: {
  name: String,
  tag: String,
  description: String?,
  draft: Boolean?,
  notifications: Notifications?,
  exemptions: Exemptions?,
  ladder: Ladder?,
  rules: List<Rule>,
  filter: Filter?,
  evaluation: Evaluation?
}

- name: The human-readable name of the Scorecard
- tag: A unique slug for the Scorecard consisting of only alphanumeric characters and dashes
- description: A human-readable description of the Scorecard
- draft: Whether or not the Scorecard is a draft
- notifications: Notification settings for the Scorecard
- exemptions: Exemption settings for the Scorecard
- ladder: The ladder to apply to the rules
- rules: A list of rules that are evaluated each time the Scorecard is evaluated
- filter: Enables the ability to exclude entities from being evaluated by this Scorecard
- evaluation: Enables the ability to change the evaluation window for this Scorecard
Notifications: {
  enabled: Boolean
}

- enabled: Whether or not to include the Scorecard in notifications
Exemptions: {
  enabled: Boolean?,
  autoApprove: Boolean?
}

- enabled: Whether or not rule exemptions are enabled for the Scorecard
- autoApprove: Whether or not rule exemptions are auto-approved for the Scorecard
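In a descriptor, these fields map directly onto an exemptions block, as in the full example above:

```yaml
exemptions:
  enabled: true       # entity owners may request rule exemptions
  autoApprove: false  # exemption requests still require manual approval
```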
Ladder: {
  levels: List<Level>
}

- levels: The levels of the ladder
Level: {
  name: String,
  rank: Int,
  description: String?,
  color: String
}

- name: The human-readable name of the level
- rank: The rank of the level within the ladder; a higher rank is better
- description: A human-readable description of the level
- color: The hex color of the badge that is displayed with the level
Rule: {
  title: String?,
  expression: String,
  identifier: String?,
  description: String?,
  weight: Int,
  failureMessage: String?,
  level: String?,
  filter: Filter?,
  effectiveFrom: Date?
}

- title: The human-readable name of the rule
- expression: The CQL expression to evaluate; must evaluate to a boolean
- identifier: Identifier of the rule, unique within the Scorecard's scope
- description: A human-readable description of the rule
- weight: The number of points this rule provides when successful
- failureMessage: A human-readable message that is presented when the rule is failing
- level: The name of the level this rule is associated with; can be null even when a ladder is present
- filter: Enables the ability to exclude entities from being evaluated for this rule
- effectiveFrom: Date when the rule starts being evaluated (e.g. 2024-01-01T00:00:00Z)
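As a sketch of the optional fields, a rule that only starts counting from a given date and only applies to a subset of entities can combine effectiveFrom with a rule-level filter. The group name below is hypothetical, and the expression is adapted from the example above:

```yaml
rules:
  - title: Last commit was within 30 days
    expression: git.lastCommit().freshness <= duration("P30D")
    weight: 50
    failureMessage: No commits in the last 30 days
    effectiveFrom: 2024-01-01T00:00:00Z   # the rule is not evaluated before this date
    filter:
      kind: GENERIC
      groups:
        include:
          - backend   # hypothetical group name
```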
Filter: {
  kind: String,
  types: TypesFilter?,
  groups: GroupsFilter?,
  query: String?
}

Note: One of types, groups, or query must be present for the filter to be considered valid.

- kind: The kind of filter to create; currently only "GENERIC" is supported
- types: Types filter (to include / exclude specific types)
- groups: Groups filter (to include / exclude specific groups)
- query: A CQL query; only entities matching this query will be evaluated by the Scorecard
TypesFilter: {
  include: List<String>?,
  exclude: List<String>?
}

Note: Only one of include/exclude can be specified at a time.

- include: List of types to include in the set of entities
- exclude: List of types to exclude from the set of entities
GroupsFilter: {
  include: List<String>?,
  exclude: List<String>?
}

- include: List of groups to include in the set of entities
- exclude: List of groups to exclude from the set of entities
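Putting these together, a Scorecard-level filter can skip a group entirely; the group name below is hypothetical:

```yaml
filter:
  kind: GENERIC
  groups:
    exclude:
      - sandbox   # hypothetical group whose entities are skipped
```

Alternatively, setting query to a CQL expression (for example, the freshness expression used earlier on this page) restricts evaluation to entities matching that query. Remember that at most one of include/exclude may appear per types or groups filter, and at least one of types, groups, or query must be present.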
Evaluation: {
  window: Int?
}

- window: By default, Scorecards are evaluated every 4 hours. If you would like to evaluate Scorecards less frequently, you can override the evaluation window; this can help with rate limits. Note that Scorecards cannot be evaluated more than once every 4 hours.
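For example, assuming the window is expressed in hours (consistent with the window: 4 in the descriptor above), evaluating a Scorecard only once a day would look like:

```yaml
evaluation:
  window: 24   # evaluate at most once every 24 hours (the default is 4)
```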