Scorecards
Scorecards allow your team to define standards like production readiness and development quality, and enforce them without building scripts and maintaining spreadsheets.
UI
Scorecards can be managed through the UI by default; see Help Desk | Creating and Editing Scorecards for a detailed walkthrough.
GitOps
If you would like to manage Scorecards through your GitOps workflow, you can disable UI editing through Settings | App Preferences | Enable Scorecard UI Editor.
In general, the best place to put Scorecards is in their own repository, separate from catalog entities, within a .cortex/scorecards directory at the repository root. Note that it is not recommended to put Scorecard definitions in a service repository, as Scorecards are not meant to be 1:1 with catalog entities. For example, a simple repository might have the structure:
.
├── .cortex
│   └── scorecards
│       ├── dora.yml
│       └── performance.yml
└── src
    ├── index.js
    └── ...
Any file found within the .cortex/scorecards directory will be automatically picked up and parsed as a Scorecard.
Descriptor
The dora.yml descriptor file might look something like this:
---
name: DORA Metrics
tag: dora-metrics
description: >-
  [DORA metrics](https://www.cortex.io/post/understanding-dora-metrics) are used by DevOps teams to measure their performance.
  The 4 key metrics are Lead Time for Changes, Deployment Frequency, Mean Time to Recovery, and Change Failure Rate.
draft: false
ladder:
  levels:
    - name: Bronze
      rank: 1
      description: Pretty good
      color: "#c38b5f"
    - name: Silver
      rank: 2
      description: Very good
      color: "#8c9298"
    - name: Gold
      rank: 3
      description: Excellent
      color: "#cda400"
rules:
  - title: Ratio of rollbacks to deploys in the last 7 days
    expression: >+
      (deploys(lookback=duration("P7D"),types=["ROLLBACK"]).count /
      deploys(lookback=duration("P7D"),types=["DEPLOY"]).count) <= 0.05
    description: Change Failure Rate
    weight: 25
    failureMessage: Less than 95% of deployments in the last 7 days were successful
    level: Gold
  - title: Incident was ack'ed within 5 minutes
    expression: oncall.analysis(lookback = duration("P7D")).meanSecondsToFirstAck <= 300
    description: MTTA (Mean time to acknowledge)
    weight: 25
    failureMessage: Incidents in the last 7 days were not ack'ed within 5 minutes
    level: Silver
  - title: Last commit was within 24 hours
    expression: git.lastCommit.freshness <= duration("P1D")
    description: Lead Time for Changes
    weight: 25
    failureMessage: No commits in the last 24 hours
    level: Bronze
  - title: Averaging at least one deploy a day in the last 7 days
    expression: deploys(lookback=duration("P7D"),types=["DEPLOY"]).count >= 7
    description: Deployment Frequency
    weight: 25
    failureMessage: No deployments in the last 7 days
filter:
  category: SERVICE
  query: has_group("production")
evaluation:
  window: 4
Objects
Scorecard: {
  name: String,
  tag: String,
  description: String?,
  draft: Boolean?,
  ladder: Ladder?,
  rules: List<Rule>,
  filter: Filter?,
  evaluation: Evaluation?
}
name | description |
---|---|
name | The human-readable name of the Scorecard |
tag | A unique slug for the Scorecard consisting of only alphanumeric characters and dashes |
description | A human-readable description of the Scorecard |
draft | Whether or not the Scorecard is a draft |
ladder | The ladder to apply to the rules |
rules | A list of rules that are evaluated each time the Scorecard is evaluated |
filter | Lets you exclude entities from being evaluated by this Scorecard |
evaluation | Lets you change the evaluation window for this Scorecard |
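Only name, tag, and rules are required by the schema above. As a minimal sketch, a descriptor that reuses the git.lastCommit.freshness expression from the earlier example (the name, tag, and rule details here are hypothetical) could be as small as:
---
name: Repository Activity
tag: repository-activity
rules:
  - title: Last commit was within 30 days
    expression: git.lastCommit.freshness <= duration("P30D")
    weight: 100
    failureMessage: No commits in the last 30 days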
Ladder: {
  levels: List<Level>
}
name | description |
---|---|
levels | The levels of the ladder |
Level: {
  name: String,
  rank: Int,
  description: String?,
  color: String
}
name | description |
---|---|
name | The human-readable name of the level |
rank | The rank of the Level within the ladder. Higher rank is better. |
description | A human-readable description of the level |
color | The hex color of the badge that is displayed with the level |
Rule: {
  title: String?,
  expression: String,
  description: String?,
  weight: Int,
  failureMessage: String?,
  level: String?,
  filter: Filter?
}
name | description |
---|---|
title | The human-readable name of the Rule |
expression | The CQL expression to evaluate; must evaluate to a boolean |
description | A human-readable description of the Rule, as shown in the example above |
weight | The number of points this Rule provides when successful |
failureMessage | A human-readable message that will be presented when the Rule is failing |
level | The name of the level this rule is associated with; can be null even when a ladder is present |
filter | Lets you exclude entities from being evaluated for this Rule |
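A rule-level filter is not shown in the dora.yml example above. As a sketch, the Deployment Frequency rule could be limited to entities in a hypothetical "tier-1" group like this:
- title: Averaging at least one deploy a day in the last 7 days
  expression: deploys(lookback=duration("P7D"),types=["DEPLOY"]).count >= 7
  description: Deployment Frequency
  weight: 25
  failureMessage: No deployments in the last 7 days
  filter:
    query: has_group("tier-1")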
Filter: {
  category: String?,
  query: String?
}
name | description |
---|---|
category | By default, Scorecards are evaluated against all services. You can specify the category as RESOURCE to evaluate a Scorecard against resources or DOMAIN to evaluate a Scorecard against domains |
query | A CQL query that is run against the category; only entities matching this query will be evaluated by the Scorecard |
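For example, assuming has_group works the same way for resources as it does for services, a filter like the following sketch would evaluate the Scorecard only against resources in a hypothetical "databases" group:
filter:
  category: RESOURCE
  query: has_group("databases")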
Evaluation: {
  window: Int?
}
name | description |
---|---|
window | By default, Scorecards are evaluated every 4 hours. If you would like to evaluate a Scorecard less frequently, you can override the evaluation window, which can help with rate limits. Note that Scorecards cannot be evaluated more than once every 4 hours |
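For example, assuming the window is expressed in hours (the dora.yml example above sets window: 4, matching the 4-hour default), the following sketch would evaluate the Scorecard roughly once a day:
evaluation:
  window: 24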