Scorecards as code

Once Scorecards are incorporated into your regular engineering workflows, they become production-grade tools. Scorecards set the benchmarks for how your organization defines “good,” which impacts everything from security standards to on-call rotations. Because Scorecards are crucial to achieving and maintaining standards, you may want to put them under controlled access and version control.

With GitOps editing for Scorecards enabled, you manage Scorecards through GitOps, alongside managing the entities in your catalogs. Every Scorecard has its own YAML file. When GitOps is enabled, changes made to Scorecards will appear in GitOps logs.

Learn about the basics in the Scorecards documentation.

Setting up Scorecards as code

Step 1: Configure GitOps editing for Scorecards

  1. Navigate to Settings > GitOps then click the Scorecards tab.

  2. Toggle on the Enable GitOps for Scorecard editing option.


When GitOps for Scorecard editing is enabled:

  • It only affects Scorecards that are backed by a YAML file; such a Scorecard can only be edited via its corresponding YAML file in your Git repository.

  • You will still be able to create and delete Scorecards via the Cortex UI.

  • You can still edit a Scorecard via the UI if it is not being managed through GitOps.

Step 2: Add Scorecards to your Git repository

You can create a Scorecard in the Cortex UI then convert it to a YAML file, or you can create a Scorecard from scratch. See an example Scorecard YAML file below.

The recommended location for Scorecards is their own repository, separate from catalog entities, with definitions stored in the .cortex/scorecards directory at the repository's root. It is not recommended to put Scorecard definitions in a service repository, as Scorecards are not meant to be 1:1 with catalog entities.

For example, a simple repository might have the following structure:

.
├── .cortex
│    └── scorecards
│        ├── dora.yml
│        └── performance.yml
└── src
    ├── index.js
    └── ...

Any file found within the .cortex/scorecards directory will be automatically picked up and parsed as a Scorecard.
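For instance, a minimal Scorecard file in that directory could define just a name, a tag, and a single rule. (The exact set of required fields is inferred from the reference below; the rule expression is borrowed from the DORA example later on this page.)

```yaml
# .cortex/scorecards/minimal.yml
# A minimal, illustrative Scorecard definition.
name: Minimal Example
tag: minimal-example
description: The smallest useful Scorecard definition
rules:
  - title: Incident was ack'ed within 5 minutes
    expression: oncall.analysis(lookback = duration("P7D")).meanSecondsToFirstAck <= 300
    weight: 100
    failureMessage: Incidents in the last 7 days were not ack'ed quickly enough
```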

Convert an existing Scorecard to code

You may have Scorecards in Cortex that were created via the UI before you started working via GitOps. To convert an existing Scorecard into code:

  1. Navigate to the Scorecard's details page in Cortex.

  2. Click the vertical ellipsis icon at the top of the page, then click Export Scorecard YAML.

  3. Add the Scorecard's YAML file to your Git repository.

When GitOps for Scorecard editing is enabled and your Scorecard has been converted to a YAML file in your Git repository, you will no longer be able to edit it via the Cortex UI.

Example Scorecard YAML file

The dora-metrics-scorecard.yaml descriptor file might look something like this:

name: DORA Metrics
tag: dora-metrics
description: >-
  [DORA metrics](https://www.cortex.io/post/understanding-dora-metrics) are used by DevOps teams to measure their performance.

  The 4 key metrics are Lead Time for Changes, Deployment Frequency, Mean Time to Recovery, and Change Failure Rate.
draft: false
notifications:
  enabled: true
  scoreDropNotificationsEnabled: false
exemptions:
  enabled: true
  autoApprove: false
ladder:
  levels:
    - name: Bronze
      rank: 1
      description: Pretty good
      color: "#c38b5f"
    - name: Silver
      rank: 2
      description: Very good
      color: "#8c9298"
    - name: Gold
      rank: 3
      description: Excellent
      color: "#cda400"
rules:
  - title: Ratio of rollbacks to deploys in the last 7 days
    expression: >+
      (deploys(lookback=duration("P7D"),types=["ROLLBACK"]).length /
      deploys(lookback=duration("P7D"),types=["DEPLOY"]).length) <= 0.05
    identifier: 3c42fa96-b422-30a4-b75a-f8b1cc233408
    description: Change Failure Rate
    weight: 25
    failureMessage: Less than 95% of deployments in the last 7 days were successful
    level: Gold
  - title: Incident was ack'ed within 5 minutes
    expression: oncall.analysis(lookback = duration("P7D")).meanSecondsToFirstAck <= 300
    identifier: 8713f2c0-f161-3688-9f99-bcfaab476b63
    description: MTTA (Mean time to acknowledge)
    weight: 25
    failureMessage: Incidents in the last 7 days were not ack'ed
    level: Silver
  - title: Last commit was within the last 7 days
    expression: git.lastCommit().freshness < 7
    identifier: a16b7eeb-545b-359e-81a7-3946baacdd4b
    description: Deployment Frequency
    weight: 25
    failureMessage: No deployments in the last 7 days
filter:
  kind: GENERIC
  types:
    include:
      - service
  groups:
    include:
      - production
evaluation:
  window: 4

Scorecard YAML reference

Scorecard objects

  • name: The human-readable name of the Scorecard
  • tag: A unique slug for the Scorecard consisting of only alphanumeric characters and dashes
  • description: A human-readable description of the Scorecard
  • draft: Whether or not the Scorecard is a draft
  • notifications: Notification settings for the Scorecard
  • exemptions: Rule exemption settings for the Scorecard
  • ladder: The ladder to apply to the rules
  • rules: A list of rules that are evaluated each time the Scorecard is evaluated
  • filter: Enables the ability to exclude entities from being evaluated by this Scorecard
  • evaluation: Enables the ability to change the evaluation window for this Scorecard

Notifications

  • enabled: Whether or not to include the Scorecard in notifications
  • scoreDropNotificationsEnabled: Whether to notify about entities whose scores dropped after each evaluation of this Scorecard

Exemptions

  • enabled: Whether or not rule exemptions are enabled for the Scorecard
  • autoApprove: Whether or not rule exemptions are auto-approved for the Scorecard

Ladder

  • levels: The levels of the ladder

Level

  • name: The human-readable name of the level
  • rank: The rank of the level within the ladder; a higher rank is better
  • description: A human-readable description of the level
  • color: The hex color of the badge that is displayed with the level

Rules

  • title: The human-readable name of the rule
  • expression: The CQL expression to evaluate; must evaluate to a boolean
  • identifier: Identifier of the rule, unique within the Scorecard's scope. Can be set to a custom value of your choice; if omitted, a value is generated automatically.
  • description: A human-readable description of the rule
  • weight: The number of points this rule provides when successful
  • failureMessage: A human-readable message that is presented when the rule is failing
  • level: The name of the level this rule is associated with; can be null even when a ladder is present
  • filter: Enables the ability to exclude entities from being evaluated for this rule
  • effectiveFrom: Date when the rule starts being evaluated (e.g. 2024-01-01T00:00:00Z)
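The optional fields can be combined. For example, a rule tied to a ladder level that applies only to one group of entities and only starts being evaluated in 2025 might look like the sketch below. The group name is illustrative, the expression is reused from the example above, and the rule-level filter is assumed to share the same shape as the Scorecard-level Filter object.

```yaml
- title: Incident was ack'ed within 5 minutes
  expression: oncall.analysis(lookback = duration("P7D")).meanSecondsToFirstAck <= 300
  description: MTTA (Mean time to acknowledge)
  weight: 25
  failureMessage: Incidents in the last 7 days were not ack'ed quickly enough
  level: Silver
  effectiveFrom: 2025-01-01T00:00:00Z
  filter:
    kind: GENERIC
    groups:
      include:
        - production  # illustrative group name
```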

Filter

Note: One of types, groups, or query must be present for it to be considered a valid filter.

  • kind: The kind of filter to create. Currently only supports "GENERIC"
  • types: Types filter (to include / exclude specific types)
  • groups: Groups filter (to include / exclude specific groups)
  • query: A CQL query; only entities matching this query will be evaluated by the Scorecard
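As an alternative to types and groups, a filter can select entities with a CQL query. A sketch of what that might look like, with an illustrative query (adjust the CQL to your own catalog):

```yaml
filter:
  kind: GENERIC
  query: git != null  # illustrative CQL: only entities with a linked Git repository
```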

TypesFilter

Note: Only one of include or exclude can be specified at a time.

  • include: List of types to include in the set of entities
  • exclude: List of types to exclude from the set of entities
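Because only one of include or exclude may appear, a filter that evaluates every entity type except one could be written as follows (the excluded type name is illustrative):

```yaml
filter:
  kind: GENERIC
  types:
    exclude:
      - domain  # all other entity types are evaluated
```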

GroupsFilter

Note: Only one of include or exclude can be specified at a time.

  • include: List of groups to include in the set of entities
  • exclude: List of groups to exclude from the set of entities

Evaluation

  • window: The evaluation window for this Scorecard. By default, Scorecards are evaluated every 4 hours; that default can be changed under Settings > Scorecards > Default evaluation window. If you would like to evaluate a Scorecard less frequently, you can override the evaluation window here, which can help if you're encountering problems with rate limits. Note that Scorecards cannot be evaluated more frequently than the minimum set in Settings > Scorecards > Minimum evaluation window.
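For example, to relax a busy Scorecard to a daily evaluation instead of the 4-hour default (assuming the window is specified in hours, consistent with the `window: 4` value in the example above):

```yaml
evaluation:
  window: 24  # evaluate once per day instead of the default every 4 hours
```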
