# Set benchmarks for AI readiness

You can use Cortex's AI Readiness Scorecard template to set benchmarks and check whether your services meet the criteria for your goals. The AI Readiness template evaluates whether services have the foundational software engineering practices in place to safely and effectively adopt AI technologies.

Learn how to use other features for an AI Readiness use case in [Solutions: AI Readiness](/solutions/ai-readiness.md).

## Create a Scorecard for AI Readiness

### Step 1: Create the Scorecard and configure its basic settings

1. On the [**Scorecards** page](https://app.getcortexapp.com/admin/scorecards) in your workspace, click **Create Scorecard**.
2. On the `AI Readiness` template, click **Use**.
3. Configure basic settings, including the Scorecard's name, unique identifier, description, and more.
   * Learn about configuring the basic settings in the [Creating a Scorecard documentation](/standardize/scorecards/create.md).

### Step 2: Review and modify rules

Cortex's templated rules are based on common industry standards:

<details>

<summary>AI Readiness: Bronze level rules</summary>

* Version control in use\
  `git != null`
* Basic documentation exists\
  `git.fileExists("README.md") OR git.fileExists("docs/README.md") OR git.fileExists("API.md")`
* Service ownership defined\
  `ownership.allOwners().length > 0`
* Basic health monitoring\
  `datadog.monitors().length > 0`
* Dependency management\
  `dependencies.in().length > 0 OR dependencies.out().length > 0`

</details>
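
These templated rules can be adapted to your own conventions. As a sketch, if your teams keep documentation under different filenames, the basic documentation rule could be broadened with additional `git.fileExists` checks (the `docs/index.md` and `CONTRIBUTING.md` paths here are illustrative, not part of the template):

```
git.fileExists("README.md") OR git.fileExists("docs/index.md") OR git.fileExists("CONTRIBUTING.md")
```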

<details>

<summary>AI Readiness: Silver level rules</summary>

* Automated CI/CD pipeline\
  `git.fileExists(".github/workflows/*.yml") OR git.fileExists("Jenkinsfile") OR git.fileExists(".gitlab-ci.yml") OR git.fileExists("azure-pipelines.yml")`
* Integration testing implemented\
  `git.codeSearch(query = "integration.*test|test.*integration", fileExtension = "*").length > 0 OR captures("integration-tests", custom("integration_tests_exist")) == "enabled"`
* SLO defined\
  `slos().length > 0`
* Test coverage minimum met\
  `captures("test-coverage", sonarqube.metric("coverage") >= 80)`
* Secret management implemented\
  `git.fileExists(".github/workflows/*") AND git.codeSearch(query = "secret", fileExtension = "yml").length > 0`
* Incident response runbook\
  `links("RUNBOOK").length > 0`
* Deployment rollback capability\
  `captures("rollback", custom("rollback_capability")) == "enabled" OR git.codeSearch(query = "rollback|revert|previous.*version", fileExtension = "*").length > 0`

</details>

<details>

<summary>AI Readiness: Gold level rules</summary>

* Distributed tracing implemented\
  `captures("tracing", custom("distributed_tracing")) == "enabled" OR git.codeSearch(query = "jaeger|zipkin|opentelemetry|tracing", fileExtension = "*").length > 0`
* Change approval process\
  `links("CHANGE_APPROVAL").length > 0`
* Data classification documented\
  `git.fileExists("DATA-CLASSIFICATION.md") OR git.codeSearch(query = "data.*classif|pii|sensitive.*data|gdpr", fileExtension = "md").length > 0`
* Comprehensive audit logging\
  `captures("audit-log", custom("audit_logging")) == "enabled" OR git.codeSearch(query = "audit.*log|compliance.*log|access.*log", fileExtension = "*").length > 0`
* Feature flags for controlled rollouts\
  `launchDarkly != null`
* Zero critical and high vulnerabilities\
  `captures("critical-vulns", custom("critical_vulnerabilities")) == 0 AND captures("high-vulns", custom("high_vulnerabilities")) == 0`
* AI model security scanning\
  `captures("model-scan", custom("ai_model_scanning")) == "enabled" OR git.codeSearch(query = "model.*scan|adversarial.*test|bias.*detect", fileExtension = "*").length > 0`

</details>

You can reorder, delete, and edit rules, add rules to a level, and assign more points to a rule to signify its importance. Behind each rule is a [Cortex Query Language (CQL)](/standardize/cql.md) query; you can edit the existing CQL or write your own queries to further refine your rules.
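
For example, a sketch of one such refinement: the Silver-level test coverage rule could be tightened from 80% to 90% by editing its threshold (adjust the metric and threshold to match your SonarQube configuration):

```
captures("test-coverage", sonarqube.metric("coverage") >= 90)
```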

