Set benchmarks for AI readiness

You can use Cortex's AI Readiness Scorecard template to set benchmarks and check whether your services meet the criteria for your goals. The AI Readiness template evaluates whether services have the foundational software engineering practices in place to safely and effectively adopt AI technologies.

Learn how to use other features for an AI Readiness use case in Solutions: AI Readiness.

Create a Scorecard for AI Readiness

Step 1: Create the Scorecard and configure its basic settings

  1. On the Scorecards page in your workspace, click Create Scorecard.

  2. On the AI Readiness template, click Use.

  3. Configure basic settings, including the Scorecard's name, unique identifier, description, and more.

Step 2: Review and modify rules

Cortex's templated rules are based on common industry standards:

AI Readiness: Bronze level rules
  • Version control in use: `git != null`

  • Basic documentation exists: `git.fileExists("README.md") OR git.fileExists("docs/README.md") OR git.fileExists("API.md")`

  • Service ownership defined: `ownership.allOwners().length > 0`

  • Basic health monitoring: `datadog.monitors().length > 0`

  • Dependency management: `dependencies.in().length > 0 OR dependencies.out().length > 0`
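
As a sketch of how these checks compose, a Bronze-style rule could require both basic documentation and a defined owner in a single CQL expression, reusing only the `git.fileExists` and `ownership.allOwners()` functions shown above (the combination itself is illustrative, not part of the template):

```cql
git.fileExists("README.md") AND ownership.allOwners().length > 0
```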

AI Readiness: Silver level rules
  • Automated CI/CD pipeline: `git.fileExists(".github/workflows/*.yml") OR git.fileExists("Jenkinsfile") OR git.fileExists(".gitlab-ci.yml") OR git.fileExists("azure-pipelines.yml")`

  • Integration testing implemented: `git.codeSearch(query = "integration.*test|test.*integration", fileExtension = "*").length > 0 OR captures("integration-tests", custom("integration_tests_exist")) == "enabled"`

  • SLO defined: `slos().length > 0`

  • Test coverage minimum met: `captures("test-coverage", sonarqube.metric("coverage") >= 80)`

  • Secret management implemented: `git.fileExists(".github/workflows/*") AND git.codeSearch(query = "secret", fileExtension = "yml").length > 0`

  • Incident response runbook: `links("RUNBOOK").length > 0`

  • Deployment rollback capability: `captures("rollback", custom("rollback_capability")) == "enabled" OR git.codeSearch(query = "rollback|revert|previous.*version", fileExtension = "*").length > 0`
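
If the templated 80% coverage threshold is too lenient for your goals, the same `sonarqube.metric` expression can be edited in place; for example, a hypothetical stricter variant:

```cql
captures("test-coverage", sonarqube.metric("coverage") >= 90)
```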

AI Readiness: Gold level rules
  • Distributed tracing implemented: `captures("tracing", custom("distributed_tracing")) == "enabled" OR git.codeSearch(query = "jaeger|zipkin|opentelemetry|tracing", fileExtension = "*").length > 0`

  • Change approval process: `links("CHANGE_APPROVAL").length > 0`

  • Data classification documented: `git.fileExists("DATA-CLASSIFICATION.md") OR git.codeSearch(query = "data.*classif|pii|sensitive.*data|gdpr", fileExtension = "md").length > 0`

  • Comprehensive audit logging: `captures("audit-log", custom("audit_logging")) == "enabled" OR git.codeSearch(query = "audit.*log|compliance.*log|access.*log", fileExtension = "*").length > 0`

  • Feature flags for controlled rollouts: `launchDarkly != null`

  • Zero critical and high vulnerabilities: `captures("critical-vulns", custom("critical_vulnerabilities")) == 0 AND captures("high-vulns", custom("high_vulnerabilities")) == 0`

  • AI model security scanning: `captures("model-scan", custom("ai_model_scanning")) == "enabled" OR git.codeSearch(query = "model.*scan|adversarial.*test|bias.*detect", fileExtension = "*").length > 0`

You can reorder, delete, and edit rules, add more rules to a level, and assign more points to a rule to signify its importance. Behind each rule is a Cortex Query Language (CQL) query; you can edit the existing CQL or write your own queries to further refine your rules.
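
For example, a hypothetical custom rule could require that a service have both a linked runbook and at least one active monitor, reusing the `links()` and `datadog.monitors()` expressions from the template (this pairing is an illustration, not a templated rule):

```cql
links("RUNBOOK").length > 0 AND datadog.monitors().length > 0
```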
