Automate AI Readiness
To configure your Cortex workspace for AI Readiness, we recommend the following actions:
Connect Data: Ingest data and ensure ownership is assigned to your entities
Standardize: Configure a Scorecard to enforce standards for your AI tools
Streamline: Automate AI tooling requests via Workflows, and streamline your ability to retrieve information from Cortex using Cortex MCP
Improve: Review Eng Intelligence metrics and look for trends relating to AI tool usage
Use Cortex features to drive AI Readiness
The sections below explain how to configure Cortex features to drive AI Readiness.
Step 1: Ingest data and solve ownership 🔌
For AI initiatives to succeed, it is crucial to import your services, resources, infrastructure, and other entities, and to have clear visibility into the ownership of your entities.
Connecting your entities to Cortex establishes a single source of truth across your engineering organization. It enables you to get timely information from Cortex MCP, track progress via Scorecards, automate processes with Workflows, and gain insights from Eng Intelligence.
In addition to the built-in entity types that Cortex supports, you can create custom entity types to represent your AI-related entities. For example, you might want to create a type called "AI Models" to categorize models.

Setting ownership of entities ensures that every service and system is clearly linked to accountable teams or individuals, enabling faster incident response, reducing handoff friction, and making it possible to enforce standards consistently.
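As a starting point for auditing ownership, the check can be sketched in a few lines of Python. This is an illustrative sketch only: the `fetch_entity` endpoint path and the response shape are assumptions based on the API base URL shown later in this guide, not a documented contract — verify them against your workspace's API reference.

```python
# Illustrative sketch: flag entities that have no owners attached.
# The /catalog endpoint path and the payload shape are ASSUMPTIONS --
# check the Cortex API reference for your workspace before relying on them.
import json
import urllib.request

API_BASE = "https://api.getcortexapp.com/api/v1"  # base URL from this guide's example

def fetch_entity(tag: str, api_key: str) -> dict:
    """Fetch one entity's catalog details (assumed endpoint; not called below)."""
    req = urllib.request.Request(
        f"{API_BASE}/catalog/{tag}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def missing_owners(entities: list[dict]) -> list[str]:
    """Return the tags of entities whose descriptor has no owners."""
    return [e["tag"] for e in entities if not e.get("owners")]

# Example with a mocked payload (no network call; hypothetical entity tags):
sample = [
    {"tag": "parser-service", "owners": [{"type": "GROUP", "name": "ml-platform"}]},
    {"tag": "embedding-api", "owners": []},
]
print(missing_owners(sample))  # ['embedding-api']
```

Entities surfaced this way are candidates for ownership assignment before you begin enforcing AI Readiness standards.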
Relevant integrations
To focus on driving AI Readiness, Cortex recommends integrating with tools that provide visibility and control over code, deployments, monitoring, on-call, and documentation. This allows you to see the impact that AI tooling has on efficiency across engineering processes.
Make sure you have configured integrations for the following categories:
Version control: Azure DevOps, Bitbucket, GitHub, GitLab
Enforce best practices to ensure you're in a good position to adopt more AI tooling
Project management: GitHub, Jira, Azure DevOps, ClickUp
Track incidents, bugs, and compliance issues
External docs: Cortex also recommends linking to runbooks and documentation for your entities, ensuring your users have access to critical information.
Use Scorecards to ensure that your entities contain external docs links to runbooks.
With your data in Cortex, you have a jumping-off point to start driving AI Readiness.
Step 2: Configure Scorecards to track AI Readiness standards 📋
Action Item: Create a Scorecard for AI Readiness
Scorecards automate the process of checking whether services meet criteria for your AI Readiness goals. Cortex's AI Readiness template includes a set of predefined rules which can be customized based on your organization's requirements, infrastructure, and goals. It is structured into three levels — Bronze, Silver, and Gold — with each representing increasing levels of AI Readiness.
The Scorecard template contains rules that check for industry best practices, such as:
Service ownership is defined
SLOs are defined
AI model security scanning
Step 2.1: Create the Scorecard and configure the basics
On the Scorecards page in your workspace, click Create Scorecard.
On the AI Readiness template, click Use.
Configure basic settings, including the Scorecard's name, unique identifier, description, and more.
Learn about configuring the basic settings in the Creating a Scorecard documentation.
Step 2.2: Review and modify the rules
While Cortex's template is based on common industry standards, you may need to adjust the rules based on which tools you use and how your organization prioritizes standards and requirements. You can reorder, delete, and edit rules; add more rules to a level; and assign more points to a rule to signify its importance.
When adding or changing the template rules, you can select from a list of available pre-built rules. Behind each rule is a Cortex Query Language (CQL) query; you can also write your own queries to further refine your rules.
Step 3: Automate processes via Workflows ⚙️
Action item: Configure Workflows
You can use Workflows to streamline and standardize processes relating to your AI Readiness initiatives.
Workflow to automate AI tool requests
You can use Workflows to automate the process of requesting access to AI tools. See an example Workflow for automating GitHub Copilot access in Dev onboarding: User access to Copilot.
Workflows to establish adherence to best practices
You can add manual approval steps in a Workflow to require sign-off from specific team members before a service is considered production-ready, ensuring accountability and providing an audit trail.
See the documentation on configuring a Manual approval block.
When Scaffolding new services, you can use templates to ensure that every new service starts with baseline standards (e.g., an incident response runbook exists, SLOs are configured, etc.).
See the documentation on registering a Scaffolder template and configuring a Scaffolder block.
Workflows based on AI Readiness Scorecards
In a Workflow, you can use an HTTP request to get an individual entity's score or the latest scores for all entities on your AI Readiness Scorecard, then configure additional blocks to take actions based on the score.
For example, you could create a Workflow that blocks deployment based on Scorecard scores, ensuring that a deployment is blocked if the entity has not met your standards for AI Readiness.
See an example of this Workflow in the template "Deploy to prod based on Scorecard score" in your Cortex workspace:
The template references a Production Readiness Scorecard by default, but you can update its HTTP Request block to point to your AI Readiness Scorecard.
When the Workflow runs, it checks whether the entity has achieved the "Gold" level standard in the Scorecard. If it has, the deployment continues. If it has not, the Workflow automatically sends a Slack message to notify the entity owner.
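The gating logic this template implements can be sketched as a small decision function. This is a minimal sketch, not the template's actual implementation: the level names come from the Scorecard described above, but the payload shape is an assumption — adapt it to the actual response of your Scorecard scores HTTP block.

```python
# Minimal sketch of the gating logic described above: deploy only if the
# entity has reached the "Gold" level on the AI Readiness Scorecard,
# otherwise notify the owner. The payload shape is an ASSUMPTION.
LEVELS = ["Bronze", "Silver", "Gold"]

def deployment_decision(score: dict, required_level: str = "Gold") -> str:
    """Return 'deploy' when the entity meets the required level, else 'notify-owner'."""
    achieved = score.get("level")  # e.g. "Silver"
    if achieved in LEVELS and LEVELS.index(achieved) >= LEVELS.index(required_level):
        return "deploy"
    return "notify-owner"

print(deployment_decision({"entityTag": "parser-service", "level": "Gold"}))   # deploy
print(deployment_decision({"entityTag": "embedding-api", "level": "Silver"}))  # notify-owner
```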
Obtain Scorecard scores within a Workflow to use in subsequent actions
See the example below demonstrating how to obtain an entity's Scorecard score in an HTTP block within a Workflow:
Add an HTTP request block to your Workflow.
Enter a name and unique slug for the block, then configure the remaining fields:
HTTP method: GET
URL: Enter the Cortex API URL for obtaining the scores, e.g., https://api.getcortexapp.com/api/v1/scorecards/<unique-scorecard-tag>/scores?entityTag={{context.entity.tag}}
Headers: Add the following headers:
Content-Type: application/json
Authorization: Bearer {{context.secrets.cortex_api_key}}
Save the block.
You can reference the output of this block in subsequent blocks, allowing you to streamline the follow-up actions you take based on an entity's level of AI Readiness.
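Expressed in Python, the request the HTTP block assembles looks like the sketch below. The URL and headers come from the steps above; the Scorecard tag and entity tag used in the example are hypothetical placeholders.

```python
# Sketch of the request the HTTP block above assembles, written out in
# Python so the pieces are visible. URL and headers are taken from the
# steps above; the tags used in the example are hypothetical.
import urllib.parse
import urllib.request

def build_scores_request(scorecard_tag: str, entity_tag: str, api_key: str) -> urllib.request.Request:
    url = (
        "https://api.getcortexapp.com/api/v1/scorecards/"
        f"{scorecard_tag}/scores?{urllib.parse.urlencode({'entityTag': entity_tag})}"
    )
    return urllib.request.Request(
        url,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_scores_request("ai-readiness", "parser-service", "<api-key>")
print(req.full_url)
# https://api.getcortexapp.com/api/v1/scorecards/ai-readiness/scores?entityTag=parser-service
```

In the Workflow itself you never write this code — the HTTP request block performs the call — but seeing the assembled request can help when debugging the block's configuration.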
Step 4: Configure Cortex MCP 🤖
Action Item: Configure Cortex MCP
Cortex MCP can significantly boost efficiency by providing instant, conversational access to information about teams, services, and operational readiness directly from your MCP client. Adopting Cortex MCP as part of your AI initiatives helps your team move faster:
It provides real-time, structured answers: Ask questions like "What are next steps for my AI Readiness Scorecard?" or "Give me all the details for parser-service." MCP fetches the data in real time from Cortex's API, ensuring accurate and up-to-date information about service health, ownership, and operational readiness.
It gives you quick access to your centralized data: Efficiency goals can be slowed down by uncertainty over who owns models, pipelines, and other AI services. Use the MCP to quickly find out which teams are accountable for each tool in your environment.
It enables quick access to Scorecard details: If you implement Scorecards to measure AI-related initiatives, you can use the MCP to understand quickly how healthy your services are and how you can improve scores.
Step 5: Review Eng Intelligence metrics attributable to AI 📈
Action Items:
Review Eng Intelligence metrics and establish baselines.
Use Eng Intelligence features — the DORA dashboard, Velocity Dashboard, and Metrics Explorer — to understand how well teams are performing before and after meeting your standards for AI readiness.

Review trends in areas such as deployment frequency, incident response, and other indicators that are important to your organization. This helps you identify areas where teams or services are not meeting standards, enabling you to take action to improve.
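A baseline-versus-current comparison of this kind can be sketched in a few lines. This is an illustrative calculation only (not a Cortex API), with made-up deployment counts, showing one simple way to quantify a trend before and after rolling out AI tooling.

```python
# Illustrative sketch (NOT a Cortex API): compare a metric such as weekly
# deployment frequency after AI tooling adoption against a baseline period.
# The numbers below are made up for the example.
from statistics import mean

def percent_change(baseline: list[float], current: list[float]) -> float:
    """Percent change of the current period's mean vs. the baseline mean."""
    base = mean(baseline)
    return (mean(current) - base) / base * 100

before = [4, 5, 3, 4]  # weekly deployments before adoption
after = [6, 7, 5, 6]   # weekly deployments after adoption
print(f"{percent_change(before, after):+.1f}%")  # +50.0%
```

Eng Intelligence surfaces these trends for you; a quick calculation like this is mainly useful for sanity-checking a baseline you intend to track.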
AI Readiness in action
Learn about what ongoing AI Readiness looks like in AI Readiness in action.