Establish consistent AI security controls
Organizations often experience pain points in their AI practices: AI models or pipelines live in isolated repositories, security and compliance requirements are applied inconsistently, and there's no unified way to measure quality across teams.
To improve consistency across your AI practices:
Launch an AI Governance Scorecard. Cortex provides an AI Governance template in-app, which you can modify to fit your organization's needs.
Launch an Initiative associated with the Scorecard, which gives your engineers a deadline for completing certain goals.
Use reports and Cortex MCP to better understand progress and next steps.
Create an AI Governance Scorecard
Step 1: Create the Scorecard and configure its basic settings
On the Scorecards page in your workspace, click Create Scorecard.
On the AI Governance template, click Use.
Configure basic settings, including the Scorecard's name, unique identifier, description, and more.
Learn about configuring the basic settings in the Creating a Scorecard documentation.
Step 2: Review and modify rules
Cortex's templated rules are based on common industry standards:
AI Governance: Bronze level rules
Secrets scanning and management:
git.fileExists(".github/workflows/*") AND git.codeSearch(query = "secret", fileExtension = "yml").length > 0

PR reviews required from two or more reviewers:
git.branchProtection().numReviewsRequired > 1

AI security documentation and guidelines:
git.fileExists("AI-SECURITY.md") OR git.fileExists("docs/ai-security.md") OR git.fileExists("docs/AI_USAGE_POLICY.md") OR git.fileExists("RESPONSIBLE_AI.md")

Dependency vulnerability scanning:
git.fileExists(".github/workflows/*") AND (git.codeSearch(query = "dependabot", fileExtension = "yml").length > 0 OR git.codeSearch(query = "safety", fileExtension = "yml").length > 0 OR git.codeSearch(query = "snyk", fileExtension = "yml").length > 0)

PR reviews from CODEOWNERS:
git.branchProtection().codeOwnerReviewsRequired == "true"

AI service configuration security:
git.fileExists("AI-SERVICE-CONFIGURATION-POLICY.md")
AI Governance: Silver level rules
MITRE ATLAS matrix:
custom("owners-reviewed-mitre-atlas-matrix") == "true"

Monitoring and alerting for AI applications:
datadog.monitors().filter((monitor) => monitor.name.matches(".*ai.*|.*model.*|.*ml.*")).length > 0

Automated security testing in CI/CD:
git.fileExists(".github/workflows/*") AND (git.codeSearch(query = "security", fileExtension = "yml").length > 0 OR git.codeSearch(query = "sast", fileExtension = "yml").length > 0 OR git.codeSearch(query = "container.*scan", fileExtension = "yml").length > 0)

Data privacy and PII protection measures:
git.fileExists("PRIVACY.md") OR git.fileExists("DATA-HANDLING.md") OR git.fileExists("docs/privacy.md") OR git.codeSearch(query = "PII", fileExtension = "md").length > 0

AI model access controls and authentication:
git.fileExists("AI-MODEL-ACCESS-CONTROLS.md")

External AI vendor risk assessment:
git.fileExists("AI-VENDORS.md") OR git.fileExists("APPROVED-AI-SERVICES.md") OR git.fileExists("docs/ai-vendor-security.md")
AI Governance: Gold level rules
Incident response plan for AI security:
git.fileExists("AI-INCIDENT-RESPONSE.md") OR git.fileExists("docs/ai-incidents.md") OR git.codeSearch(query = "ai.*incident|model.*breach", fileExtension = "md").length > 0

Stanford NLP version:
packageVersion("stanfordnlp") >= semver("4.5.10")

AI ethics and bias testing framework:
git.fileExists("ETHICS.md") OR git.fileExists("BIAS-TESTING.md")

AI security training and awareness documentation:
git.fileExists("AI-TRAINING.md") OR git.fileExists("docs/ai-security-training.md") OR git.codeSearch(query = "training|awareness|security.*guideline", fileExtension = "md").length > 0

No secret scanning vulnerabilities:
git.numOfVulnerabilities(source=["GITHUB_SECRET_SCANNING"]) == 0

Adversarial attack detection and prevention:
git.fileExists("ADVERSARIAL-TESTING.md") OR git.codeSearch(query = "tests/*adversarial*").length > 0

No critical vulnerabilities:
git.numOfVulnerabilities(severity=["CRITICAL"]) == 0

AI compliance and regulatory documentation:
git.fileExists("AI-COMPLIANCE.md") OR git.fileExists("NIST-AI-RMF.md") OR git.fileExists("docs/ai-governance.md") OR git.codeSearch(query = "compliance|regulation|gdpr|nist", fileExtension = "md").length > 0

OpenNLP version:
packageVersion("opennlp") >= semver("2.5.5")
You can reorder, delete, and edit rules, add more rules to a level, and assign more points to a rule to signify its importance. Behind each rule is a Cortex Query Language (CQL) query; you can edit the existing CQL or write your own queries to further refine your rules.
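For example, you could combine primitives from the template into a single, stricter rule that requires both AI security documentation and a clean secret scanning report. This sketch uses only the CQL functions shown above:

git.fileExists("AI-SECURITY.md") AND git.numOfVulnerabilities(source=["GITHUB_SECRET_SCANNING"]) == 0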
Create an AI Governance Initiative
To motivate change by a certain deadline, you can create an Initiative:
While viewing your AI Governance Scorecard, click Create Initiative in the upper right.
Configure the Initiative fields, including a descriptive name so your team members understand the purpose of the Initiative. For example, "Complete Bronze level AI Governance rules by end of quarter."
Make sure to enable notifications so users are notified if an entity they own is failing the Initiative's goal.
Save the Initiative.
After the Initiative is published, entity owners will be notified if their entity is not meeting the goal.
Learn more about creating Initiatives in the docs.
Measuring success
To understand your Scorecard's progress:
Ask Cortex MCP, "How is my AI Governance Scorecard doing?" Cortex MCP will respond with information about which entities are failing rules and suggested next steps.
Review reports: The Bird's Eye report gives insight into how entities are performing against the Scorecard by visualizing the data as a heat map.
You can also review Engineering Intelligence for impact on key engineering metrics, such as:
MTTR: With best practices in place, such as linked incident response plans and AI security runbooks, you should see faster incident response.
Incident frequency: You may see fewer incidents overall after implementing rules such as requiring more than one PR review and proactively ensuring there are no critical vulnerabilities.