If you're dealing with sensitive data and don't want information to leave your premises, Cortex can be deployed on-prem into your own Kubernetes cluster.
We provide the Cortex installation as a Helm chart, making it easy to get started and manage.
You'll need a few things to get started with Cortex Self-Managed:
- A running k8s cluster with permission to talk to the public internet (our Docker images are hosted on GitHub)
- Permission to add image pull secrets to the cluster
- Helm Package Manager ready to use
- The Cortex Self-Managed Helm chart, which is provided to you by the Cortex team when beginning an evaluation. If you haven't been granted access, reach out to our team for a demo of the platform.
You'll need to configure your cluster to talk to Cortex. Create an image pull secret using the token provided by the Cortex team (the `--docker-server` and `--docker-username` values below are placeholders to adjust; since our images are hosted on GitHub, the registry shown assumes GitHub Container Registry):

```shell
kubectl create secret docker-registry cortex-docker-registry-secret \
  --docker-server=ghcr.io \
  --docker-username=<your GitHub username> \
  --docker-password=<Token provided to you by the Cortex team>
```

Once you've created the secret, `cd` into the Helm chart directory and run `helm install` with your desired configuration:

```shell
helm install cortex .
```
The README contains some additional tips on configuring Cortex when running locally in a Minikube cluster.
If you're installing Cortex into a cloud cluster, for example EKS/GKE, you'll need some additional configuration.
- Ensure that your `values.yaml` file in the Helm chart has been modified as desired.
- Install any required plugins to your cluster, e.g. `kubectl apply` the nginx plugin for AWS.
- Make sure the Docker secret in the previous section has been added to the cluster
- Install the Helm chart normally
Once you've run the Helm install, you'll want to get the hostnames wired up:

- Get the accessible DNS hostnames for the services that the Helm chart exposes (instructions differ based on your cloud provider).
- Update the `app.host` field in your `values.yaml` file to point to these respective hostnames. If your cloud provider/LB/ingress automatically sets up TLS, make sure to change the `protocol` value in the chart as well.
- Run `helm upgrade` to apply the changes.
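As a sketch, the corresponding `values.yaml` entries might look like the following. The exact nesting is an assumption — `app.host` and `protocol` are the fields named in this guide, so check the chart's own `values.yaml` for the real layout and hostname:

```yaml
app:
  # DNS hostname exposed by your cloud load balancer / ingress (placeholder value)
  host: cortex.internal.example.com
  # change to https if your provider/LB/ingress terminates TLS for you
  protocol: https
```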
There are a few things to note when setting up the chart:

- The Helm chart spins up an ephemeral datastore with no persistent volumes; see Setting up a persistent DB.
- Auth is disabled by default. This means anyone who wants to play with Cortex will have access to the dashboard through a "demo" user.
- The chart does not ship with built-in TLS/certs, but if it's deployed into a cluster behind your existing load balancer with TLS, you just need to change the `protocol` value in the chart.
Once you have Cortex running, you may want to productionalize it by setting up a persistent DB or SSO.
Cortex requires a Postgres Database (version 9+), and does not support other datastores. We recommend using your standard DB provider, for example a hosted RDS or Cloud SQL instance.
Note: You must create (or utilize an existing) instance of Postgres, and the database within it!
- Create a Postgres database (9+) with UTF8 encoding that will be accessible from your instance. Be sure to create and note:
  - The database instance and database, with the hostname, port, and database name
  - The username and password to access the database
- Set up database user permissions. The database user needs:
  - `ALL` on the public schema to create/modify tables
  - `DELETE` on all tables in the public schema
  - This is tied to the default
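The permissions above can be expressed as Postgres grants. A hedged sketch, assuming a database user named `cortex_user` (a placeholder, not a name from this guide):

```sql
-- ALL on the public schema (USAGE + CREATE) so the app can create/modify tables
GRANT ALL ON SCHEMA public TO cortex_user;
-- DELETE on all existing tables in the public schema
GRANT DELETE ON ALL TABLES IN SCHEMA public TO cortex_user;
```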
- Create a k8s secret which will hold your DB credentials:

```shell
kubectl create secret generic cortex-secret \
  --from-literal DB_HOST="hostname (without jdbc:postgresql:// or the port)" \
  --from-literal DB_PORT=5432 \
  --from-literal DB_USERNAME=postgres \
  --from-literal DB_NAME=postgres \
  --from-literal DB_PASSWORD=<your database password>
```
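Note that `DB_HOST` must be the bare hostname. If you're starting from a JDBC connection string, a small (hypothetical) helper can strip the prefix, port, and database path:

```shell
# Hypothetical helper: extract the bare hostname from a JDBC URL, since
# DB_HOST must not include the jdbc:postgresql:// prefix or the port.
jdbc_url="jdbc:postgresql://db.example.com:5432/cortex"  # example value
host=$(echo "$jdbc_url" | sed -E 's#^jdbc:postgresql://##; s#[:/].*$##')
echo "$host"  # prints db.example.com
```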
- (Alternative to Step 1 – creating secret manually) Add your DB credentials to a new secret (keeping in mind k8s secrets are base64 encoded)
- Uncomment the `secret` field in `values.yaml`, and add the name of the secret you just created
- Remove the `templates/backend/configmap.yaml` file in the Helm chart
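Since k8s secrets store their values base64-encoded, you can encode a credential before placing it in a secret manifest. A quick example (the password shown is a placeholder):

```shell
# Encode a placeholder DB password for use in a Kubernetes secret manifest.
printf '%s' 'my-db-password' | base64
# → bXktZGItcGFzc3dvcmQ=
```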
Recommended resource footprint:

- 2 instances of the backend container, with 3.5GB memory per instance (2 cores recommended)
- 1 instance of frontend container, low memory requirement (<500mb is enough, it's a static nginx proxy)
- Postgres DB v9+ with 15GB storage and 4GB memory with max number of connections ≥100. The max number of connections should be ≥2x the backend's connection pool size, which is 50 by default (25 per instance).
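The connection math above can be sanity-checked with a quick shell snippet (the instance counts mirror the defaults described here):

```shell
# With 2 backend instances at 25 pooled connections each, the total pool
# is 50, so Postgres max_connections should be >= 100 (2x the pool size).
instances=2
per_instance=25
pool=$((instances * per_instance))
min_max_connections=$((pool * 2))
echo "$pool $min_max_connections"  # prints: 50 100
```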
Debugging Kubernetes and Helm charts can be painful at times. Here are a few things that can help along the way:

- Try running `helm install` with the `--dry-run` flag. This prints out all the rendered configuration without actually installing the chart. More information on debugging the `helm install` command can be found here.