Kubernetes Secret Management

Most apps consume secret data (e.g. API keys, database passwords, etc.). We explored managing configuration in the first part of this series using configmaps. However, configmaps are meant for storing non-sensitive configuration data: they are unencrypted at rest and are usually defined in a yaml file, which would likely be checked into source control.

Storing secrets in environment variables is a common practice, but is probably worse than hard-coding them in source code. Why? Because not only are they unencrypted, they are available to the process and any subprocess running your app. Any third-party package has access to the process's environment, and packages often dump the entire environment for debugging purposes, which leaks your secrets to server logs. Lastly, when using environment variables, you have no access control, no way to audit access and no way to revoke permission to access secrets.

k8s has native facilities for secret management, so why not use that? It turns out that, by default, k8s does not encrypt secrets at rest. You can configure k8s to do so, but it's complicated AND the encryption key is stored unencrypted. Because of this, you'd have to get the key from a dedicated secret manager like Hashicorp's Vault anyway, so why not use that (or a similar product) in the first place?

That's what we're going to do in this post: set up a high-availability (HA) vault instance in our local cluster.

Make sure you have a Docker daemon and minikube running, and the sample app's pods deployed. Additionally, install jq (for parsing JSON) and wget if they are not already installed. You should be in the state you were in at the end of the last post in this series.
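
On a Mac with Homebrew, that's:

$ brew install jq wget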

Installing Helm, Consul & Vault

We'll need to install a few prerequisites. Let's start with helm, a kind of package manager for k8s. It allows developers to package up their manifests so others can easily deploy vetted functionality into their clusters. These packages are called "charts." Once helm is installed, we're going to install consul, a service mesh that vault will use as its storage back-end. We'll also install vault itself, which manages the secrets.

First let's install helm on our host. I assume you are on a Mac. If you're on Linux or Windows, you'll have to do a bit of research:

$ brew install helm

Now we'll add the hashicorp helm repo. Hashicorp is responsible for consul and vault:

$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update

Next, we'll install consul:

$ wget https://raw.githubusercontent.com/kahunacohen/hello-k8s/3/vault/helm-consul-values.yaml
$ helm install consul hashicorp/consul --values helm-consul-values.yaml

And finally, we install vault:

$ wget https://raw.githubusercontent.com/kahunacohen/hello-k8s/3/vault/helm-vault-values.yaml
$ helm install vault hashicorp/vault --values helm-vault-values.yaml

Once we're done installing these charts, a few new pods should start in our cluster. Wait a few minutes
and you should see something like this:

$ kubectl get pods
NAME                                    READY   STATUS      RESTARTS   AGE
consul-consul-fpzwg                     1/1     Running     1          5d18h
consul-consul-server-0                  1/1     Running     1          5d18h
print-hello-27150680-tzl5j              0/1     Completed   0          7m15s
print-hello-27150685-mfx2j              0/1     Completed   0          2m15s
vault-0                                 0/1     Running     1          5d18h
vault-1                                 0/1     Running     1          5d18h
vault-2                                 0/1     Running     1          5d18h
vault-agent-injector-79d479cf7d-v2fq2   1/1     Running     3          5d18h
web-deployment-66968775cb-8td6c         1/1     Running     1          19h
web-deployment-66968775cb-ljp9w         1/1     Running     1          19h

What we see here are our app's deployment pods, a couple of completed cronjob pods, two pods related to consul and three redundant vault pods. We can also see some other objects that were automatically created for us by the helm charts. For example, we have a k8s service account for vault, which will be used to access the vault API:

$ kubectl get serviceaccounts vault
NAME    SECRETS   AGE
vault   1         5d18h

Initialize & Unseal the Vault

Now that we have our consul and vault pods running, we need to initialize and unseal the vault. We'll start by initializing it. Run vault operator init in the first vault pod:

$ kubectl exec vault-0 -- vault operator init -key-shares=2 -key-threshold=2 -format=json > cluster-keys.json

This initializes vault with two key shares, which means two keys will have to be entered to unseal the vault for use. It also writes those unseal keys to cluster-keys.json. Note: writing the keys to the file system like this is super insecure, and we're only doing it for demo purposes. In a real vault deployment, you'd never persist the keys this way.
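
Since we installed jq, we can pull out the values we'll need (the unseal_keys_hex array and the root_token field both appear in vault's JSON output):

$ jq -r '.unseal_keys_hex[]' cluster-keys.json
$ jq -r '.root_token' cluster-keys.json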

The vault starts sealed because everything in it is encrypted. It has storage facilities, but doesn't yet know how to decrypt what it stores. Unsealing is a secure process that uses Shamir's Secret Sharing algorithm: the master key is split into shards (key shares), and each shard must be entered by an operator until a threshold is met and the vault can be decrypted. The shards can be entered in any order, at different times, from different machines. For demonstration purposes, we are going to enter all the shards at one time and from one machine. In a production vault, doing this from one machine defeats the purpose of the shards, because all the keys needed to unseal the vault would be available in the command-line history.

Now unseal the first vault pod. Take the first key in the unseal_keys_hex array of cluster-keys.json and use it to start unsealing:

$ kubectl exec vault-0 -- vault operator unseal FIRST_HEX_KEY
...
$ kubectl exec vault-0 -- vault operator unseal SECOND_HEX_KEY
...

Do this for each vault pod.
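
For the remaining pods, a small loop saves some typing. A minimal sketch (substitute your own hex keys from cluster-keys.json):

$ for pod in vault-1 vault-2; do
    kubectl exec $pod -- vault operator unseal FIRST_HEX_KEY
    kubectl exec $pod -- vault operator unseal SECOND_HEX_KEY
  done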

Access the Vault Server

We'll need to expose the vault API server outside the vault pod. In a separate terminal do:

$ kubectl port-forward --address=127.0.0.1 vault-0 8200:8200
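
Back in your original terminal, you can sanity-check that the API is reachable via vault's standard sys/health endpoint:

$ curl -s http://127.0.0.1:8200/v1/sys/health | jq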

Enable & Configure Secrets

Now we'll enable the key/value version 2 secrets engine (kv-v2) in vault. Exec into the vault-0 pod and log in using the root vault token. Again, in a production environment we'd want to be more careful with our root token. Get the root token from the root_token field of cluster-keys.json, then:

$ kubectl exec -it vault-0 -- sh
# From this point on you should be execed into your vault pod...
$ vault login ROOT_TOKEN
...
$ vault secrets enable -path=secret kv-v2
...

Let's add a few secrets. Notice that we can set more than one key/value pair in a single command:

$ vault kv put secret/webapp/config username="static-user" password="static-password"
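
Still logged in to the pod, you can verify the write with vault kv get:

$ vault kv get secret/webapp/config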

Configure Authorization

Next we need to give the vault service account authorization to log in to the Kubernetes API and access secrets. You may wonder why we can't do all this declaratively with yaml files, as we can with k8s; that way we could commit the vault configuration as code. Honestly, I'm not sure I know the answer to that. I ended up writing shell scripts to set this up in a reproducible way, but it's ugly. Another idea is to roll your own, more declarative process, as illustrated here. In the meantime, we'll further configure vault command-by-command while logged in to the vault pod.

First, let's enable the kubernetes auth method on our cluster. Vault supports several auth methods, of which kubernetes is just one. While still logged in to the vault-0 pod, do this:

$ vault auth enable kubernetes

We also need to configure the auth method:

$ vault write auth/kubernetes/config \
      token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
      kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
      kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
      issuer="https://kubernetes.default.svc.cluster.local"

This sets:

  1. the JWT to the service account token
  2. the API server host
  3. the CA cert
  4. the issuer for the JWT, so that vault can verify the token was issued by our cluster

Now, we need to create a policy, which determines what actions can be taken at the secret path:

$ vault policy write webapp - <<EOF
path "secret/data/webapp/config" {
  capabilities = ["read"]
}
EOF
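
You can read the policy back to confirm it took:

$ vault policy read webapp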

And finally, we create a role which binds the policy to the vault service account:

$ vault write auth/kubernetes/role/webapp \
      bound_service_account_names=vault \
      bound_service_account_namespaces=default \
      policies=webapp \
      ttl=24h
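
As a sanity check, read the role back:

$ vault read auth/kubernetes/role/webapp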

Bind Service Account to App Pod

Our last step is to modify our app pod (via web-deployment.yaml), adding the vault service account. The service account determines which JWT token resides at /var/run/secrets/kubernetes.io/serviceaccount/token inside the pod. By default this would be the default k8s service account's token, but we want it to be the vault service account's. We also need to add some vault-related environment variables to our container. We add this to the pod template's spec (spec.template.spec):

serviceAccountName: vault

and this to the web container's entry under spec.template.spec.containers:

env:
  - name: VAULT_ADDR
    value: 'http://vault:8200'
  - name: JWT_PATH
    value: '/var/run/secrets/kubernetes.io/serviceaccount/token'
  - name: SERVICE_PORT
    value: '8080'

So our final web-deployment.yaml file should look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      serviceAccountName: vault
      containers:
      - name: web
        image: kahunacohen/hello-k8s
        imagePullPolicy: IfNotPresent
        env:
          - name: VAULT_ADDR
            value: 'http://vault:8200'
          - name: JWT_PATH
            value: '/var/run/secrets/kubernetes.io/serviceaccount/token'
          - name: SERVICE_PORT
            value: '8080'
        envFrom:
          - configMapRef:
              name: web-configmap
        ports:
        - containerPort: 3000
          protocol: TCP

Log out of the vault pod with exit (or CONTROL-D).
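
Then, from your host, apply the updated deployment so the pods restart with the vault service account and the new environment variables:

$ kubectl apply -f web-deployment.yaml
$ kubectl rollout status deployment/web-deployment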

Get Secrets from the Client

Our final step is getting the secret from the client code. Most languages have libraries for interacting with vault; for example, nodejs has node-vault or hashi-vault. But I don't think it's worth pulling in a third-party library when it's a simple matter of making HTTP requests to the vault server.

Here's something to get you started in JavaScript. It uses the popular axios HTTP library. Install axios:

$ npm i axios

From the client's point of view, getting secrets is a two-step process:

  1. Get the vault service account JWT token, which we'll use to ask vault for a vault token.
  2. Pass the vault token to the vault API endpoint for our secret path, which returns our secrets.
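
If you want to see this flow in raw HTTP before writing any JavaScript, here's a sketch using curl from inside one of the web pods (it assumes curl and jq are available in the container; the endpoints are vault's standard auth/kubernetes/login and kv-v2 data paths):

# First: kubectl exec -it <one-of-your-web-pods> -- sh
$ JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ VAULT_TOKEN=$(curl -s --request POST \
    --data "{\"jwt\": \"$JWT\", \"role\": \"webapp\"}" \
    "$VAULT_ADDR/v1/auth/kubernetes/login" | jq -r .auth.client_token)
$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/secret/data/webapp/config" | jq .data.data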

I'm also adding a function to do a health check for the vault server.

For now, let's write our vault functions in server.js:

const fs = require("fs");
const axios = require('axios').default;

...
function Vault() {
  const axiosInst = axios.create({baseURL: `${process.env.VAULT_ADDR}/v1`});
  // Hit vault's sys/health endpoint; standbyok=true treats standbys as healthy.
  const getHealth = async () => {
    const resp = await axiosInst.get(`/sys/health?standbyok=true`);
    return resp.data;
  }

  // Read the pod's service account JWT off the file system.
  const getAPIToken = () => {
    return fs.readFileSync(process.env.JWT_PATH, {encoding: "utf-8"});
  }
  // Exchange the service account JWT for a vault token.
  const getVaultAuth = async (role) => {
    const resp = await axiosInst.post("/auth/kubernetes/login", {jwt: getAPIToken(), role});
    return resp.data;
  }
  // Fetch our secrets; the kv-v2 engine nests the payload under data.data.
  const getSecrets = async (vaultToken) => {
    const resp = await axiosInst.get("/secret/data/webapp/config", {headers: {"X-Vault-Token": vaultToken}});
    return resp.data.data.data;
  }
  return {
    getAPIToken,
    getHealth,
    getSecrets,
    getVaultAuth
  }
}

If you were writing these functions "for real," you'd probably not want to get a new vault token on every request. Instead, you'd cache the vault token, use it, and if it has expired, get a new one.
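
Here's a hedged sketch of what that might look like (the wrapper name and the retry-on-403 behavior are my own assumptions, not something our app does yet):

// Hypothetical wrapper around the Vault() object above. It caches the
// vault token and re-authenticates once if vault rejects it.
let cachedToken = null;
async function getSecretsCached(vault, role) {
  if (!cachedToken) {
    cachedToken = (await vault.getVaultAuth(role)).auth.client_token;
  }
  try {
    return await vault.getSecrets(cachedToken);
  } catch (err) {
    // A 403 from vault likely means the token expired: refresh and retry once.
    if (err.response && err.response.status === 403) {
      cachedToken = (await vault.getVaultAuth(role)).auth.client_token;
      return await vault.getSecrets(cachedToken);
    }
    throw err;
  }
}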

In any case, all that's left now is to call this code and get the secrets. Here is the completed server.js, with the beginnings of our own vault JavaScript library and the amended route that now prints our secrets:

const process = require("process");
const express = require("express");
const app = express();
const fs = require("fs");
const axios = require('axios').default;

function Vault() {
  const axiosInst = axios.create({baseURL: `${process.env.VAULT_ADDR}/v1`});
  const getHealth = async () => {
    const resp = await axiosInst.get(`/sys/health?standbyok=true`);
    return resp.data;
  }

  const getAPIToken = () => {
    return fs.readFileSync(process.env.JWT_PATH, {encoding: "utf-8"});
  }
  const getVaultAuth = async (role) => {
    const resp = await axiosInst.post("/auth/kubernetes/login", {jwt: getAPIToken(), role});
    return resp.data;
  }
  const getSecrets = async (vaultToken) => {
    const resp = await axiosInst.get("/secret/data/webapp/config", {headers: {"X-Vault-Token": vaultToken}});
    return resp.data.data.data;
  }
  return {
    getAPIToken,
    getHealth,
    getSecrets,
    getVaultAuth
  }
}

const vault = Vault();
app.get("/", async (req, res) => {
  const vaultAuth = await vault.getVaultAuth("webapp");
  const secrets = await vault.getSecrets(vaultAuth.auth.client_token);
  res.send(
  `<h1>Kubernetes Expressjs Test settings</h1>
  <h2>Non-Secret Configuration Example</h2>
  <p>This uses ConfigMaps as env vars.</p>
  <ul>
    <li>MY_NON_SECRET: "${process.env.MY_NON_SECRET}"</li>
    <li>MY_OTHER_NON_SECRET: "${process.env.MY_OTHER_NON_SECRET}"</li>
  </ul>
  <h2>Secrets</h2>
  <ul>
    <li>username: ${secrets.username}</li>
    <li>password: ${secrets.password}</li>
  </ul>
  `);
});

app.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});

Conclusion

There you have it: a high-availability secret management vault server deployed to a local cluster with relatively minimal fuss. The code can be found here. I created this shell script to easily set up the vault server from scratch. It relies on another script in the same directory. It's not well tested and the sleeps should be replaced by polling code, but you get the idea.

There's way more to explore with vault, including the UI (which you can reach at http://localhost:8200 once the port is forwarded as above), leases, auditing, rotating secrets etc., but I'll leave that for another post.

In our next post we'll explore setting up a database in our cluster.