We are constantly deploying Kubernetes clusters, both during development and in our CIs for end-to-end testing, and each deployment takes a while. Most of the time we don't really need a full Kubernetes deployment and can do with something more lightweight, so we try to optimize deployment time with tools like Minikube and MicroK8s. But there are always people working on faster ways to deploy Kubernetes, and that's what this post is about.
Kind
During the Barcelona KubeCon + CloudNativeCon I was introduced to the Kind project in the Testing SIG Deep Dive session by project maintainers Benjamin Elder and James Munnelly, and even before the presentation was over I couldn't wait to try it for testing CSI plugins.
Kind is a tool for running local Kubernetes clusters using Docker container “nodes” and it’s primarily designed for testing Kubernetes.
The project has been around for less than a year and its initial target is running the Kubernetes conformance tests, but even though its latest release is only v0.3, don't let that fool you: it works very well.
Installation
There are multiple ways to install and use Kind, so I'll focus primarily on how I like to do it, since it sometimes diverges from the documentation's recommendations.
The only requirement to run Kind is to have Docker installed and running on our machine. On CentOS we can install it from the extras repository:
$ sudo yum install -y docker
$ sudo systemctl --now enable docker
Or, if we want a newer version, we can use Docker's own repository:
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install -y docker-ce docker-ce-cli containerd.io
$ sudo systemctl --now enable docker
We can allow the current user to run Docker so we don’t need to run docker commands as root, although there are security concerns. If you decide you are OK with those risks you can just enable it:
$ sudo groupadd docker
$ sudo gpasswd -a $USER docker
$ sudo chown root:docker /var/run/docker.sock
$ newgrp docker
Now we need to install Kind itself, which I usually do manually:
$ sudo curl -Lo /usr/local/bin/kind https://github.com/kubernetes-sigs/kind/releases/download/v0.3.0/kind-linux-amd64
$ sudo chmod +x /usr/local/bin/kind
We could also install it using go, but since the recommended go version is 1.12.5 or greater, and my testing VMs don’t usually have it installed, I prefer the manual procedure. The go command to install it is:
$ GO111MODULE="on" go get sigs.k8s.io/kind@v0.3.0
Now we can confirm that we are ready to run everything:
$ docker ps
$ kind version
Minimal deployment
A minimal Kubernetes deployment contains a single master node that runs the whole control plane and doesn't have the taints that prevent workloads from running on it, so we can run our containers there without needing to add tolerations to our manifests.
This minimal deployment is Kind's default, so we can create it by running:
$ kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.14.2) 🖼
✓ Preparing nodes 📦
✓ Creating kubeadm config 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
The first run will take a little longer, as Kind has to download the node image, which is fairly big (it contains other images inside it).
Once the kind command has finished, we can see that we have just one container running for the control-plane:
$ docker ps -f "label=io.k8s.sigs.kind.cluster"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55c034371a36 kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 6 minutes ago Up 6 minutes 41196/tcp, 127.0.0.1:41196->6443/tcp kind-control-plane
Now that the cluster is up we can start running kubectl commands. The kind create cluster output above recommends doing:
$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
$ kubectl get nodes # This is just an example command
But I personally don't like to do it that way, because then I need to have kubectl installed on the host, and it could be a different version from the one in the deployment. So I usually go with one of two ways:
Running in the container from the host:
$ alias k="docker exec -it kind-control-plane kubectl --kubeconfig /etc/kubernetes/admin.conf"
$ k get nodes # This is just an example command
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 11m v1.14.2
Going into the master container and running it from there:
$ docker exec -it kind-control-plane /bin/bash
$ alias k="kubectl --kubeconfig /etc/kubernetes/admin.conf"
$ k get nodes # This is just an example command
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 11m v1.14.2
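With either alias in place we can also confirm the claim from the beginning of this section: the single master carries no taints, so it will accept workloads. A quick check (the exact spacing of the describe output may vary by kubectl version):
$ k describe node kind-control-plane | grep -i taints
Taints:             <none>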
It is true that using the host's kubectl has the benefit of immediate access to all the files on the host, but we can always pipe the contents into the command using cat or, as we'll see in the next section, share the directory with all our manifests with the container.
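For example, assuming a hypothetical my-manifest.yaml on the host, we can pipe it straight into the container (note the -i instead of -it, since we are feeding stdin rather than attaching a terminal):
$ cat my-manifest.yaml | docker exec -i kind-control-plane kubectl --kubeconfig /etc/kubernetes/admin.conf create -f -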
By default Kind will not wait until the master nodes are ready, but we can use the --wait argument to let Kind know that we want to wait, as well as the maximum time we are willing to wait. For example, to wait 1 minute:
$ kind create cluster --wait 1m
Once we don’t need the cluster anymore we can just delete it:
$ kind delete cluster
Beyond the basics
Let's have a look at some of the other features available in Kind.
Multiple clusters
During development we’ll probably be fine with just one cluster, but we’ll definitely need more than one if we want to use Kind for our CI’s end-to-end testing, and here is where we’ll want to use a different name instead of the default kind name.
Many kind commands support the --name argument to specify a cluster name, and those that accept it will default to kind if it's not provided, so be careful when deleting your clusters (an explicit delete-by-name example is shown at the end of this section).
$ kind create cluster --name my-kind
Creating cluster "my-kind" ...
✓ Ensuring node image (kindest/node:v1.14.2) 🖼
✓ Preparing nodes 📦
✓ Creating kubeadm config 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="my-kind")"
kubectl cluster-info
$ kind get clusters
my-kind
kind
$ kind get nodes
kind-control-plane
$ kind get nodes --name my-kind
my-kind-control-plane
$ docker ps -f "label=io.k8s.sigs.kind.cluster"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67521e5f1cb1 kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 6 minutes ago Up 6 minutes 41350/tcp, 127.0.0.1:41350->6443/tcp my-kind-control-plane
55c034371a36 kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 41 minutes ago Up 40 minutes 41196/tcp, 127.0.0.1:41196->6443/tcp kind-control-plane
$ docker ps -f "label=io.k8s.sigs.kind.cluster=my-kind"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67521e5f1cb1 kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 7 minutes ago Up 7 minutes 41350/tcp, 127.0.0.1:41350->6443/tcp my-kind-control-plane
$ alias k="docker exec -it my-kind-control-plane kubectl --kubeconfig /etc/kubernetes/admin.conf"
$ k get node
NAME STATUS ROLES AGE VERSION
my-kind-control-plane Ready master 10m v1.14.2
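And when we are done with this cluster we delete it by name; a plain kind delete cluster would target the default kind cluster instead:
$ kind delete cluster --name my-kind
Deleting cluster "my-kind" ...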
Multi-node
The amount of testing we can do with a single Kubernetes node is very limited, which is why Kind supports deploying multiple nodes, be they control-plane or worker nodes.
When we deploy worker nodes, the master nodes will taint themselves so that workloads are not scheduled on them, and only the worker nodes will accept the creation of pods by default.
To define the nodes we'll need to create a YAML file with our configuration. For example, for 2 master nodes and 3 worker nodes we can create a file called kind.yaml with the following contents:
# Contents of kind.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
And then use the --config parameter when creating the cluster:
$ kind create cluster --name multinode --config kind.yaml
Creating cluster "multinode" ...
✓ Ensuring node image (kindest/node:v1.14.2) 🖼
✓ Preparing nodes 📦📦📦📦📦📦
✓ Configuring the external load balancer ⚖️
✓ Creating kubeadm config 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining more control-plane nodes 🎮
✓ Joining worker nodes 🚜
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="multinode")"
kubectl cluster-info
$ kind get nodes --name multinode
multinode-external-load-balancer
multinode-control-plane2
multinode-control-plane
multinode-worker
multinode-worker3
multinode-worker2
$ docker ps -f "label=io.k8s.sigs.kind.cluster=multinode"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a176fcec23fa nginx:1.15.12-alpine "nginx -g 'daemon ..." 3 minutes ago Up 3 minutes 80/tcp, 43651/tcp, 127.0.0.1:43651->6443/tcp multinode-external-load-balancer
9290b31c721c kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 5 minutes ago Up 3 minutes 35622/tcp, 127.0.0.1:35622->6443/tcp multinode-control-plane2
153e866bc016 kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 5 minutes ago Up 3 minutes 38661/tcp, 127.0.0.1:38661->6443/tcp multinode-control-plane
8dd4279200d2 kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 5 minutes ago Up 3 minutes multinode-worker
0dfeb5a2b548 kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 5 minutes ago Up 3 minutes multinode-worker3
a9c6308108e4 kindest/node:v1.14.2@sha256:33539d830a6cf20e3e0a75d0c46a4e94730d78c7375435e6b49833d81448c319 "/usr/local/bin/en..." 5 minutes ago Up 3 minutes multinode-worker2
$ alias k="docker exec -it multinode-control-plane kubectl --kubeconfig /etc/kubernetes/admin.conf"
$ k get nodes
NAME STATUS ROLES AGE VERSION
multinode-control-plane Ready master 5m21s v1.14.2
multinode-control-plane2 Ready master 4m29s v1.14.2
multinode-worker Ready <none> 3m22s v1.14.2
multinode-worker2 Ready <none> 3m22s v1.14.2
multinode-worker3 Ready <none> 3m22s v1.14.2
As we can see from the output, when Kind creates multiple master nodes it will automatically deploy an external load balancer in front of them.
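We can also confirm the taint behavior mentioned earlier: with workers present, the masters keep their NoSchedule taint while the workers have none. A quick check (the exact output spacing may vary by version):
$ k describe node multinode-control-plane | grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
$ k describe node multinode-worker | grep -i taints
Taints:             <none>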
Once we have created the cluster we don't need to reference the configuration file in any other command, not even when deleting the cluster:
$ kind delete cluster --name multinode
Deleting cluster "multinode" ...
Sharing directories
Just like with any other Docker container, Kind can share directories from the host into its containers on a per-node basis using the configuration file.
A convenient way to do it is to share the directory with our manifests into the control-plane node so that the relative path to our manifests is the same on the host as in the container, taking into account that the initial directory in the container is /. For example, if we were in /home/vagrant and our manifests were under /home/vagrant/shared, then we could create a Kubernetes cluster with 1 worker and 1 master that has our manifests under /shared with the following kind.yaml configuration file:
# Contents of kind.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
extraMounts:
- containerPath: /shared
hostPath: /home/vagrant/shared
propagation: Bidirectional
- role: worker
Then we can create the cluster and easily use our manifests from the host:
$ kind create cluster --name extramounts --config kind.yaml
Creating cluster "extramounts" ...
✓ Ensuring node image (kindest/node:v1.14.2) 🖼
✓ Preparing nodes 📦📦
✓ Creating kubeadm config 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="extramounts")"
kubectl cluster-info
$ alias k="docker exec -it extramounts-control-plane kubectl --kubeconfig /etc/kubernetes/admin.conf"
$ k create -f shared/manifest.yaml # shared/manifest.yaml is under /shared/manifest.yaml inside the container
serviceaccount/csi-rbd created
clusterrole.rbac.authorization.k8s.io/csi-rbd created
clusterrolebinding.rbac.authorization.k8s.io/csi-rbd created
Customizing deployments
There are times when we'll want to change how Kind deploys the Kubernetes cluster, for example to enable feature gates, which requires changing how the API server, Scheduler, and Controller Manager services are started.
Kind uses kubeadm for the deployment, and exposes kubeadm's configuration file via the kubeadmConfigPatches and kubeadmConfigPatchesJson6902 keys.
Since it can be tricky to go through the Kind types, API docs, kubeadm's types, control-plane flags, API server options, Scheduler options, Controller Manager options, and Kubelet types, here is an example of how we can enable multiple feature gates with changes to the API server, Scheduler, Controller Manager, and Kubelet:
# Contents of kind.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
metadata:
name: config
apiServer:
extraArgs:
feature-gates: "BlockVolume=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,CSINodeInfo=true,CSIDriverRegistry=true,VolumeScheduling=true"
scheduler:
extraArgs:
feature-gates: "BlockVolume=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,CSINodeInfo=true,CSIDriverRegistry=true,VolumeScheduling=true"
controllerManager:
extraArgs:
feature-gates: "BlockVolume=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,CSINodeInfo=true,CSIDriverRegistry=true,VolumeScheduling=true"
- |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
name: config
featureGates:
CSIBlockVolume: true
VolumeSnapshotDataSource: true
CSINodeInfo: true
KubeletPluginsWatcher: true
VolumeScheduling: true
BlockVolume: true
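After creating a cluster with this file (kind create cluster --config kind.yaml), we can sanity-check that the flags actually reached the control plane. Assuming the usual kubeadm labels on the static pods, something like the following should show the feature-gates argument in the API server's command line:
$ alias k="docker exec -it kind-control-plane kubectl --kubeconfig /etc/kubernetes/admin.conf"
$ k -n kube-system get pods -l component=kube-apiserver -o yaml | grep feature-gates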
Loading images
You will probably want to test your own local images on this Kubernetes deployment, and Kind makes that easy with the kind load command, which can load Docker images from our local image store as well as .tar image archives.
This allows us to have a simple flow: build the image, upload it to the cluster, and deploy it (a sketch of the full flow follows the load examples below). Here are two examples of uploading an image to the default kind cluster:
$ kind load docker-image my-image:tag
$ kind load image-archive my-image.tar
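Putting it together, a typical iteration could look like this (a sketch; my-image:dev, the Dockerfile location, and my-deployment.yaml are all hypothetical placeholders):
$ docker build -t my-image:dev .
$ kind load docker-image my-image:dev
$ cat my-deployment.yaml | docker exec -i kind-control-plane kubectl --kubeconfig /etc/kubernetes/admin.conf create -f -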
Be careful when using the latest tag, as it sets the default pull policy to Always, as described in the Kubernetes imagePullPolicy documentation.
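To keep the kubelet from trying to pull a locally loaded image from a registry, we can set the policy explicitly in the pod spec. A minimal, hypothetical example (my-pod, my-container, and my-image are placeholders):
$ cat <<EOF | docker exec -i kind-control-plane kubectl --kubeconfig /etc/kubernetes/admin.conf create -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image:latest
    imagePullPolicy: IfNotPresent
EOF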
More
These are not all the features available in Kind; they are just the ones I used during my testing.
Other features include:
- Exporting logs: Necessary for CI systems (see the example after this list).
- Using a proxy: If your environment needs a proxy.
- Building the base image: To update your Kubernetes cluster to the latest code in master or to add auxiliary container images to the Kind image.
- Using Private registries: Because unfortunately not everything is Open Source. 😉
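For example, dumping the logs of a named cluster into a local directory for a CI job to archive (a hedged example; ./kind-logs is a placeholder, and this assumes export logs accepts the output directory as its positional argument):
$ kind export logs ./kind-logs --name my-kind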
Caveats
I must admit that during my research into using Kind for Ember-CSI development and testing I only found one issue, though it was a real deal breaker for me.
Since Kind runs systemd inside the Docker container, it requires /sys to be mounted read-only, as explained in this Container Interface article.
It also appears to have problems sharing /dev between the host and the container.
Unfortunately, Ember-CSI testing really needs both of these to be read-write and mapped to the actual host.
Conclusion
Kind is an amazing lightweight deployment tool that can speed up your application development and CI jobs. If you are doing user-space application development you should definitely check it out, but if you are developing infrastructure applications that need read-write access to /sys and /dev, then Kind is not for you.