Build a Raspberry Pi Kubernetes Cluster



Over the weekend, I built myself a Raspberry Pi Kubernetes cluster. My setup consisted of two Raspberry Pi 4s and two Raspberry Pi 3s mounted on a rack I found on Amazon. I used Rancher K3s to create the Kubernetes cluster. K3s is a lightweight Kubernetes distribution designed for low-resource devices such as the Raspberry Pi. Additionally, I configured MetalLB as the load balancer and Nginx as the ingress proxy. Here are the steps I took to get it running.

1. Flash Ubuntu 20.04 LTS Server

Using the Raspberry Pi Imager tool, I flashed Ubuntu 20.04 LTS Server (64-bit) onto each of the SD cards.

Raspberry Pi Image Tool

After the tool flashed Ubuntu onto each SD card, I removed the card and re-inserted it into my computer. A boot drive should then be accessible from the card.

Inside the drive, I created a blank file called ssh. The purpose of this file is to enable SSH on first boot, so I could do a headless setup without connecting a monitor and keyboard. I connected all my Pis to a network switch via Ethernet, so I did not bother configuring WiFi access.

SSH File Ubuntu Raspberry Pi
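
If your computer mounts the boot partition automatically, creating the file is a one-liner. This is just a sketch; the mount point below (/media/$USER/system-boot) is an assumption and will differ by OS, so adjust the path to wherever the boot drive shows up:

# Create an empty "ssh" file on the boot partition to enable SSH on first boot.
# The mount point here is an assumption; substitute the path where the card mounted.
touch /media/$USER/system-boot/ssh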

2. Change Hostname

Once the cards were ready, I inserted them into each Pi, connected the Ethernet cables, and powered them on. Using my router's admin console, I located the IP address of each Pi and SSHed into it using PuTTY.

Putty SSH

After changing the default password, the next thing to do was change the hostname. In the past, I ran into issues when every Pi had the same hostname, and unique names make them easier to identify. Enter the following command:

sudo hostnamectl set-hostname mycustomName

You will not see any output from this command. Next, I edited the /etc/hosts file to add the new hostname:

sudo nano /etc/hosts

Then I modified the file to add my new hostname:

127.0.0.1 localhost
127.0.0.1 mycustomName #<-- Add this line to your hosts file

Save and close the file.

Finally, I had to edit the cloud.cfg file, since this setup uses the cloud-init package. If the file /etc/cloud/cloud.cfg exists, then your setup is also using cloud-init. Edit /etc/cloud/cloud.cfg and change preserve_hostname from false to true; otherwise, cloud-init will reset the hostname on the next boot.

sudo nano /etc/cloud/cloud.cfg

Change the preserve_hostname setting:

# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: true

Save and close the file.

Now to verify it worked, I typed in hostnamectl. The new hostname should appear:

   Static hostname: mycustomName
         Icon name: computer
        Machine ID: 911980eefb464a5fbc0e6c7f9ae0c027
           Boot ID: 22a307d40a8a48e68defa7cf88921e47
  Operating System: Ubuntu 20.04 LTS
            Kernel: Linux 5.4.0-1012-raspi
      Architecture: arm64

At this point, I like to log out and log back into the session. I repeated this process on each of my Raspberry Pis.
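
If you would rather not repeat the nano edits by hand on every Pi, the same changes can be scripted. This is a minimal sketch of the steps above as non-interactive commands, assuming the default Ubuntu cloud.cfg contents; run it on each Pi with that Pi's hostname substituted in:

# Set the hostname, append it to /etc/hosts, and flip preserve_hostname in cloud.cfg
sudo hostnamectl set-hostname mycustomName
echo "127.0.0.1 mycustomName" | sudo tee -a /etc/hosts
sudo sed -i 's/^preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg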

3. Change DNS Server

I plan to eventually install Pi-hole in this cluster. Since Pi-hole will control DNS and will be running on one of the nodes, it makes sense to point each Pi at a public DNS server rather than using my router's gateway as the DNS resolver.

To change the DNS settings, I had to edit the netplan configuration. Looking in the /etc/netplan folder, I saw a file named 50-cloud-init.yaml.

Annoyingly, Ubuntu defaults to using the cloud-init network configuration. Let's change that. First, I disabled cloud-init's network configuration with the following command:

echo "network: {config: disabled}" | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

This creates a file called 99-disable-network-config.cfg with the text network: {config: disabled}.

Then I backed up the 50-cloud-init.yaml to a different file.

cd /etc/netplan
sudo mv 50-cloud-init.yaml 50-cloud-init.yaml.bak

Then I created a new file called network.yaml. I added the following to the file:

network:
    renderer: networkd
    ethernets:
        eth0:
            dhcp4: true
            optional: true
            nameservers:
              addresses: [8.8.8.8, 8.8.4.4]
    version: 2

This changes the renderer to networkd. It also sets the DNS server to Google’s public DNS.

Save and close the file.

Now enter the following to apply the changes:

sudo netplan apply

If you don’t lose connection to your SSH terminal (hopefully), then everything worked. Enter the following to verify the change in DNS servers:

systemd-resolve --status

You should see:

  Current DNS Server: 8.8.8.8
         DNS Servers: 8.8.8.8
                      8.8.4.4
                      192.168.86.1
          DNS Domain: lan

4. Install the K3s Master Node

Next, it was time to install K3s on my master node. I selected the Raspberry Pi with the most RAM to be the master node. Setting up K3s is extremely simple: you just download and run a script, and the script can be customized with environment variables. I am going to set up K3s without Traefik or servicelb, because I want to replace Traefik with an Nginx ingress proxy and servicelb with the MetalLB load balancer. Those are closer to the industry standard, and I just understand them better than the other two.

Let's first set an environment variable so the installer writes the kubeconfig with read permission for all users, which lets us run kubectl commands without root:

export K3S_KUBECONFIG_MODE="644"

Now, let’s set the installation options as an environment variable:

export INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik"

It’s time to actually install K3s. Enter the following:

curl -sfL https://get.k3s.io | sh -

This will install K3s and kubectl on your Pi.
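
As a side note, the environment variables only need to be visible to the install script, so the same thing can be written as a single line. This is equivalent to the exports above, not a separate configuration:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--no-deploy servicelb --no-deploy traefik" sh -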

After a minute, you can verify it’s running by checking its status:

sudo systemctl status k3s

You should see it as running.

● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2020-06-23 00:12:38 UTC; 14h ago
       Docs: https://k3s.io
    Process: 1219 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 1229 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1230 (k3s-server)
      Tasks: 122
     Memory: 1.2G
     CGroup: /system.slice/k3s.service

To make sure everything is working, enter the following command:

kubectl get nodes -o wide

It should list your master node.

No Dave, I get this error: Error from server (ServiceUnavailable): the server is currently unable to handle the request

I ran into this issue and was banging my head against the wall trying to figure it out. It turns out to be a known issue with K3s on Ubuntu: the cgroups that K3s needs are not enabled by default in the Raspberry Pi kernel. You can research it further if you like, but the simple fix is to modify a boot file.

Edit the following file: /boot/firmware/cmdline.txt

You will see one long line in that file. You want to add the following to that line: cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1. I added it right after the serial console portion, so for me the file has the following contents:

net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc

Save the file and reboot your Pi. Unfortunately, you will need to perform this step on each of your Pis.
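
Since this edit has to be repeated on every Pi, a non-interactive version can save some typing. This is just a sketch: it appends the flags to the end of the single line in cmdline.txt (the kernel does not care where on the line they appear), and it assumes the flags are not already present, since running it twice would duplicate them:

# Append the cgroup flags to the end of the kernel command line, then reboot
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
sudo reboot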

5. Set Up the Kubernetes Worker Nodes

The next thing to do was to set up the K3s worker nodes. This is pretty simple. First, on your master node, enter the following:

sudo cat /var/lib/rancher/k3s/server/node-token

This will print out the access token you will need to use. Make note of this.

Now on each of your Pis (except the master), use the following commands to setup the worker node:

export K3S_KUBECONFIG_MODE="644"
export K3S_URL="https://192.168.86.157:6443" #This is the IP of your Master node! It might be different!
export K3S_TOKEN="K106edce2a..." #This is the token you copied from above

Finally, download and run the install script:

curl -sfL https://get.k3s.io | sh -

It will install just as before, except this time it runs in agent (worker) mode since we specified a server URL and token in the environment variables. Now let's verify that it's running:

sudo systemctl status k3s-agent

It should show as running. On your master node, use the following command to confirm that the worker node has been added:

kubectl get nodes -o wide

The new node will be visible.

6. Install Helm

At this point, my Kubernetes cluster is all set up, with a total of four nodes connected: one master node and three worker nodes. Now, let's set up the Nginx and MetalLB that I mentioned earlier; without them, the cluster will be pretty much useless to me.

Helm is a package manager for Kubernetes. It makes installing workloads and deployments very easy. We will use Helm to deploy Nginx and MetalLB. On your master node, enter the following to download and install Helm:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Helm should now be installed.
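
One thing to be aware of: Helm 3 does not come with any chart repositories configured, and the next two steps install charts from the stable repository. If helm repo list doesn't show it, add it first. The URL below is the current home of the stable charts; verify it against the Helm docs if it has moved:

# Confirm the install, then register the stable chart repository
helm version
helm repo add stable https://charts.helm.sh/stable
helm repo update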

7. Install MetalLB

Let's go ahead and install MetalLB. Enter the following, but change the address range to a free range of addresses on your own network:

helm install metallb stable/metallb --namespace kube-system \
  --set configInline.address-pools[0].name=default \
  --set configInline.address-pools[0].protocol=layer2 \
  --set configInline.address-pools[0].addresses[0]=192.168.86.240-192.168.86.250

After a few moments, you can check to see if the services are running:

kubectl get pods -n kube-system -l app=metallb -o wide

The pods should be running:

NAME                                  READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
metallb-speaker-g7dnt                 1/1     Running   0          13h   192.168.86.157   jupiter   <none>           <none>
metallb-speaker-qnf42                 1/1     Running   0          13h   192.168.86.158   mars      <none>           <none>
metallb-controller-6655c976c5-r8j69   1/1     Running   0          13h   10.42.1.2        saturn    <none>           <none>
metallb-speaker-6svgp                 1/1     Running   0          13h   192.168.86.156   saturn    <none>           <none>
metallb-speaker-jbmbq                 1/1     Running   0          13h   192.168.86.159   mercury   <none>           <none>
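
If you want to confirm MetalLB is actually handing out addresses before moving on, a quick optional check is to expose a throwaway deployment as a LoadBalancer service and see that it gets an IP from the pool. This is just a sketch; lb-test is a hypothetical name, and the nginx image is used because it has ARM builds:

# Create a test deployment, expose it, and check the assigned external IP
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get service lb-test
# Clean up afterwards
kubectl delete service,deployment lb-test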

8. Install Nginx

Next, let's install Nginx. Enter the following:

helm install nginx-ingress stable/nginx-ingress --namespace kube-system \
    --set controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm \
    --set controller.image.tag=0.32.0 \
    --set controller.image.runAsUser=101 \
    --set defaultBackend.enabled=false

After a couple of moments, check to see if the service is available:

kubectl get services  -n kube-system -l app=nginx-ingress -o wide

You should see something like the following:

NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE   SELECTOR
nginx-ingress-controller   LoadBalancer   10.43.52.39   192.168.86.240   80:31638/TCP,443:32205/TCP   12h   app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress

Open a browser to http://192.168.86.240. If you see an empty Nginx 404 page, then everything is working.
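
The 404 page just means the ingress controller is answering but has no routes yet. To actually route traffic to a workload, you create an Ingress resource. Here is a hypothetical sketch, not something deployed in this post: it assumes a Service named demo listening on port 80 already exists in the default namespace, and that demo.local resolves to the load balancer IP (for example via an /etc/hosts entry):

# Hypothetical Ingress routing demo.local through the Nginx ingress controller.
# (On Kubernetes 1.19+ the networking.k8s.io/v1 Ingress schema is preferred.)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: demo.local
    http:
      paths:
      - path: /
        backend:
          serviceName: demo
          servicePort: 80
EOF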

That’s it

At this point, you should have what I have: a Raspberry Pi Kubernetes cluster running K3s with Nginx and MetalLB. From here, you are ready to start deploying workloads and services. My next step will be to set up an NFS server and an NFS provisioner for storage.

Bonus: Setting Up an NFS Provisioner

I didn't feel like making a separate post for this, so I'm just going to document it here. I configured an NFS provisioner for my Kubernetes cluster. I grabbed an extra Pi (I have so many of them) and set up an NFS server on it. This Pi is not part of my cluster (not a worker node) and is running plain 32-bit Raspbian. There was also no need to change its DNS from the default like I did with the other Pis.

There are lots of tutorials on how to set up an NFS server on the Raspberry Pi (it's very easy: just install a couple of packages and edit a conf file), so I'm not going to go over that here.

There is one little gotcha in the process: you need to make sure that you install the NFS client tools on all of the nodes in your cluster (master and workers). If you don't, you will run into issues when a pod that requires a PVC tries to deploy. On every node, install the following:

sudo apt-get install nfs-common -y

Now, on your master node, install the nfs-client-provisioner Helm chart. /mnt/nfsshare is the path my NFS server is exporting (yours might be different), and 192.168.86.160 is the IP address of my NFS server.
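
One assumption worth calling out: the command below installs into an nfs namespace, and Helm will not create that namespace on its own. If it doesn't exist yet, create it first:

kubectl create namespace nfs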

helm install nfs-provisioner --set nfs.server=192.168.86.160 --set nfs.path=/mnt/nfsshare --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm stable/nfs-client-provisioner --namespace nfs

Wait a few moments, then check that the pod is running in the nfs namespace with kubectl get pods -n nfs:

NAME                                                      READY   STATUS    RESTARTS   AGE
nfs-provisioner-nfs-client-provisioner-6dd6895fd4-bt8kg   1/1     Running   0          12d

Now create a test PVC. It's important that any PVC you deploy sets the storage class to "nfs-client" so that Kubernetes will use the NFS provisioner.

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Mi
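
Save the manifest to a file and apply it; the filename test-claim.yaml below is just my assumption for whatever you name it:

kubectl apply -f test-claim.yaml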

Then run:

kubectl get pvc 

You should see:

NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-3b163a2c-5d36-4de9-b288-8f6ddd285304   5Mi        RWX            nfs-client     12d

That’s it! Now you are all set up and ready to deploy workloads with storage capabilities.
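
As a final sanity check, you can mount the test claim into a throwaway pod and write to it; the file should land on the NFS share. This is a hypothetical sketch using the busybox image (which has ARM builds) and the test-claim PVC from above:

# Hypothetical pod that mounts test-claim and writes a file to it
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "echo hello from kubernetes > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-claim
EOF
# Then look for hello.txt under the exported directory on the NFS server
# (the provisioner creates a subdirectory per volume), and clean up with:
# kubectl delete pod nfs-test-pod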

