Ubuntu 20.04 Raspberry Pi Cluster

I thought I’d have some fun in December, setting up a Raspberry Pi cluster for running Kubernetes.

My goal was to automate as much of the process of setting up each node of the cluster as possible. Most of the blog posts I’d read on the subject require many manual steps to be repeated on each node, and since I value repeatable processes I enjoyed the challenge of figuring it out.

I’m assuming a certain level of knowledge for readers of this post, so I’m not spelling out every step needed.

There are many posts on the internet which do a much better job of explaining things; for example, check out this post Make Ubuntu server 20.04 boot from an SSD on Raspberry Pi 4 by Zakaria Smahi.


First off, the inventory of components I am using:


Since the Raspberry Pi 4 supports booting off an external drive via USB, I only purchased one SD Card, which will be needed to boot each Pi in order to enable booting from USB.

Since I’m planning to run Kubernetes on my cluster, it is recommended to run off solid-state drives: Kubernetes is disk-heavy, and the performance and lifetime of SSDs are considerably better than those of an SD card.

EDIT: I had to update the firmware of the SSD enclosures to solve a slow boot issue. The updated firmware can be found on the Sabrent website by searching for the EC-UASP model. I had to use a Windows computer to perform the firmware update.

Disk Images


I followed these steps to setup my cluster from my Ubuntu laptop.

Enabling USB Boot

Booting the Raspberry Pi off USB isn’t enabled by default; enabling it requires changing the boot order to first attempt USB, followed by the SD card.

Flash the SD card with the Raspberry Pi OS Lite operating system. This article explains how to install the Raspberry Pi operating system image on an SD card.

SSH needs to be enabled so headless installation is possible. After flashing the SD card, mount it and create an empty file called ssh in the “boot” partition. Unmount and eject the SD card when done.
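That step can be scripted. A minimal sketch follows; the `BOOT` path is an assumption, so point it at wherever your desktop mounts the SD card’s boot partition (for me that would be something like `/media/$USER/boot`):

```shell
#!/bin/sh
# Sketch: create the empty 'ssh' file whose presence enables the SSH
# server on first boot. BOOT is an assumption -- set it to the SD card's
# boot partition mount point; it falls back to a temp dir for illustration.
BOOT="${BOOT:-$(mktemp -d)}"
touch "$BOOT/ssh"
echo "created $BOOT/ssh"
```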

Insert the SD card into the first Raspberry Pi node and switch it on.

Figure out its IP address and connect via SSH from your PC. I used nmap to figure out the IP address; given my home network is in the 192.168.0.x range, I limited the search to addresses between 1 and 254.

$ nmap 192.168.0.1-254

Starting Nmap 7.80 ( https://nmap.org ) at 2020-12-22 17:21 GMT


Nmap scan report for raspberrypi (
Host is up (0.0094s latency).
Not shown: 997 closed ports
22/tcp  open  ssh


In my case, the IP was and the default username is pi and password is raspberry.

$ ssh pi@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:XxXxXXXxxx/ZZzyyyzZZZxxxXXxyyYYYZZzZzZZxxYy.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.
pi@'s password:
Linux raspberrypi 5.10.0-v7l+ #1382 SMP Tue Dec 15 18:23:34 GMT 2020 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Dec 22 18:57:21 2020

SSH is enabled and the default password for the 'pi' user has not been changed.
This is a security risk - please login as the 'pi' user and type 'passwd' to set a new password.

Wi-Fi is currently blocked by rfkill.
Use raspi-config to set the country before use.

pi@raspberrypi:~ $

Once you have an SSH terminal onto the first Raspberry Pi node you can configure the boot order.

See Raspberry Pi 4 bootloader configuration for how it is done.

Or use the raspi-config utility to configure via the console user-interface.

sudo raspi-config
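Whichever route you take, the setting being changed is the bootloader’s BOOT_ORDER. The fragment below (edited on the Pi with `sudo rpi-eeprom-config --edit`) tries USB mass storage first, then falls back to the SD card; the nibbles are read right to left, with 4 meaning USB, 1 meaning SD card, and f meaning restart:

```
BOOT_ORDER=0xf14
```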

Power off the node by running sudo poweroff, remove the SD card, and repeat these steps for each of the nodes.

Provisioning Ubuntu

The following configuration files and scripts are required to provision the SSD drive for each Raspberry Pi node.

Create a directory on your PC and add each file with the following content, or clone this gist.

Make sure you have downloaded the Ubuntu disk image file from the requirements above and placed it in the same directory as the provision script, so your file layout looks like:

$ tree
├── 999_decompress_rpi_kernel
├── auto_decompress_kernel
├── network-config
├── provision
├── ubuntu-20.04.1-preinstalled-server-arm64+raspi.img.xz
├── user-data
└── usercfg.txt

0 directories, 7 files
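For orientation, network-config, user-data and usercfg.txt are cloud-init files that Ubuntu picks up on first boot. As a rough sketch only (the gist is authoritative and may differ), the part of user-data that creates the k8s login user used later might look something like:

```yaml
#cloud-config
# Sketch only -- the real user-data in the gist may differ.
hostname: rpi-k8s-server-001   # substituted per node by the provision script (assumption)
users:
  - name: k8s
    groups: sudo
    shell: /bin/bash
    lock_passwd: false
```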


After inserting the SSD drive, use lsblk to figure out its block device name. The provision script will flash the SSD with the Ubuntu image, so you must be absolutely sure you have the correct device name.

For example, on my computer it is sdb, but it might be different for you!

$ lsblk

loop0                    7:0    0 146.6M  1 loop  /snap/code/51
loop1                    7:1    0 143.8M  1 loop  /snap/code/52
sda                      8:0    0 238.5G  0 disk
├─sda1                   8:1    0   487M  0 part  /boot
├─sda2                   8:2    0     1K  0 part
└─sda5                   8:5    0   238G  0 part
  └─sda5_crypt         253:0    0   238G  0 crypt
    ├─ubuntu--vg-root  253:1    0   230G  0 lvm   /
sdb                     11:0    0 128.0G  0 disk
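Since flashing the wrong device is unrecoverable, a small guard function can double-check the target before you run provision. This is a hypothetical helper, not part of the gist:

```shell
#!/bin/sh
# Hypothetical guard: only accept an unmounted whole disk as a flash target.
check_target() {
  dev="$1"
  type=$(lsblk -ndo TYPE "/dev/$dev" 2>/dev/null)
  if [ "$type" != "disk" ]; then
    echo "refusing: /dev/$dev is not a whole disk" >&2
    return 1
  fi
  if lsblk -no MOUNTPOINT "/dev/$dev" 2>/dev/null | grep -q .; then
    echo "refusing: /dev/$dev has mounted partitions" >&2
    return 1
  fi
  echo "/dev/$dev looks safe to flash"
}
```

Called as `check_target sdb`, it prints a confirmation only when /dev/sdb is a whole disk with nothing mounted; anything else (a partition, a missing device, your root disk with mounted filesystems) is refused.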

The provision script requires the block device name, the hostname for the node, and the IP suffix.

I am going to have a 4 node Raspberry Pi cluster running Kubernetes, so I settled on a host naming convention of rpi-k8s-<role>-<number> for my nodes, where <role> is the role of the node (either “server” or “agent”) and <number> is the instance number, running from 001 to 999.
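The convention is mechanical enough to generate; a tiny helper (hypothetical, just mirroring the scheme above) makes the zero-padding explicit:

```shell
#!/bin/sh
# Hypothetical helper for the rpi-k8s-<role>-<number> naming scheme.
node_name() {
  printf 'rpi-k8s-%s-%03d\n' "$1" "$2"   # role, instance number zero-padded to 3 digits
}

node_name server 1   # rpi-k8s-server-001
node_name agent 12   # rpi-k8s-agent-012
```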

I decided to have my server node have the IP, followed by, and for each agent node.

Note: The Ubuntu image filename and password can be overridden by setting the IMAGE and PASSWORD environment variables before running the provision script.


export IMAGE=ubuntu-20.04.1-preinstalled-server-arm64+raspi.img.xz
export PASSWORD=p@ssw0rD

Run the provision script with the following arguments for the server node:

./provision sdb rpi-k8s-server-001 100

I intend to run an HA server in future, so having the number 001 for the first server node makes it consistent for when I add the second server with 002.

And then for each agent node (inserting and ejecting each respective SSD in between):

./provision sdb rpi-k8s-agent-001 101
./provision sdb rpi-k8s-agent-002 102
./provision sdb rpi-k8s-agent-003 103

Finally, connect the SSD drives to each Raspberry Pi node and power them on.

Each node will automatically provision itself and after some time, you will be able to SSH onto them using the k8s user.


ssh k8s@
ssh k8s@
ssh k8s@
ssh k8s@

Next, I’ll write a post on installing Kubernetes on the cluster.