K3s High Availability (HA) Cluster on the Grid with Tailscale
Table of Contents
- Introduction
- The ThreeFold Grid
- K3s, Tailscale and HA
- K3s Cluster with Tailscale
- Set Up the Nodes
- Set Up Tailscale
- Set Up the Cluster
- Deploy Nginx
- Cleaning Up
- Optimizing the Deployment
- Conclusion
Introduction
In this short guide, I want to share some information on how someone could deploy an HA (high availability) cluster on the ThreeFold Grid using nothing but open-source projects and code.
This is strictly for educational purposes, and you should make sure to properly investigate and research HA before doing anything at the production level.
While this guide is written for the ThreeFold Grid, the cluster doesn’t need to be deployed on the grid; you could also deploy the HA cluster on local nodes.
The ThreeFold Grid
The ThreeFold Grid is an extremely accessible platform to deploy workloads across the world. The Dashboard contains one-click deployments such as WordPress, Nextcloud, Full VM and more.
To learn more about the ThreeFold Grid, read the manual: https://manual.grid.tf.
K3s, Tailscale and HA
K3s is a lightweight Kubernetes distribution optimized for resource-constrained environments. It ships as a single binary and, by default, uses SQLite instead of etcd, maintaining compatibility with standard Kubernetes while reducing overhead; for multi-server HA setups like the one in this guide, it can instead run with an embedded etcd datastore.
Tailscale is a zero-config VPN using WireGuard. Its mesh network architecture allows direct device connections, simplifying network management and providing secure access regardless of location. It integrates with existing identity providers and supports various platforms.
High availability (HA) refers to a system designed to minimize downtime and maintain continuous operation even in the event of component failures. This is achieved through redundant components and failover mechanisms that automatically switch to backup systems when primary ones become unavailable. Techniques used to achieve HA include redundant servers, clustered databases, load balancers, and geographically diverse data centers. Implementing HA requires careful planning, monitoring, and testing to ensure the system can seamlessly handle failures and maintain service continuity.
K3s Cluster with Tailscale
As a basic proof of concept, I provide some basic scripts available on GitHub. With those scripts, you can deploy an HA cluster with any number of control plane and worker nodes.
Let’s take an example with a cluster of 3 control plane nodes and 3 worker nodes. The overall steps are simple:
- Use a local Ubuntu computer for the deployment
  - Docker can be used if you don’t have a local Ubuntu computer
- Deploy 6 full VMs on the TFGrid (Ubuntu 24.04)
- Set up Tailscale on the 6 VMs
- Run the HA cluster deployment script
- Run the Nginx app script to test the basic functionalities
Set Up the Nodes
You can deploy the 6 full VMs on the grid using the Dashboard, Pulumi, Terraform, or any other method that deploys on the grid.
For more information, consult the manual.
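Once the VMs are up, it’s worth confirming that each one is reachable over SSH before moving on. Below is a minimal sketch of such a check, assuming root SSH access; the IP addresses are placeholders that you would replace with the addresses of your own VMs.

#!/bin/bash
# Quick reachability check for the 6 freshly deployed VMs.
# Replace the placeholder addresses with the IP addresses of your own VMs.
VM_IPS=("203.0.113.10" "203.0.113.11" "203.0.113.12" "203.0.113.13" "203.0.113.14" "203.0.113.15")
for ip in "${VM_IPS[@]}"; do
  # BatchMode fails fast instead of prompting for a password
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@${ip}" hostname >/dev/null 2>&1; then
    echo "OK: ${ip} is reachable over SSH"
  else
    echo "FAIL: could not reach ${ip}"
  fi
done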
Set Up Tailscale
You can set up Tailscale manually on all nodes, or you can use the Tailscale script available here: https://github.com/mik-tf/tscluster.
Note that this script is made for Ubuntu.
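If you prefer to set up Tailscale by hand rather than with the script, the manual steps on each Ubuntu VM look roughly like the sketch below. It assumes you have generated an auth key in the Tailscale admin console; the key shown is a placeholder.

# Run on each of the 6 VMs.
# Install Tailscale with the official install script.
curl -fsSL https://tailscale.com/install.sh | sh
# Bring the node onto your tailnet with a pre-generated auth key (placeholder).
sudo tailscale up --authkey=tskey-auth-XXXXXXXXXXXX
# Note the Tailscale IPv4 address of this node; you will need it for ha_cluster.txt.
tailscale ip -4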
Set Up the Cluster
Once the 6 VMs are deployed on the grid, you can launch the HA cluster. To do so, you can deploy the cluster manually, or you can use the K3s cluster script available here: https://github.com/mik-tf/k3scluster.
Simply clone the repository and run the cluster with make cluster.
As explained in the script, you will need to update the file ha_cluster.txt with your VMs' information, for example:
control plane nodes:
node1@100.121.222.20
node2@100.112.102.15
node3@100.97.250.102
worker nodes:
node4@100.116.163.44
node5@100.66.77.13
node6@100.67.182.33
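For reference, if you wanted to bring up the same topology by hand instead of with make cluster, an HA K3s setup with embedded etcd running over the Tailscale addresses would look roughly like the sketch below. The shared token is a placeholder, and the exact flags used by the script may differ.

# On the first control plane node (node1): initialize the cluster with embedded etcd.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
  --cluster-init --node-ip 100.121.222.20 --flannel-iface tailscale0
# On the other control plane nodes (node2, node3): join the existing cluster.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
  --server https://100.121.222.20:6443 --node-ip 100.112.102.15 --flannel-iface tailscale0
# On each worker node (node4 to node6): join as an agent.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - agent \
  --server https://100.121.222.20:6443 --node-ip 100.116.163.44 --flannel-iface tailscale0
# From any control plane node: verify that all 6 nodes have joined.
sudo k3s kubectl get nodes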
Deploy Nginx
Once the Kubernetes cluster is properly deployed, you can test it by running an Nginx app. You can use the script available in the same k3scluster repository linked above.
You can run the script with make app-nginx.
This app serves as a basic test of the Kubernetes cluster.
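If you’d rather run an equivalent test by hand, or simply verify what the script deployed, a minimal check from one of the control plane nodes could look like this; the exact resources created by the script may differ.

# Create a small Nginx deployment with one replica per worker node.
kubectl create deployment nginx --image=nginx --replicas=3
# Expose it on a NodePort so it can be reached through any node.
kubectl expose deployment nginx --port=80 --type=NodePort
# Check that the pods are spread across the worker nodes and note the assigned NodePort.
kubectl get pods -o wide
kubectl get svc nginx
# Fetch the default Nginx page through one of the nodes (replace <node-port>).
curl http://100.116.163.44:<node-port>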
Cleaning Up
Once you’re done, you can clean up the deployments with the following commands:
make clean-app
make clean-cluster
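If you set things up manually, or want to tear everything down without the Makefile targets, the rough equivalents are the standard K3s uninstall scripts.

# Remove the Nginx test resources.
kubectl delete service nginx
kubectl delete deployment nginx
# On each control plane node: uninstall the K3s server.
/usr/local/bin/k3s-uninstall.sh
# On each worker node: uninstall the K3s agent.
/usr/local/bin/k3s-agent-uninstall.sh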
Optimizing the Deployment
To improve resilience, you can distribute the VMs across different regions, so that a single data center or network outage cannot take down a majority of the control plane nodes at once.
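One way to make that geographic distribution visible to the scheduler is to label each node with the region it runs in, using the standard Kubernetes topology label; workloads can then spread replicas across regions with topology spread constraints or pod anti-affinity. A minimal sketch with made-up region names:

# Label each node with the region of the data center it was deployed in (example names).
kubectl label node node1 topology.kubernetes.io/region=region-a
kubectl label node node2 topology.kubernetes.io/region=region-b
kubectl label node node3 topology.kubernetes.io/region=region-c
# Verify the labels; spread constraints in your workloads can now reference them.
kubectl get nodes --show-labels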
For more information on the deployment above and a basic analysis of the potential failures, read this documentation: https://github.com/mik-tf/k3scluster/blob/main/docs/hacluster_docs.md. It also explains in simple terms why this type of HA cluster has no single point of failure.
Conclusion
This guide serves as a basic introduction to deploying an HA cluster on the grid. Make sure you properly understand the risks involved before deploying your cluster.
Feel free to ask questions and provide feedback.
If you’ve ever deployed an HA cluster on the grid, let us know and please share your experience!