Ever since I first heard of Threefold, I've seen the team push toward the scale-out philosophy: multiple microservices running the workload, and when you need to grow, you just deploy one more VM/container to add capacity.
Still, I'm fond of the good old monolithic VM that evolves with your needs by adding capacity directly to it: CPU, RAM or disk. There is a simplicity to it that I've always loved, and I'm glad that current Threefold tech finally allows this paradigm. It's not for everyone, but if you're like me, you're in the right place on this thread.
In order to do that, we need:
- A way to scale CPU and RAM when the need arises, while keeping the VM disks intact
- A way to add disk capacity when your disk space reaches its limit
- Some troubleshooting possibilities, meaning access to the console to see error messages or interact directly with a shell even when networking is down
For the third point, the cloud-console is the way to go. If you deploy your VM with wireguard access, your VM exposes its console on serial 0, which is accessible through your browser when your wireguard connection is up. Just go to your gateway address and use the port 20000 + the last octet of the IP assigned to your VM. For example, if your wireguard network is 10.20.2.0/24 and your VM got the IP 10.20.2.2, go to http://10.20.2.1:20002 and you will be able to interact with your VM console!
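In practice that's just two steps; here's a minimal sketch, assuming you saved the wireguard config generated for your deployment as wg.conf (the file name and the use of wg-quick are my assumptions):

    # Bring up the wireguard tunnel (config file name is an assumption)
    wg-quick up ./wg.conf
    # Then browse to http://10.20.2.1:20002 to reach the console of VM 10.20.2.2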
In my tests, to get that working, I had to modify some parameters in my /etc/default/grub (and of course run update-grub) so that console output goes to the right tty; a sketch follows the list:
- Adding "console=ttyS0" to GRUB_CMDLINE_LINUX and GRUB_CMDLINE_LINUX_DEFAULT
- Setting GRUB_TERMINAL to "serial"
- Setting GRUB_TIMEOUT to 300, since seconds in a serial console seem to pass a lot faster than in real life
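Putting it together, the relevant part of /etc/default/grub looks like this (a sketch of the settings above; merge them with what's already in your file, then run update-grub):

    # /etc/default/grub -- relevant lines only
    GRUB_TIMEOUT=300
    GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"
    GRUB_CMDLINE_LINUX="console=ttyS0"
    GRUB_TERMINAL="serial"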
For the second point, LVM is the way to go. It will allow you to grow your file systems by adding new disks to your volume groups. Since I chose Debian as my Linux distribution, I looked at the cloud images and was disappointed not to see LVM configured on them, so I installed a minimalist Debian VM with 2.5 GB of disk on my Proxmox server, EFI enabled, with the following partitioning scheme (a command-line sketch of the layout follows the list):
- 128 MB for the EFI partition (FAT32)
- 256 MB for the boot partition (logical volume with ext2 FS)
- 2.1 GB for the root partition (logical volume with ext4 FS)
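If you'd rather recreate that layout by hand than click through the installer, it boils down to something like this (a sketch only; /dev/sda and the volume group name vg0 are my assumptions, the Debian installer will pick its own names):

    # Sketch: recreate the partitioning scheme by hand
    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart ESP fat32 1MiB 129MiB   # 128 MB EFI partition
    parted -s /dev/sda set 1 esp on
    parted -s /dev/sda mkpart lvm 129MiB 100%         # rest goes to LVM
    parted -s /dev/sda set 2 lvm on
    mkfs.vfat -F 32 /dev/sda1
    pvcreate /dev/sda2
    vgcreate vg0 /dev/sda2
    lvcreate -L 256M -n boot vg0                      # 256 MB boot LV
    lvcreate -l 100%FREE -n root vg0                  # ~2.1 GB root LV
    mkfs.ext2 /dev/vg0/boot
    mkfs.ext4 /dev/vg0/root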
A full VM with a raw disk creates a big flist, since the whole disk is sent to the hub (then downloaded to your node when deploying); that's why the disk is kept as small as possible. In fact, just after deployment you'll only have 360 MB of space available on the root partition! That's also why you won't see a swap partition at first, even though a scale-in VM needs one.
That's why I created the postDeploy.sh script located in /root/Scripts, which will extend the root volume group with the second disk of your deployment. I recommend adding a second disk of 22.5 GB if you want to jump straight to the Small VM sizing defined by Threefold, but any size will do.
With that space available, a new swap partition will be created, following the rule of thumb described in this post.
This swap space is activated and added to your fstab so it is used automatically on subsequent reboots.
The rest of the available space will be added to your root logical volume and the ext4 file system will be automatically expanded.
Last part of the script: changing the default password for root and for the default user "user". That user is in the sudo group, so you really want to do this. Do not keep the default password "changeme"!
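To make the moving parts clear, here is a minimal sketch of what such a post-deploy script boils down to (the postDeploy.sh shipped in the image is the real thing; the device name /dev/vdb, the volume group name vg0 and the 2G swap size are assumptions):

    #!/bin/bash
    # Sketch of a post-deploy script: grow the root VG with the second disk,
    # create swap, expand root. Names and sizes are assumptions.
    set -e
    DISK=/dev/vdb                          # second disk of the deployment
    VG=vg0                                 # root volume group

    pvcreate "$DISK"                       # prepare the disk for LVM
    vgextend "$VG" "$DISK"                 # add it to the root volume group

    lvcreate -L 2G -n swap "$VG"           # swap LV, sized per rule of thumb
    mkswap "/dev/$VG/swap"
    swapon "/dev/$VG/swap"                 # activate now...
    echo "/dev/$VG/swap none swap sw 0 0" >> /etc/fstab   # ...and on reboots

    lvextend -l +100%FREE "/dev/$VG/root"  # rest of the space goes to root
    resize2fs "/dev/$VG/root"              # grow the ext4 FS online

    passwd root                            # change the default passwords!
    passwd user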
This Debian image also has the following features:
- Specific GRUB options, as discussed in the previous point
- cloud-init enabled
- Cloudflare DNS configured for IPv4 and IPv6 (the DNS lines of /etc/dhcp/dhclient.conf had to be commented out so the DHCP offer doesn't overwrite /etc/resolv.conf); see the sketch after this list
- ufw activated, with only the OpenSSH profile allowed for inbound connections
- zdbfs binary installed, in case you want to add some HDD capacity
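For the DNS part, the resulting configuration looks something like this (a sketch; the exact lines in the image may differ):

    # /etc/resolv.conf -- Cloudflare resolvers, IPv4 and IPv6
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 2606:4700:4700::1111
    nameserver 2606:4700:4700::1001

    # /etc/dhcp/dhclient.conf -- DNS entries commented out of the request
    # statement so the DHCP lease can't override the resolvers above:
    request subnet-mask, broadcast-address, time-offset, routers,
    #       domain-name, domain-name-servers, domain-search,
            host-name,
            interface-mtu, rfc3442-classless-static-routes, ntp-servers;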
It is available here
And for the first point, Terraform with a full-VM raw-disk flist is the way to go.
Your grid deployment will look like this:
resource "grid_deployment" "d1" {
#/dev/vda
disks {
name = "root"
size = 2.5
}
#/dev/vdb
disks {
name = "extendRoot"
size = 22.5
}
name = “yourDeploymentName”
node = yourNodeId
network_name = grid_network.net1.name
vms {
name = “yourVmName”
flist = "https://hub.grid.tf/archit3kt.3bot/debian-12.7_raw.flist"
cpu = 1
#/dev/vda
mounts {
disk_name = "root"
mount_point = "/"
}
#/dev/vdb
mounts {
disk_name = "extendRoot"
mount_point = "/fakePath"
}
memory = 2048
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip6 = true or false
planetary = true or false
}
}
You can see that the first disk, named "root", is mounted as "/": this is how you activate the raw disk feature. The second disk is mounted at "/fakePath", which does not exist, since this raw disk image does not mount disks automatically; mounting is managed in your fstab (and this disk will be added to your root volume group anyway!). You still need to keep the mount_point entry in your Terraform file, otherwise the disk will not be seen by the VM.
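For reference, after postDeploy.sh has run, the VM's /etc/fstab would look something like this (a sketch; the EFI UUID and volume group name will differ on your deployment):

    /dev/mapper/vg0-root  /          ext4  errors=remount-ro  0  1
    /dev/mapper/vg0-boot  /boot      ext2  defaults           0  2
    UUID=XXXX-XXXX        /boot/efi  vfat  umask=0077         0  1
    /dev/mapper/vg0-swap  none       swap  sw                 0  0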
If you want to scale in your VM, all you have to do is remove the vms block while keeping the disks blocks in your Terraform file, then terraform apply:
resource "grid_deployment" "d1" {
#/dev/vda
disks {
name = "root"
size = 2.5
}
#/dev/vdb
disks {
name = "extendRoot"
size = 22.5
}
name = “yourDeploymentName”
node = yourNodeId
network_name = grid_network.net1.name
}
At this stage, your VM will be decommissioned, but not its disks! Next, you put the vms block back with the modifications you want; for example here, adding a new disk and scaling CPU and RAM to the medium VM size:
resource "grid_deployment" "d1" {
#/dev/vda
disks {
name = "root"
size = 2.5
}
#/dev/vdb
disks {
name = "extendRoot"
size = 22.5
}
#/dev/vdc
disks {
name = "newDisk"
size = 50
}
name = “yourDeploymentName”
node = yourNodeId
network_name = grid_network.net1.name
vms {
name = “yourVmName”
flist = "https://hub.grid.tf/archit3kt.3bot/debian-12.7_raw.flist"
cpu = 2
#/dev/vda
mounts {
disk_name = "root"
mount_point = "/"
}
#/dev/vdb
mounts {
disk_name = "extendRoot"
mount_point = "/fakePath"
}
#/dev/vdc
mounts {
disk_name = "newDisk"
mount_point = "/fakePath2"
}
memory = 4096
entrypoint = "/sbin/zinit init"
env_vars = {
SSH_KEY = var.SSH_KEY
}
publicip6 = true or false
planetary = true or false
}
}
Run terraform apply and your VM will come back with the new virtual hardware spec!
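Since the raw disk image does not mount anything automatically, the new 50 GB disk still has to be put to use inside the VM. With the LVM setup above, feeding it to the root volume group boils down to something like this (a sketch; check the actual device and volume group names with lsblk and vgs first):

    pvcreate /dev/vdc                    # the new disk from the deployment
    vgextend vg0 /dev/vdc                # add it to the root volume group
    lvextend -l +100%FREE /dev/vg0/root  # give root all the new space
    resize2fs /dev/vg0/root              # grow the ext4 FS online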
I hope you enjoyed this guide; don't hesitate to share what you thought of it and any ideas to make it better!
I'd like to thank @Scott for his priceless help and availability, you rock, man!
And please, Threefold team, this raw disk feature and scale-in possibility are really great, do not deprecate them!