Full VMs - now live on mainnet playground

Full virtual machines (VMs) have been available on the Grid for a while, but with the 3.7 release, it’s now very simple to deploy them through the playground on mainnet.

So, what’s a full VM, and why is this important?

Full and Micro VMs

Let’s start by noting the other kind of VM available on the Grid, the “micro” kind. These were the default VM type in the playground until the recent update, and they sometimes caused confusion for users who expected them to behave like a VPS from other providers. That’s in essence what full VMs provide: a familiar, feature-compatible offering comparable to VMs one might run locally or in a cloud environment.

A micro VM is basically a container image, as used in Docker or Kubernetes, that’s been promoted to run as a VM. Container images don’t include their own kernel, because they usually share the kernel of the host; Zero OS provides each micro VM with its own basic kernel instead. This means that micro VMs are more isolated than containers, which is more secure, especially with multiple users running workloads on the same 3Node. At the same time, they are still rather lightweight compared to a full VM and don’t have all the same capabilities.

What can you do with a full VM?

Full VMs are capable of anything you can do with a Linux server. They should be compatible with any guides and tutorials written for the same version of the distribution they are running.

What exactly can they do that micro VMs can’t? Here are a couple of examples:

  • Run and manage services using systemd
  • Use software that requires certain kernel modules (using WireGuard inside a VM is one example that didn’t work for me in a micro VM)
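A quick way to see both differences from inside a VM is with a couple of generic Linux commands (nothing ThreeFold-specific here; WireGuard is just one example of a module the minimal micro VM kernel lacks):

```shell
# Check what PID 1 is: a full VM boots a real init (systemd),
# while a micro VM runs your entrypoint directly as PID 1.
ps -p 1 -o comm=

# Check whether the kernel can load extra modules. A full VM ships
# a module tree; the minimal micro VM kernel does not, so this fails
# there (which is why in-VM WireGuard didn't work).
modprobe wireguard 2>/dev/null \
  && echo "wireguard module available" \
  || echo "wireguard module unavailable"
```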

We currently have several versions of Ubuntu available as default full VM options, and you can also enter any full VM flist link as an “other” option. See our official VMs section of the flist hub for a couple more options, and check out the Cockpit VM images from @ParkerS for an example of what’s possible with full VMs that you can deploy for yourself.

If you’re interested in adding new or customized VM images to the Grid, I recommend this post by @linkmark, as well as the Zos documentation on Zmachine.


Grid 3.7 is on fire!

Fantastic article that nicely complements the Technology Update part of the Threefold October 24, 2022 Community Call.

What you said here is great:

This means that once we set up and gather in one place the necessary Threefold documentation on how to deploy a full VM on a 3Node, the remaining documentation is already out there online. That opens up major use cases. Amazing.

We could have some kind of compendium of the many available guides and tutorials for users once they have a full VM at their command. This could be provided at the end of the “How To…VM” Threefold documentation as a kind of segue.


This video covers the process of deploying full VM images, including using the hub.


Excellent! Your video series covers so much ground. Really easy to follow.

Nice video! I just wanted to try deploying my own, because I don’t clearly understand how networking between VMs in different locations works right now…

I want to test something around OPNsense as a VM and so on…

I mean, right now we don’t have something like firewalling built in (or do we?), so the only idea for me is to have a VM with a firewall like OPNsense and some other VMs which can all only communicate through the VM with the firewall?

But I don’t get the networking of TF… how do I get two VMs, one in America and one in Europe, communicating only with each other, like in a local network or over a VPN connection?

Or two VMs in the same rack of one farm… when I rent two VMs, they don’t know they are in the same rack, right? It’s like two separate VMs in the big cloud of the Grid. Or am I misunderstanding something here?

Maybe the wrong thread, sorry :laughing:

Actually, you’re pretty close in your understanding.

Your workloads actually have two ways to communicate with each other as if they were in the same rack:

  • z-net: in a Linux VM you’ll see this as the first interface, and it will have a private-subnet IPv4 address. This is how Kubernetes clusters communicate from different locations without having public IP addresses; they are connected over z-net.

  • You can also use the Planetary Network to connect two workloads using their planetary addresses.

If I’m understanding your goals correctly, you could deploy a running Ubuntu and use a combination of firewalld, iptables, or nginx. If you expand on your goals, I may be able to help you with a solution that’s available now.
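To make the firewalld/iptables idea concrete, here is a sketch of what rules on an Ubuntu VM acting as the firewall might look like, written as an nftables ruleset (the modern backend behind both tools). This is only an illustration under assumptions: the interface names wg0 (the overlay) and eth0 (the public side) are placeholders to adjust to what your VM actually shows:

```
#!/usr/sbin/nft -f
flush ruleset

table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport 22 accept            # SSH for management
    iifname "wg0" accept           # traffic arriving over the overlay
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
    # only forward traffic from the overlay out the public side
    iifname "wg0" oifname "eth0" accept
    ct state established,related accept
  }
}
```

With a drop policy on both chains, the other VMs can only reach the outside world through this firewall VM, which matches the setup described above.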

As for a native router OS, there is a module for OpenWrt that allows it to use cloud-init. I am working on bringing this to the Grid and have an issue open here to verify whether it could be made easily compatible.


I dropped a post that should help.

Thanks a lot, also for your well-described networking post!
Yesterday I also dived deep into the manual xD

My goal is to be able to build up a whole infrastructure just with Terraform and Ansible, and also to set up firewalling, a local network, etc.
Further, I want to set up customer workloads automatically via Terraform, and firewalling should be automated too. If a customer wants firewalling, I want to have firewall containers or similar which are set up nicely via Terraform.

Those are just plans in my brain. It gets clearer over time as I’m learning. It’s a process :wink:

It would be nice if I could ask here when I get stuck. I think we all have to support each other as much as possible to make this the best thing ever. :wink:

About OpenWrt: I’m not a fan of OpenWrt, that’s why I was looking for OPNsense or pfSense possibilities.
I don’t know cloud-init, but it seems like something similar to Ansible? I’m also a techy person, into Linux stuff, networking, and coding, if you sometimes get stuck :wink: (“shutdown -t now” :wink:)

I found out that OPNsense also has something for cloud-init: https://github.com/opnsense/ports/tree/master/net/cloud-init

There are also a lot of community projects for OPNsense via Ansible.
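For context on the cloud-init question above: it’s not quite Ansible. It runs once inside the VM at first boot and consumes a declarative “user-data” file that the platform passes in. A minimal sketch, where the hostname, key, and package choices are just placeholders:

```yaml
#cloud-config
# Applied once on first boot by cloud-init.
hostname: fw-test                  # placeholder name
ssh_authorized_keys:
  - ssh-ed25519 AAAA... your-key   # placeholder public key
packages:
  - wireguard-tools
runcmd:
  - sysctl -w net.ipv4.ip_forward=1
```

Ansible would then be the tool for ongoing configuration after the VM is up; cloud-init only handles that first-boot bootstrap.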

We should be able to port OPNsense onto the Grid with that cloud-init module. I’ll try to deep dive that; I think I’ve got the concepts necessary to get a flist up for both of them without too much complication. My personal PC is tied up migrating to a new SSD because I ran out of space, but I’m gonna get started on those this week!

You can always tag me or hit me up on Telegram. As Mik has already noticed, I’m almost always awake and online lol.


OK nice, maybe I should give OpenWrt a chance… since it’s Linux, it even works as a container: https://openwrt.org/docs/guide-user/virtualization/docker_host

While OPNsense runs on a FreeBSD kernel…

I’m already trying OPNsense at this very moment; maybe this weekend I’ll have something. I want to get my hands on it xD


Let us know how it goes!

To add a bit to the answer from @ParkerS:

Every workload on the Grid belongs to a private overlay network. It can also optionally have public IPv4, IPv6, and Planetary Network addresses attached. Workloads in the same network can communicate over their private overlay network, and you can also tunnel into the overlay, assuming one node in the network acts as a public access point. This is all done using WireGuard, and for deployments besides Kubernetes it’s currently only available via Terraform.
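Concretely, tunneling in just means bringing up a standard WireGuard interface on your own machine with a config along these lines (Terraform can generate one for you when the network has an access point). Every value below, keys, addresses, and subnets alike, is an illustrative placeholder, not output from a real deployment:

```
[Interface]
# your private key and your address inside the overlay
PrivateKey = <your-wireguard-private-key>
Address = 100.64.2.2/32

[Peer]
# the network's public access node
PublicKey = <access-node-public-key>
Endpoint = <node-public-ip>:<port>
# route the overlay subnets through the tunnel
AllowedIPs = 10.1.0.0/16, 100.64.0.0/16
PersistentKeepalive = 25
```

Saved as, say, gridnet.conf, a `wg-quick up ./gridnet.conf` then puts your machine on the overlay alongside your workloads.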

VMs deployed in the playground each go into their own overlay network, which is really more of a formality in that case than anything useful.
