Utilize HDD space with Zdbfs

We periodically get questions about how to use the HDD capacity available on the Grid. As it stands today, the only way to reserve HDD space is via a Zero DB (Zdb). While Zdb doesn’t provide the ability to store files out of the box, we do also have a tool called Zero DB Filesystem (Zdbfs) that puts a filesystem interface in front of Zdb storage capacity.

So as promised and without further ado, here's a guide on how to store some files on HDD on the ThreeFold Grid.

Prerequisites

  1. Account funded with TFT
  2. Basic familiarity with the Dashboard or another way of deploying a Grid VM and connecting via SSH (we will use a micro VM for this guide, and any prepackaged application from the dashboard will work the same)
  3. A Mac or Linux machine (including WSL) for running tfcmd to create Zdb deployments (this can be a Grid VM—see the caveat below)

Regarding item 3, using tfcmd on any system currently requires storing the seed phrase used for deployments on disk. This means that if you use a Grid VM, the seed phrase will be written to the node's disk, and it's possible that the farmer could recover it.

Before you enter your seed phrase anywhere, please assess the risk. If you have a node, using a VM on your own node to deploy the Zdbs would be a safe option, even if you’re not using your node in the final deployment. Otherwise, if you are taking this path, choose a node from a farmer you can trust.

Last thing I’ll say on this: if you’re a Windows user and haven’t already, consider getting set up with WSL. It’s a rather slick way to bootstrap a Linux environment that you can use to do lots of fun stuff with ThreeFold technology and beyond.
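
If you're starting fresh, recent versions of Windows can set up WSL with a single command from an administrator PowerShell (Ubuntu here is just one common distro choice; any mainstream distro will do):

wsl --install -d Ubuntu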

Let’s go

Alright, if you're still with me, great. I'll try not to scare you off any further in the following sections :slight_smile:

The first thing we need to do is plan our deployment. For best results, we need a node that has HDD space available and also that has IPv6 enabled. We don’t need to use IPv6 outside the deployment, but for now it provides the best way to connect to the Zdbs.

You can find appropriate nodes using the node finder in the Dashboard. IPv6 connectivity can be deduced by looking in the Interface Details section:

Here we can see there's both a LAN IPv4 address and an IPv6 address separated by a pipe character. IPv4-only nodes will only have an IPv4 address here.

Deploy the VM

I won't cover the VM deployment in detail. You can choose whatever method works for you to connect to the VM over SSH. WireGuard is a popular and reliable cross-platform option.

Remember that I’m covering the steps specifically for a micro VM using Zinit. That includes the micro VM deployments from the Dashboard and all of our prepackaged application images.
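
As a rough sketch, connecting over WireGuard usually looks something like the following. The config filename and the 10.20.2.2 address are placeholders; substitute the WireGuard config and VM IP from your own deployment:

# bring up the WireGuard config downloaded from the Dashboard
wg-quick up ./wg.conf

# then SSH to the VM over its WireGuard IP
ssh root@10.20.2.2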

Deploy the Zdbs

Now let’s deploy the Zdbs. For this we’ll use tfcmd, which is released with our Go SDK. You could also use Terraform at this step, but I won’t cover that.

Install and login to tfcmd before proceeding. The steps are covered in the manual.
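
As a quick orientation, the flow looks roughly like this, assuming you've already downloaded and extracted the tfcmd release archive for your platform from the Go SDK releases page (see the manual for the exact link):

# put the binary on your PATH (you may need sudo on your local machine)
chmod +x tfcmd
mv tfcmd /usr/local/bin/

# login prompts for your seed phrase and the network to deploy on
tfcmd login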

We will deploy three separate Zdbs, as required by Zdbfs. We need a strong password for these Zdbs. You could use a password manager, for example, to generate a strong random password of at least twelve characters.

Here's another way to generate a random password from a Linux command line (this generates 24 characters):

openssl rand -base64 18

The three Zdbs for Zdbfs are used for data, metadata, and temporary storage. You can name them anything you want, but using a name related to their function will help keep things organized. For the metadata and temp Zdbs, 1 GB of storage is fine (that's the minimum tfcmd can deploy). The size of the data Zdb determines the capacity you'll have available to store files, so choose that according to your needs.

Here are example invocations of tfcmd, deploying to node 1 with a data size of 10 GB. Remember to replace the node ID with the node your VM is running on, and again, use a strong random password:

tfcmd deploy zdb --mode seq --node 1 -n meta --size 1 --password "password"
tfcmd deploy zdb --mode seq --node 1 -n temp --size 1 --password "password"
tfcmd deploy zdb --mode seq --node 1 -n data --size 10 --password "password"
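
If you'd rather not paste the same password three times, one small variation is to generate it into a shell variable first and reuse it. Make sure to note the password down, since you'll need it again when configuring Zdbfs:

# generate one strong password and reuse it for all three Zdbs
PASS=$(openssl rand -base64 18)
echo "$PASS"

tfcmd deploy zdb --mode seq --node 1 -n meta --size 1 --password "$PASS"
tfcmd deploy zdb --mode seq --node 1 -n temp --size 1 --password "$PASS"
tfcmd deploy zdb --mode seq --node 1 -n data --size 10 --password "$PASS"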

Once the deployments are complete, get the info for each one:

tfcmd get zdb meta
tfcmd get zdb temp
tfcmd get zdb data

Here’s an example output, trimmed to just show the relevant section:

	"Zdbs": [
		{
			"name": "meta0",
			"password": "abc123",
			"public": false,
			"size": 1,
			"description": "",
			"mode": "seq",
			"ips": [
				"2a02:1802:5e:0:c11:7dff:fe8e:83bb", <--
				"300:1b0f:ec2d:be82:a2b6:7aca:1083:6c1",
				"5f0:1801:a90b:3841:8eb6:277c:1bb5:c5e0"
			],
			"port": 9900,
			"namespace": "18-622157-meta0" <--
		}
	]

I’ve added arrows pointing to the important parts. The first IP address in the list is the IPv6 address, and that’s the one we want. The IP is generally the same for all Zdbs on the same node (assuming they are on the same disk). But the namespaces will all be different, based on the names you gave before.

We’ll use these in the next section.
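
Optionally, you can first sanity check that the VM can reach the Zdbs over IPv6. Something like this should work from inside the VM, using the example address from above (the micro VM doesn't ship with netcat, so install it if you want the port check):

# check basic IPv6 reachability (substitute your Zdb's IPv6 address)
ping -6 -c 3 2a02:1802:5e:0:c11:7dff:fe8e:83bb

# optionally confirm the Zdb port answers
apt update && apt install -y netcat-openbsd
nc -zv 2a02:1802:5e:0:c11:7dff:fe8e:83bb 9900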

Install Zdbfs

Next we’ll install the Zdbfs binary into the VM that will mount the filesystem. Normally you’d want to grab the latest release from the official repo. In this case, we’ll use a release from my fork that has a couple of small fixes:

wget -O /usr/local/bin/zdbfs https://github.com/scottyeager/0-db-fs/releases/download/v0.12.0/zdbfs-0.1.12-syfork-amd64-linux-static
chmod +x /usr/local/bin/zdbfs
zdbfs -V

If you see Zdbfs print its version info, then the install was successful.

Choose mount point

You can mount your Zdbfs on any valid path that doesn’t interfere with the rest of the system. If you’re not sure, then follow along with the example. First, we need to make sure the path exists:

mkdir /mnt/zdbfs

Create Zinit service

For this step, we’ll need a text editor. Micro VMs don’t come with one installed, so go ahead and install your favorite CLI editor now:

apt update && apt install -y nano

Now create the Zinit service file:

nano /etc/zinit/zdbfs.yaml

Here’s a template to copy and paste:

exec: |
  bash -c '
    /usr/local/bin/zdbfs \
    -o host=<IP-address> \
    -o port=9900 \
    -o mn=<meta-ns> \
    -o dn=<data-ns> \
    -o tn=<temp-ns> \
    -o ms=$PASSWORD \
    -o ds=$PASSWORD \
    -o ts=$PASSWORD \
    -o size=10737418240 \
    /mnt/zdbfs
  '
env:
  PASSWORD: <password>

You’ll need to edit this template and fill in your own values anywhere you see angle brackets. Reference the example below to see how it should look when you fill in the values.

There are also two optional changes you can make:

  1. Set the display size of the filesystem by changing the size option. Ideally this should match the size of your data Zdb. The size is given in bytes, so multiply the number of gigabytes you reserved by 1073741824 (see the quick calculation below). The example above shows 10 GB
  2. Change the mount point by replacing /mnt/zdbfs with your own choice of mount point
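
For example, here's a quick way to calculate the value for a hypothetical 50 GB data Zdb from the command line:

echo $((50 * 1073741824))

# 53687091200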

To find the needed info for the Zdbs you deployed, refer back to the outputs of the tfcmd get commands from earlier.

Here’s an example of how to start filling in the values, based on the example tfcmd output from before:

exec: |
  bash -c '
    /usr/local/bin/zdbfs \
    -o host=2a02:1802:5e:0:c11:7dff:fe8e:83bb \
    -o port=9900 \
    -o mn=18-622157-meta0 \
    -o dn=<data-ns> \
    -o tn=<temp-ns> \
    -o ms=$PASSWORD \
    -o ds=$PASSWORD \
    -o ts=$PASSWORD \
    -o size=10737418240 \
    /mnt/zdbfs
  '
env:
  PASSWORD: abc123

Continue by also filling in the namespace values from the data and temp namespaces. Then you can save the yaml file and exit the text editor.

Start Zdbfs

The last step is to start up the Zdbfs service. You only need to do this once. From here on, Zinit will ensure that Zdbfs is always running, even if the VM reboots for any reason. The monitor command instructs Zinit to look for a yaml file with the provided name and start up the service:

zinit monitor zdbfs

You can quickly check for any issues like this:

zinit

# zdbfs: Success

If there was any error starting Zdbfs, that would be shown instead. You can check the logs for any clues like this:

zinit log zdbfs

A final check that everything is working properly is to run df:

df -h

# Filesystem      Size  Used Avail Use% Mounted on
# zdbfs            10G  1.0K   10G   1% /mnt/zdbfs

Now you can start reading and writing files to the mount point, and they will be stored on HDD in your Zdb.
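
As a final smoke test, you can write a file and read it back (the paths assume the example mount point):

echo "hello from zdbfs" > /mnt/zdbfs/hello.txt
cat /mnt/zdbfs/hello.txt

# hello from zdbfs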

Final thoughts

While this isn’t the most straightforward process, I hope that this guide was able to get you started with using ThreeFold HDD capacity in a relatively simple way. If you had any trouble, please leave a reply below and I’ll do my best to help.

Cheers :v:t2:


Part II - Nextcloud

So, you got your Zdbfs mounted, and now you want to make that juicy HDD storage part of your NextCloud deployment? Look no further—I’ve got you covered here in part two.

Prerequisites

I’m going to assume that you’ve already followed the guide above and that your Zdbfs is working. If not, get that sorted first.

Secondly, this assumes that you’re using a NextCloud instance deployed from the ThreeFold Dashboard, or another instance that uses the NextCloud AIO deployment method.

Let the fun begin

Some of these steps are basically generic to AIO and are lifted from the AIO docs. We also need to make a couple tweaks to how Zdbfs is run so everything plays nice together.

Add mount to AIO start command

The first thing we’ll do is instruct AIO to add our Zdbfs mount point to the NextCloud container. This is a prerequisite for NextCloud to be able to read and write to that location.

For this, we’ll edit the script that starts AIO:

nano /scripts/nextcloud.sh

Within, we’ll add a line that sets NEXTCLOUD_MOUNT to our Zdbfs mount point. Here’s an example of the final result, using the example mount point of /mnt/zdbfs:

#!/bin/bash

export COMPOSE_HTTP_TIMEOUT=800
while ! docker info > /dev/null 2>&1; do
    echo docker not ready
    sleep 2
done

docker run \
--init \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 8000:8000 \
--publish 8080:8080 \
--env APACHE_PORT=11000 \
--env APACHE_IP_BINDING=0.0.0.0 \
--env SKIP_DOMAIN_VALIDATION=true \
--env NEXTCLOUD_MOUNT="/mnt/zdbfs/" \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest

Now we need to stop and remove the AIO master container. It will get recreated automatically by Zinit:

docker stop nextcloud-aio-mastercontainer
docker rm nextcloud-aio-mastercontainer

Edit Zdbfs service file

We also need to make a couple of changes to the Zinit service file for Zdbfs. In order for the NextCloud app to access the filesystem as a non-root user, we need to set the allow_other option. Additionally, one quirk of pairing these systems together is that if Zdbfs happens to restart for some reason, it will become unavailable to NextCloud until the NextCloud container is restarted.

To address the second point, we’ll also add a line to restart the container after a one second delay anytime Zdbfs is started up. Be aware that this means there will be a short downtime for your instance if Zdbfs restarts. Since there’s no reason for Zdbfs to restart under normal operation, this should happen very rarely, if ever.

To edit the Zdbfs service file again:

nano /etc/zinit/zdbfs.yaml

Here’s the updated example template:

exec: |
  bash -c '
    sleep 1 && docker restart nextcloud-aio-nextcloud &
    /usr/local/bin/zdbfs \
    -o host=<IP-address> \
    -o port=9900 \
    -o mn=<meta-ns> \
    -o dn=<data-ns> \
    -o tn=<temp-ns> \
    -o ms=$PASSWORD \
    -o ds=$PASSWORD \
    -o ts=$PASSWORD \
    -o size=10737418240 \
    -o allow_other \
    /mnt/zdbfs
  '
env:
  PASSWORD: <password>

Save the file, and then to make it take effect:

zinit stop zdbfs
zinit forget zdbfs
zinit monitor zdbfs

Make sure it came back up successfully:

df -h

# Filesystem      Size  Used Avail Use% Mounted on
# zdbfs            10G  1.0K   10G   1% /mnt/zdbfs

Restart app containers

Next, navigate to the AIO admin interface, stop all containers, and start them again. This will give AIO a chance to apply the configuration update we made earlier.


After that, the Zdbfs mount point should also be available inside the NextCloud container.
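
If you want to double check, listing the mount point from inside the NextCloud container should show the same contents as on the host. This assumes the example mount point and the standard AIO container name used earlier:

docker exec nextcloud-aio-nextcloud ls /mnt/zdbfs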

Enable external storage

At this point, we’ll switch to the NextCloud admin interface for the final steps.

First, head to the disabled app settings:

Then hit the enable button on the right side for External storage support.

Add storage path

In the home stretch now, head over to the external storage section of admin settings:

You can choose any name you like for the folder. The important thing here is that you choose Local as the storage type and set the path to the correct mount point. There’s also an option to choose which users this storage is available to, which is up to you.

If all went well, you’ll see the green check mark on the left side. Otherwise, there might be a red icon indicating an error. In case of error, I advise walking through the steps again to make sure you completed them all.

Conclusion

Now you'll see the "external" storage folder in the files section of your NextCloud instance. Any files you upload outside of the new folder will still go to the original SSD capacity reserved with the deployment. This opens up the possibility of using each type of storage for its particular characteristics, though going any deeper into that is outside the scope of this guide :slight_smile:
