How to clear disks for DIY 3Nodes?

A note if you’re having trouble getting your disks recognized by Zos: some farmers have had success enabling AHCI mode for SATA in their BIOS.

Clearing disks is necessary in order for Zero OS to make use of them. I’ll explain a method that uses a live Linux distro for the job, and also link to a guide for accomplishing this within Windows. The Linux shell commands may work on macOS too—if you try it, please let me know.

From Linux

You can use a minimal live Linux distribution like grml or Ubuntu Server to boot the system to a command prompt and enter the commands below. I like grml because its small version is only 400MB, whereas Ubuntu Server is 1.4GB. However, some farmers report that grml won’t boot on their system.

After you download the live Linux iso file, burn it to a USB stick using a tool like dd or balenaEtcher. Then plug the USB stick into the 3Node and select it as the boot device.
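As a sketch, burning the ISO with dd looks like the following. The ISO filename and /dev/sdX are placeholders; double check the target device (for example with lsblk) before running this, since dd overwrites it without asking:

```shell
# Write the live ISO image directly to the USB stick.
# WARNING: this destroys everything on the target device.
dd if=grml64-small.iso of=/dev/sdX bs=4M status=progress conv=fsync
```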

Please use extreme caution with the commands below to avoid unintended data loss. Following this guide will clear everything from the disks in the system.

At the terminal

Most of the commands below need to be run as root. Grml gives you a root command prompt by default, but other distros may not. If you see a $ sign rather than a # on the terminal, you’re not root. You can run this command first to switch to the root account:

sudo su root
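If you want to double check which account you’re using, these standard commands report it:

```shell
# whoami prints the current user name; id -u prints the numeric UID.
# A UID of 0 means you are root.
whoami
id -u
```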

To see what disks are connected, run (that’s an “l” for “lion”):

fdisk -l

Take note of whether you see nvme in any of the outputs. You can identify which disk is the USB hosting the live Linux with:

df -h

Look for the entry matching the size of your USB stick. You’ll see an error that this device is busy, which is fine since we don’t need to wipe the USB stick.
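If you’d like a more compact overview than fdisk and df give, lsblk (shipped with grml and Ubuntu as part of util-linux) lists every block device with its size and mount points, which makes spotting the USB stick straightforward:

```shell
# TYPE distinguishes whole disks from partitions;
# MOUNTPOINT shows which partitions are currently in use.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```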

On some distributions besides grml, you might also notice other disks listed in this output, which means they were auto mounted. If that’s the case, change the commands below to wipefs -af to force wiping even mounted disks, including the USB stick.

To clear all disks, run this command:

for i in /dev/sd*; do wipefs -a $i; done

If you have any fdisk entries that look like /dev/nvme, you’ll need to do this too:

for i in /dev/nvme*; do wipefs -a $i; done

For each disk where there was something to be wiped, you’ll see a few messages that some bytes were erased at some location. To check for success, you can run fdisk -l again. Only the USB stick should have a Disklabel entry; every other disk should not.
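As a further sanity check, note that wipefs run with no options only lists the signatures it finds and erases nothing, so it can safely double as a verification step (/dev/sda below is an example device name):

```shell
# With no flags, wipefs only prints the signatures it finds; nothing is erased.
# Empty output means the disk is clean.
wipefs /dev/sda
```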

Then you’re finished. Try booting up Zos now and check that all disks are properly recognized.


For Windows, try this guide:


Hehe, I used the clean all command with diskpart to permanently erase all data with no chance of retrieving it.

Thanks for the tips!

If I have used the disk before (i.e. with personal data as NAS storage), I prefer to use ATA Secure Erase by booting Clonezilla or xyznetboot from a bootable USB.
It takes longer than wipefs but less than zeroing with dd or similar, and it works with both HDDs and SSDs. After that, Zero OS finds the disks and can use them just fine.
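For reference, the same ATA Secure Erase can also be issued from a Linux shell with hdparm. This is only a sketch: /dev/sdX is a placeholder, “p” is a throwaway password, and the drive must not be in the “frozen” state (suspending and resuming the machine sometimes unfreezes it):

```shell
# Check the security status first; look for "not frozen" before proceeding.
hdparm -I /dev/sdX | grep -i frozen
# Set a temporary user password (here "p"), then issue the erase with it.
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
```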

Disks become “not initialised”, almost like factory ones 🙃



Thanks for the report! This seems like an excellent route for those who want to ensure all data is removed in a more efficient manner than dd.


I am having a hard time erasing the hard drives using the grml distro.
fdisk does not work
df does not work

and wipefs -a and other variations do not work

Can someone provide a basic command prompt?


This is just an example:

wipefs -a /dev/sda

Be careful with that command!

Example commands are in the original post above.

Great post.

I’ve heard from some farmers that it won’t work properly if you do not write sudo in front of the line of code.

If this is so, could we add sudo to the original post, so people will include it when they read the instructions?


^This might have been what I was missing.
I gave up though. Since the 1.5TB or so HDDs the 720 came with don’t add much to the node compared to the 4TB SSDs I added, I left the HDDs as is.

Oh that’s sad. But at least now you have one more clue for troubleshooting this kind of problem. So perhaps later on you will try once more, in one of your maintenance windows. I try to spread my time across the 3Nodes; otherwise it takes too much time in one session (for upgrades, etc.).

Also, if the HDDs are not even connected to the Grid, you can simply remove them and their cables; the server would draw less power. So more rewards, in a way!


Yes please. In order to make changes to a disk’s partition table (layout), you need to be an administrator in the OS you are using. If you are not logged in as root (or something similar in Windows/macOS), the sudo command allows you to execute a single command as root/administrator. So indeed, please add it to the original post.


Original post was written referencing GRML specifically, where you land on a root terminal by default. On the flip side, attempting to use sudo in this case might produce an error :wink:

I’ll edit the post to mention that root is needed.


Thanks for the additional information. It’s great.

If you see a $ sign rather than a # on the terminal, you’re not root.

I’ve learned something today! Nice to know.

So with wipefs -a /dev/sd* I was getting the message: probing initialization failed. Device or resource busy.

Command wipefs -af /dev/sd* fixed it.
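For anyone who hits the same error on a disk other than the boot USB, an alternative to forcing with -f is to unmount the auto-mounted partition first. The device names below are hypothetical examples; substitute your own:

```shell
# See which partitions are currently mounted.
lsblk -o NAME,MOUNTPOINT
# Unmount the auto-mounted partition, then wipe without needing -f.
umount /dev/sdb1
wipefs -a /dev/sdb
```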


Thanks for the heads up. Very nice to know it worked out.

I will add this for sure on the FAQ.

For the record NORMALLY when you get that error you are trying to wipe your boot USB by accident.


But why would -af work and not -a?

The f flag normally stands for force.


Hi there! I came here to share my experience.

Yesterday I had my first 3node running :raised_hands:

  1. I followed the documentation and came to the moment I needed to clear disks.
    I tried the grml option described above without success; it didn’t boot at all after some attempts…

  2. Then I booted a light Ubuntu version (Lubuntu) from a USB stick without any problem and was able to proceed with the wipefs commands (sometimes needing to force with -f) on all the /dev/sd* and /dev/nvme* partitions.
    Well, I assume it went fine, because I didn’t know what to expect in terms of results.
    Hence the question I have here:
    How can we guarantee that the wipe process was successful? Is there a way to check that?


So you are not booting at all? That sounds more like a boot media issue as opposed to an SSD clearing issue. You can assume an SSD issue if you get an “SSD not found” error.