In the following DIY guide, you will learn how to turn a Dell server (R620 or R720) into a 3node farming TFT on the Threefold Grid 3.0.
Dell R620 1U server
Some will recommend wearing anti-static gloves, as shown here. If you don’t have anti-static gloves, simply remember to always touch the metal side of the server case before handling the hardware. Your hands will discharge their static on the outside of the case, which is safe.
Here is one of the two 2 TB NVMe M.2 SSDs that we will install in the server. Above the SSD is the PCIe Gen 3 x4 adaptor that we will use to connect the SSD to the server.
You can see on the left of the adaptor a metal bracket that can be used to hold the PCIe adaptor and the SSD more firmly. We will remove it for this DIY build. Why? It is not necessary, as the adaptor alone can hold the weight of the SSD. Also, this metal piece is solid, while the brackets in the server have holes in them. Removing it ensures better airflow and thus less heat.
We remove the screws with a star screwdriver.
This SSD already has a heatsink, so there is no need to use the heatsink included in the PCIe adaptor kit. If you remove the heatsink or the sticker under the SSD, you will void your 5-year warranty.
When you put the SSD in the adaptor, make sure the notch in the SSD’s connector is aligned with the key in the adaptor’s slot.
Fitting in the SSD takes some force. Do not overdo it and take your time!
It’s normal for the unscrewed end of the SSD to lift up in the air before you screw the SSD onto the adaptor.
To screw the SSD in place, use the screwdriver included in the PCIe adaptor kit.
Now that’s a steady SSD!
It’s now time to get under the hood! Make sure the case latch is in the unlocked position. If you need to turn it to the unlocked position, use a flathead screwdriver or the like.
Lift up the latch and the top plate should slide toward the back. You can then remove the top of the server.
Here’s the full story! R620 and all!
To remove this plastic piece, simply lift with your fingers at the designated spot (follow the blue line!).
Here’s the RAM! This R620 came already equipped with 256 GB of RAM spread across 16x16 GB sticks.
To remove a stick, push on the clips on both sides. You can do one at a time if you want. Make sure the stick doesn’t pop out and fall onto another component! Once the clips are open, pull out the RAM stick by holding it by its sides. This will ensure it does not get damaged.
Here’s the RAM in its purest form!
Here you can see that the notch is not in the middle of the RAM stick. You must be careful when inserting the RAM: make sure the notch is aligned with the key in the slot.
When you want to put a RAM stick in its slot, make sure the plastic clips on the sides are open, then insert the stick. Make sure it is properly aligned. You can then push on one side at a time until the stick clicks in, or push both sides at once if you are comfortable doing so.
To put back the plastic protector, simply align the plastic piece with the two notches in the metal case.
We will now remove this PCIe riser in order to connect the SSDs.
Optional step: put the SSDs and the PCIe riser next to each other so they can talk and break the ice. They will get to learn one another before going into the server to farm TFT.
Just like with RAM sticks, you want to make sure you are aligned with the slot.
Next, push the adaptor into the riser’s slot. This takes some force too, but if everything is properly aligned, it should go in smoothly.
This is what the riser looks like with the two SSDs installed. Now you simply need to put the riser back inside the server.
Push down on the riser to insert it properly.
Note that the inside of the server’s top plate has helpful pictures showing how to handle the hardware.
Now you will want to plug the power cables into the PSUs. Here we show two 495 W PSUs. With 256 GB of RAM and two NVMe SSDs, it is better to use two 750 W PSUs. Note that this server only draws around 100 W at idle. There are two power cables for redundancy; the unit needs only one to function.
Plugging in the power cable is pretty straightforward. Just make sure the 3 pins are oriented properly!
Then you plug the power cable into a surge protector. If the electricity at your location is unsteady, it is also a good idea to use an uninterruptible power supply (UPS). A surge protector is essential to keep power surges from damaging the server.
Before starting the server, you can plug in the monitor and keyboard, as well as the ethernet cable. Make sure you plug the ethernet cable into one of the four NIC ports.
Now, power it on!
The server is booting.
If you want to change the DVD optical drive, push where indicated and remove the power and SATA cables.
The hardware part is done. Now you will want to set the BIOS properly as well as get the bootstrap image of Zero-OS.
Zero-OS Bootstrap Image
With R620 and R720 Dell servers, UEFI does not work well. You will want to use either a DVD or a USB in BIOS mode.
Go to https://bootstrap.grid.tf/ and download the appropriate image.
Enter your farm ID and make sure you select production mode. This is Grid 3.0; things are getting serious.
Use the ISO image for DVD boot and the USB image for USB BIOS boot (not UEFI). We use the farm ID 1 here as an example. Put your own farm ID.
For the ISO image, download the file and burn it on a DVD.
For the USB image on Linux, run:
sudo dd status=progress if=FILELOCATION of=/dev/sdX
where FILELOCATION is the path of the downloaded .ISO or .IMG file, and sdX must be adjusted to match your USB key. To list your disks, run lsblk in a terminal. Be careful to select the proper disk: dd will overwrite whatever is on the target!
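As a safe illustration of this write-and-verify flow, the sketch below uses a scratch file in place of the real USB device, so it can be run anywhere without touching your disks; for a real key, swap in the device path found via lsblk.

```shell
# A scratch file stands in for the USB device so this is safe to run.
# For a real key, set DEV to the path found via lsblk (e.g. /dev/sdb)
# and run the dd commands with sudo.
IMG=/tmp/zos-demo.img
DEV=/tmp/fake-usb.img

# Stand-in for the downloaded Zero-OS image: 4 MiB of random data.
dd if=/dev/urandom of="$IMG" bs=1M count=4 status=none

# Write the image to the target, flushing data to disk before exiting.
dd if="$IMG" of="$DEV" bs=4M conv=fsync status=progress

# Verify the copy byte for byte. On a real device, which is larger than
# the image, limit the comparison to the image size instead:
#   cmp -n "$(stat -c %s "$IMG")" "$IMG" "$DEV"
cmp "$IMG" "$DEV" && echo "image written and verified"
```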
For Windows, you can use Rufus or balenaEtcher, two free and open-source programs that let you write a bootstrap image to a USB key.
Before starting the server, plug in the USB bootstrap image. You can also insert the DVD once the server is on.
When you start the server, press F2 to get into System Setup.
Then, select System BIOS. In System BIOS settings, select Processor Settings.
There, make sure Logical Processor is enabled (the same option is called Hyper-Threading on HP servers). This turns the 8 physical cores of each CPU into 16 logical cores!
It is also good to take a look at the Processor Settings and make sure you have the right CPUs. Here we have two E5-2640 v2 at 2.00 GHz, which gives 16 cores, or 32 threads, with fairly low power consumption.
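As a quick sanity check, any Linux environment running on the machine (Zero-OS itself exposes no local shell, so use e.g. a live USB) can report the logical core count; with the two 8-core CPUs above and Logical Processor enabled, expect 32.

```shell
# Number of logical CPUs the OS sees: physical cores x 2 when
# Logical Processor (Hyper-Threading) is enabled.
nproc

# Breakdown: sockets x cores per socket x threads per core.
lscpu | grep -E '^(Socket\(s\)|Core\(s\)|Thread\(s\)|CPU\(s\))'
```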
Go to System BIOS Settings and select Boot Settings. In Boot Settings, choose BIOS, not UEFI, as the Boot Mode. You need to save your preferences and come back to select BIOS Boot Settings.
Once back in BIOS Boot Settings, go to Boot Sequence. Depending on your Zero-OS bootstrap image, select either the USB key or the Optical Drive CD-DVD option. The name of the USB key can be Drive C or something else, depending on the port you used and your server model.
You can also disable the boot options you do not need. It can be good to have both a DVD and a USB key with the bootstrap image for redundancy: if one boot fails, the server will try the other options in the boot sequence. This can be done with 2 USB keys too.
Boot Sequence Retry, when enabled, will simply retry the boot sequence if the previous attempt did not work.
You can then save your preferences and exit. Your server should restart and load the bootstrap image.
When you see this, be happy and wait for Zero-OS to boot.
The first time you boot a 3node, the screen will read: “This node is not registered (farmer ***: NameOfFarm)”. This is normal. The Grid will create a node ID and display it on screen. This can take a couple of minutes.
Once you have your node ID, you can also go on the Threefold explorer to see your 3node and verify that all is good.
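From the command line, one possible way to check on the node is the ThreeFold GridProxy API; the host and parameters below are assumptions, so verify them against the current Threefold documentation before relying on them.

```shell
# Build the query URL for the nodes of a given farm. The gridproxy host
# and the farm_ids parameter are assumptions; check the Threefold docs.
FARM_ID=1   # replace with your own farm ID
URL="https://gridproxy.grid.tf/nodes?farm_ids=${FARM_ID}"
echo "$URL"

# Uncomment to run the actual query (requires curl and jq):
# curl -s "$URL" | jq '.[] | {nodeId, status}'
```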
Note that the main difference between the R620 and the R720 is that the former is a 1U and the latter a 2U. 2U servers are usually quieter and run cooler than 1U servers since they have a greater internal volume: the R720’s fans are bigger and thus less noisy. This can be an important factor to consider. Both offer great performance and work well with Zero-OS.
If you have questions or comments, please share your thoughts in the comment section.
I hope you learned something here and that it helped you build the New Internet.