Dedicated nodes on TFGrid testnet with GPU support (H2)


Don’t throw away your GPUs once Ethereum goes v2.0.

We will support GPUs in dedicated nodes before the end of Q2 2022.

GPUs are amazing for lots of workloads, like:

  • deep learning
  • artificial intelligence
  • compression/encryption

At the start, GPUs will only be supported in dedicated nodes.

We expect this to drive a lot of compute traffic to TFGrid.

Stay tuned.

Kristof

10 Likes

Cool, thanks for sharing!

That sounds really interesting! Looking forward to hearing more about it.

We’re already talking to a number of Ethereum miners that have large quantities of GPUs mining Ether; they are looking for a second life for their equipment. This builds towards a more sustainable world by giving used hardware a new purpose and helping the machine learning and AI communities get decentralized GPU cloud capacity.

2 Likes

Awesome! Came here to check on GPU readiness.

Hi Kristof,

Any update on which GPU is recommended in the future?

For now, I think CUDA-based, hence NVIDIA. We still need to look into how to use CUDA/Ocelot for AMD cards, and get a working VM that can access the GPU properly. It seems straightforward, so unless we encounter some dragons, it’s on the next to-do list, after the integration of cloud-init.
Cloud-init is necessary to run kernels that are not baked in-house in the VM, so that drivers get installed properly. That also gives us a stepping stone to do OpenCL and AMD too.
So yeah, somewhere in Q2, and starting with NVIDIA cards.
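
To make the cloud-init step concrete, here is a minimal sketch of what the user-data for such a VM could look like. This is an assumption about the eventual setup, not the grid’s actual mechanism; the Ubuntu base image, package names, and driver version are all illustrative:

```yaml
#cloud-config
# Hypothetical user-data for a GPU-enabled VM on the grid.
# Assumes an Ubuntu guest image; package names and versions are illustrative.
package_update: true
packages:
  - linux-headers-generic   # headers needed to build the NVIDIA kernel module
  - nvidia-driver-510       # distro-packaged proprietary driver (version assumed)
runcmd:
  - [modprobe, nvidia]      # load the freshly built kernel module
  - [nvidia-smi]            # sanity check: the GPU should now be visible
```

The point is simply that once cloud-init can run against a stock distribution kernel, installing the GPU driver becomes an ordinary package operation inside the VM.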

3 Likes

Please keep in mind that there will need to be a way to limit the GPU’s power usage as well as its core and memory clock speeds.

  • Power draw: a GPU like the RTX 3070 can take 130 W with good settings, and 300 W if it’s not limited. People run several GPUs on one power supply; if there are no limits, nobody will connect their GPUs, because no one will risk blowing their power supply.
  • Heat emission: some algorithms make the GPU run much hotter than normal. By limiting power, core and memory clocks, the heat emission becomes predictable. You don’t want a situation where four graphics cards run at 85 °C, each drawing 300 W and burning out the power supply.
    An example configuration from HiveOS, a Linux-based mining OS, limits exactly these values.

Wow, I didn’t realize this about power limiting; it does seem to be needed indeed.
@delandtj, do you have this one?

The main concerns for GPU miners are power limits and heat emission.
If the algorithm running on the GPU is memory-intensive, heat and wattage stay low. If it is core-intensive, heat emission and power draw are much higher: 200-300% more wattage. The problems start in multi-GPU configurations. People power-limit their GPUs, use only memory-intensive algorithms, and keep the draw on the PSU at no more than 80% of its rated power, so with a 1000 W PSU you are fine drawing 800 W, which is about 6 × 130 W, like six RTX 3070s.
Imagine what could happen if the power limit were left unrestricted: disaster. Some PSUs would shut themselves off, but some could blow up. Giving users full control of the GPUs in a multi-GPU system seems very risky; if there is no mechanism to keep them at safe settings regardless of the user’s configuration, there will be problems.
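
To illustrate how a node could enforce such a cap in software, here is a minimal sketch using the official NVML bindings for Python (pynvml). The 130 W target echoes the RTX 3070 figure above; whether the node software itself would do this, and at what layer, is an open question, not something the team has announced:

```python
# Sketch: clamp a GPU's power limit before handing it to a workload.
# Requires the NVIDIA driver and `pip install nvidia-ml-py`; must run as root.
import pynvml

TARGET_MILLIWATTS = 130_000  # ~130 W, the RTX 3070 figure from this thread

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    # NVML exposes the valid range, so we never request an unsafe value.
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    limit = max(min_mw, min(TARGET_MILLIWATTS, max_mw))
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, limit)
    print(f"power limit set to {limit / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()
```

Clamping against the constraints NVML reports is the important part: it guarantees the requested limit stays inside the range the card’s firmware allows, whatever the user asks for.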

2 Likes

How are GPUs structured in terms of farming, usage, and regional electricity costs?

Regional electricity costs are something that needs to be investigated locally. Farming rules have not yet been established; they will be discussed here on the forum to make sure they are well understood and considered before being put in place.

1 Like

Thank you for the prompt response and clear information. It was the first thing I thought about, since I also GPU mine at 1.2 gigahash. I can’t wait to speak with the team in my validator election interview, lol. You will like what you hear.

1 Like

Looking forward to bringing some of that hashing capacity to the TF Grid and getting some TensorFlow workloads going :slight_smile:
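
For anyone who wants to verify that a workload actually lands on the GPU once this ships, a minimal TensorFlow check would look like the following; nothing here is grid-specific, it is just stock TensorFlow 2.x:

```python
# Minimal check that TensorFlow sees a GPU and runs an op on it.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"visible GPUs: {gpus}")

if gpus:
    with tf.device("/GPU:0"):
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        c = tf.matmul(a, b)  # executes on the GPU
    print("matmul ran on:", c.device)
```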

1 Like

It’s been a while since I messed with TensorFlow.

For DIY nodes, how will dedicated nodes work? Do you create a node, the GPU capacity shows up, and then someone has to reserve the whole node to use the GPU?

Also, if we add GPUs to existing nodes, will the farming capacity update automatically? Or do we have to go through some process to delete and re-create the node registration?

Adding hardware is normally picked up automatically. We don’t know yet for GPUs, but there’s no reason to think it would be different.

I think you are right. Since we will need to add a kernel module / compile GPU-specific code into the kernel, it might require a reboot. But nothing more substantial than that :slight_smile:

1 Like

What about overclocking or undervolting: will this be controlled by the node or by the purchaser? And will the cards need to be set to compute mode or gaming mode?

  • Will funds be calculated from hashing power? If so, I believe the node owner should be able to increase or decrease the cost based on the use case: in compute mode you can essentially undervolt the GPU, reducing the electricity cost while increasing compute power, whereas gaming would require a higher overclock, especially for remote sessions, to meet the purchaser’s demands.
  • Are there going to be cost standards based on quality, brand, or VRAM size? Will an EVGA 3090 Ti generate higher monthly revenue than, say, an EVGA 3090 FTW3?
  • Will LHR play a factor in these prices, or in desirability to the purchaser?
  • Will it be a tiered system, with minimum or maximum VRAM or core specifications?
  • How will multi-GPU systems be calculated? Will SLI be required, or an optional feature that allows for a higher price target?

I have so many more questions…