Wow, I didn’t realize this about the power limitation; it does seem to be needed indeed.
@delandtj you have this one?
Dedicated nodes on TFGrid testnet with GPU support
The main concerns for GPU miners are power limits and heat emission.
If the algorithm running on the GPU is memory intensive, heat and wattage stay low. If the algorithm is core intensive, heat emission and power draw are very high, 200-300% more wattage. The problems start with multi-GPU configurations: people power-limit their GPUs, use only memory-intensive algorithms, and keep the total draw at no more than 80% of the PSU’s rated power. So with a 1000W PSU you are fine drawing 800W, which is 6 x 130W, e.g. six power-limited RTX 3070s.
Imagine what could happen if the power limit were not restricted.
Disaster: some PSUs would shut themselves off, but some could blow up. Giving full control of GPUs in a multi-GPU system seems very risky. If there is no mechanism to keep them within safe settings regardless of the user’s configuration, there will be problems.
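To make that headroom rule concrete, here is a minimal sketch of the arithmetic; the PSU rating and per-card draw are the illustrative numbers from the example above, not measured values:

```python
# Toy calculation of the 80% PSU headroom rule described above.
PSU_WATTS = 1000
HEADROOM = 0.80          # never draw more than 80% of the PSU rating
GPU_DRAW_WATTS = 130     # e.g. a power-limited RTX 3070

budget = PSU_WATTS * HEADROOM             # 800 W usable
max_gpus = int(budget // GPU_DRAW_WATTS)  # 6 cards
print(f"{budget:.0f} W budget -> up to {max_gpus} GPUs at {GPU_DRAW_WATTS} W each")
```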
How are the GPUs structured in terms of farming, usage, and regional electrical cost?
Regional electrical costs are something that needs to be investigated locally. Farming rules have not yet been established and will be discussed here on the forum to make sure they are well understood and considered before being put in place.
Thank you for the prompt response and clear information. It was the first thing I thought about, as I also GPU mine at 1.2 gigahash. I can’t wait to speak with the team in my validator election interview, lol. You will like what you hear.
Looking forward to bringing some of that hashing capacity to the TF Grid and getting some TensorFlow workloads going.
It’s been a while since I messed with TensorFlow.
For DIY nodes, how will a dedicated node work? Do you create a node and the GPU capacity shows up, and then someone has to reserve the whole node to use the GPU?
Also, if we add GPUs to existing nodes, will the farming capacity automatically update? Or do we have to go through some process to delete and recreate the node registration?
Adding hardware is normally picked up automatically. We don’t know yet for GPUs, but there’s no reason to think it would be different.
I think you are right. As we will need to add a kernel module / compile GPU-specific code into the kernel, it might require a reboot. But nothing more substantial than that.
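For illustration only, here is a minimal sketch of what boot-time GPU discovery could look like on a Linux node; the function name and the lspci-based approach are my assumptions, since the node’s actual detection flow isn’t documented yet:

```python
import subprocess

def detect_nvidia_gpus() -> list[str]:
    """Hypothetical sketch of boot-time GPU discovery: scan the PCI bus
    and keep NVIDIA display/compute devices. How a real node agent would
    register these with the grid is not yet specified."""
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout
    return [
        line for line in out.splitlines()
        if "NVIDIA" in line and ("VGA" in line or "3D controller" in line)
    ]

if __name__ == "__main__":
    for gpu in detect_nvidia_gpus():
        print(gpu)
```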
A few questions:

- What about OC or undervolting? Will this be controlled by the node or the purchaser? Will the cards need to be set in compute mode or gaming mode?
- Will payouts be calculated off of hashing power? If so, I believe the node owner should be able to increase or decrease the cost based on use case. In compute mode you can essentially undervolt the GPU, reducing the electrical cost while increasing compute throughput; gaming would require a higher OC, especially for remote sessions, to meet the purchaser’s demands.
- Are there going to be cost standards based on quality, brand, or VRAM size? Will an EVGA 3090 Ti generate higher monthly revenue than, say, the EVGA 3090 FTW3? Will LHR play a factor in these prices or in desirability to the purchaser?
- Will it be a tiered system, with a minimum or maximum VRAM or core specification?
- How will multi-GPU systems be calculated? Will SLI be required, or an optional feature that allows a higher price target?

I have so many more questions…
Reach out to me, weynand. I will be using this processing power and can help make sure you address the questions many will ask before they get asked. I have been mining for 3 years and, with all equipment running, produce about 2 gigahash on the ETH network at about 3.8 kilowatts.
@KryoVs_Networks That’s a lot of questions, to which I have few answers at this point in time. I’ll reach out to someone from the R&D team to have a look at your questions and see what is in the specs / requirements documents. Stay tuned, we’re on it, but this is not all thought through and documented yet.
Very cool. I am looking forward to trying out this feature when it becomes available.
The difficulty I see with GPUs is that the overclocking needed to obtain the optimal balance between power consumption and compute is different for every use (algorithm).
I also have a couple of mining rigs, with 900 MH/s in total mining ETH. I would not feel comfortable stacking a few 3080 Tis in my HP Z840 workstations without being able to control the heat, power, etc. Maybe if I could deploy the entire mining rig as a node, that would be different.
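As a rough illustration of per-workload control, here is a minimal sketch assuming a Linux host with the NVIDIA driver and nvidia-smi available; the workload names and wattage values are made up for the example:

```python
import subprocess

# Hypothetical per-workload power caps in watts; real values depend on
# the card and the algorithm, as discussed above.
POWER_LIMITS = {"memory_intensive": 130, "core_intensive": 250}

def set_power_limit(gpu_index: int, workload: str) -> None:
    """Cap a GPU's power draw via nvidia-smi before starting a workload
    (typically requires root privileges)."""
    watts = POWER_LIMITS[workload]
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
        check=True,
    )

# Example: cap GPU 0 for a memory-intensive job.
set_power_limit(0, "memory_intensive")
```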
Interesting stuff though…
Thanks weynand, I look forward to your response.
Any updates on GPU integration? Tesla M40 24GB cards are cheap and abundant on eBay. Curious if they’d be a good fit.
They are from 2015, so probably not.