The role of the CPU in farming rewards

@maxux42 did a lot of amazing work creating a tool to benchmark CPUs for our use case, and the first result suggests there may be an issue in how we measure CPU capacity: at this stage, we only look at memory and the number of (virtual) CPU cores.

As you can see below, modern CPUs are much more powerful than older ones. If we want to help the planet, we will have to take this into consideration.

prod-01 (E5-2660 v4, 56 threads)
[+] single thread score: 504,863
[+] multi-threads score: 21,266,970

Lee AMD Ryzen 5950x (32 threads)
[+] single thread score: 818,563
[+] multi-threads score: 22,596,528

Jan AMD Ryzen 9 3950X (32 threads)
[+] single thread score: 772,981
[+] multi-threads score: 21,619,355

my home server (E5-2630 v3, 32 threads)
[+] single thread score: 395,595
[+] multi-threads score: 11,914,274

home old server (L5640, 24 threads, 12 years old)
[+] single thread score: 265,695
[+] multi-threads score: 4,770,021

Xeon D-1541 (16 threads)
[+] single thread score: 392,336
[+] multi-threads score: 5,924,587

my laptop (i7-8750H, 12 threads)
[+] single thread score: 573,909
[+] multi-threads score: 5,515,932

Help us in the topic "Who wants to help finalize specs for TFGrid v4.0".


Would TFT potentially do some sort of benchmarking to reward newer/more powerful CPUs better?


It would probably be easier to just detect the CPU and reference an existing benchmark, since a lot of benchmarking has already been done externally by various resources. The good news is I think most of the grid would show decent CPUs installed. I don’t use anything below an E5-2650 v2.

True, it would be easier.
I would be happy, as I’m building a new machine with a Ryzen 5 PRO 4650G (got a good price for it).

Personally, I believe there needs to be a link between CPU capabilities and rewards; this needs to become part of our spec for TFGrid 4.0.


Hello :slight_smile:

Yep, I’ve put some work into rethinking how the current capacity computation and reward system works.
ThreeFold’s main goal and philosophy is fairness and equity, to make a better world for everybody.

The current way we compute, reward, and show grid capacity relies only on raw numbers (number of cores, GBs of storage, …).
This way of thinking and computing was fine as a first step, to get the grid alive. It’s now time to make it more robust and more fair.

We need to keep a couple of things in mind:

  • It’s important to reuse old hardware, rather than throw it away, if it still works
  • In this always-evolving world, some applications need high-powered machines, so we need good hardware too

But right now, it’s not fair to pay the same reward for a 12-year-old 24-thread CPU and a recent 24-thread CPU, which is far more powerful. The same is true for SSDs: an old SATA SSD at 300 MB/s and a new PCIe Gen4 NVMe at 7 GB/s should not earn the same reward, since they don’t cost the same at all.

In addition, the price billed for hardware should be scaled as well: it’s not fair to be billed the same price for a recent high-powered machine and for an old machine. This schema would also have another benefit: paying less to host a workload on old hardware if that hardware fits the needs. Overkill is definitely not good for anyone.
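As a rough illustration (this is only a sketch, not part of any TFGrid spec), scaling both the reward and the billed price could be as simple as multiplying a base amount by the node’s benchmark score relative to a reference node. The function name and reference score below are assumptions:

```python
# Sketch only: scale a base reward or price by benchmark score.
# REFERENCE_SCORE and the linear scaling are assumptions, not TFGrid policy.
REFERENCE_SCORE = 500_000  # roughly a mid-range modern single-thread score

def scaled_amount(base: float, node_score: float) -> float:
    """Return the base reward/price multiplied by relative performance."""
    return base * (node_score / REFERENCE_SCORE)

# A node twice as fast as the reference earns (and is billed) twice as much:
print(scaled_amount(100.0, 1_000_000))  # 200.0
```

Whether the scaling should really be linear, or flattened at the extremes, is exactly the kind of question a TFGrid 4.0 spec discussion would need to settle.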

I’ve put some work into finding a way to compute a score based on performance, one that cannot be faked.
The score is based on a problem-to-solve mechanism: the faster you can solve the problem, the more powerful you are. You can’t fake this mechanism, since you cannot be faster than you actually are, and the result can be verified. Relying on existing benchmark scores without any proof-of-capacity is an open door to faked results: without proof of the power behind the score, you could earn rewards for hardware you don’t really have.
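To sketch the idea (this is not @maxux42’s actual tool, just a minimal hypothetical illustration): a chained-hash loop is inherently sequential, so the iteration count reached in a fixed time window reflects real single-thread speed, and the final digest lets a verifier replay the chain to check the claimed count:

```python
import hashlib
import time

def single_thread_score(duration: float = 0.5) -> tuple[int, bytes]:
    """Count chained SHA-256 iterations completed in `duration` seconds.
    Chaining makes the work strictly sequential (no shortcut, no parallel
    speedup), and the (count, final digest) pair can be re-verified."""
    digest = b"\x00" * 32
    iterations = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        digest = hashlib.sha256(digest).digest()
        iterations += 1
    return iterations, digest

def verify(iterations: int, claimed: bytes) -> bool:
    """Replay the chain to confirm the claimed (count, digest) result."""
    digest = b"\x00" * 32
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest == claimed

count, final = single_thread_score(0.2)
print(f"[+] single thread score: {count:,}")
```

A faster CPU reaches a higher count in the same window, and a node claiming a count it never reached would fail the replay check.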

Here are some numbers to illustrate the large difference between old and recent CPUs:

2x E5-2660 v4, 56 threads (2016)
[+] single thread score: 504,863
[+] multi-threads score: 21,266,970

1x AMD Ryzen 5950x, 32 threads (2020)
[+] single thread score: 818,563
[+] multi-threads score: 22,596,528

1x Ryzen 9 3950X, 32 threads (2019)
[+] single thread score: 772,981
[+] multi-threads score: 21,619,355

2x E5-2630 v3, 32 threads (2014)
[+] single thread score: 395,595
[+] multi-threads score: 11,914,274

2x L5640, 24 threads (2010)
[+] single thread score: 265,695
[+] multi-threads score: 4,770,021

1x Xeon D-1541, 16 threads (2015)
[+] single thread score: 392,336
[+] multi-threads score: 5,924,587

1x i7-8750H, 12 threads (2018)
[+] single thread score: 573,909
[+] multi-threads score: 5,515,932

As you can see, a 12-year-old CPU with 24 threads scores lower than a more recent 16-thread CPU; it would be unfair to reward the old 24-thread CPU better than the 16-thread one.

At the same time, we will be able to show node scores for CPU, storage, etc. in the explorer, to make node selection easier and better matched to the target application.

Some people reported that the difference between DDR3 and DDR4 should be taken into account as well. The major issue there is the difficulty of proving memory capacity and performance, since there is no way to certify memory speed with enough accuracy. But in my opinion it’s fair to say that CPU and memory are roughly equivalent in performance (proportionally), because you can’t mix an old/new CPU with new/old memory modules.


My 2 cents: I’m one of those who initially followed the instructions and YouTube tutorials to build my own DIY nodes. I’m very much in favour of the above; it makes sense to me. But only if current rewards are the baseline, going up from there. You can’t suddenly punish early adopters for using slightly older hardware; rather, reward newer and better specs more.


For sure it’s unfair to punish early adopters; we will need to discuss how to find a good balance between new and old rewards without impacting existing users.


Wow, really love to see the work on capacity verification (not just CPU but also disks and memory) lately by @maxux42, and now this excellent post on CPU performance :slight_smile:

I think the insight of memory and CPU speeds typically being paired is really valuable. Since old SSDs on the other hand can be attached to modern CPUs, I wonder to what extent SSD I/O becomes a bottleneck that should also limit rewards given for CPU speed?

Having performance metrics available in the explorer to help with selecting the right node for a given workload is something that the farming community has been asking about. This will be great to see.

For sure, I think existing farmers have given a big vote of confidence and should continue earning at their expected rate. When there’s more utilization and corresponding data, we can begin to make determinations about whether there is demand for certain categories of hardware at certain price points. We can revisit the question of adjusting rewards down for the bottom bracket of performance, hopefully after the token price has increased and everyone has hit their ROI.

Charging more for fast capacity and paying more in rewards fits in the current model. There will be more tokens burned to use it, so there’s no problem in minting more tokens, at least long term.

Finally, the model can be tuned around some example configurations and ROI expectations. The goal would be to bring better balance to money invested vs tokens earned for hardware with different performance levels. I’m not sure if it makes sense for this to be essentially flat, or skewed to either direction. Requiring certain performance levels for Gold Certified would ensure there’s an incentive to build out with newer hardware while also encouraging farmers who buy newer hardware to put it in a data center environment.

I’m really happy to see there is (finally :slight_smile:) a topic on this, and I would also like to support grid 4 in this.

I really agree that we should differentiate old and newer CPUs; this was also my intention with the “ddr3 vs ddr4” topic.

I really see a problem with farming right now: there is no logical reason to put up good enterprise hardware. Why get something like Gold Certified? It will result in much higher power usage, because being Gold Certified / having good CPUs means good hardware = more use.
I really get the feeling people are putting up the worst hardware possible so they have the lowest power cost at home. Which is, in my opinion, very logical, but making rewards fair like this will kill that behaviour.
This may sound a bit out of line, but I think this is the truth right now: there is no reason to put up better hardware, since a machine from 2012, or a CPU like @maxux42 describes, earns nothing less than a CPU from 2022.

But I don’t think it’s a bad thing that we started like this, as @maxux42 also says. Now is the time to make different classes, because in the server market no two servers are identical, and for that reason I think you can’t pay everyone the same for different hardware classes.

I think you should split into a couple of classes and speeds, also regarding down/up bandwidth: different locations and performance, different rewards.

As bad hardware as possible to save on energy? I just upgraded three servers to newer ones so they use less power.

I don’t think you understand what I mean. Indeed, newer servers will use less power, but:

Why buy the best servers? Like you said in the other topic, they will be used first, and thus the most.

This will lead to servers drawing 500-1000 watts, if not more. Here in Europe that’s a lot of money.

So why not put up v1 DDR3 servers, cash in, and see no use in the coming months, while the people who paid the higher price and have the most “use” on their servers pay for all the electricity? Meanwhile the “used” rewards are only 400 TFT max or so per month.

I don’t want to say this is anyone’s intention, but I want you to understand that for now it is more logical to put up the worst hardware possible, get little to no use on the servers, and have low electricity bills as well.

These incentives face each other a little bit the wrong way, if you ask me.

It’s just a double punch:

  • Hardware isn’t cheap
  • More use means more power draw and higher electricity costs

On the other hand:

  • Cheap hardware
  • Less use

So it’s a double win there.

Either save a few watts by buying newer servers, maybe 50 at most (how much did you save, @FLnelson?), and have in-use servers drawing:

  • 300-1000 watts

Or get older servers that draw 50 watts more and see less use.

Differentiate nodes with different performance and newer hardware.

Also, in a DC you usually have to pay extra for internet usage, which is another cost when utilization is high.

There needs to be a system to prevent this thinking, and I think the best and simplest way is just to reward them differently.

What you’re talking about is saving 50-100 watts, while in-use hardware will consume maybe 4-8x as much.

I agree with what you’re saying @teisie. The current incentive scheme is in fact inverted. Two things will help with this. One is rewarding higher performing hardware at a higher base farming rate. The other is providing a reward for utilization (whether that’s an earning boost or unlocking some tokens which get locked as originally planned).

Boys, boys, hold up! I thought we were working on creating the “people’s internet… owned by humanity”. Yes, new hardware is always better, but let’s not fall into the consumer-society trap.
I’ll cut things short. My opinion is that these mega companies develop CPUs and almost every year a new generation comes out with more bells and whistles. It does not necessarily do a better job, or maybe it does, but does it need to?
For example, I’m using E5-2670 v2 and 2680 v2 CPUs. Looking at the specifications, their performance is on par with the E5-2630 and 2640 v4 (the v2s are from 2014, the v4s from 2016), and hardware utilizing the v4s is a lot more expensive. I’m not even mentioning new hardware from last year. DDR3 ECC is perfectly fine and will stay fine. This is enterprise hardware that we are repurposing, designed to be “bulletproof”; these servers and workstations are mules.

Now what I’m trying to say is: if the project cuts regular people worldwide (John, Amir, Tyron, Amy, Mark, Hans, etc.) who want to contribute and earn a few bucks a month from their unused 4-core/8-thread mini PC or desktop out of the equation, will the network be truly decentralized and owned by the people? Or will some major player see the opportunity and take over the project in some way with large centralized server farms?

I don’t know if this is possible I’m just saying.

I agree, v3 and v4 seem to offer nothing over v2 except being more expensive and requiring more expensive RAM.

But you can’t ignore the huge improvements that have been made since those CPUs. The current Intel 8-core consumer CPU benchmarks about 70% higher than the 8-core Xeon v2. I’m assuming that means you could reserve fewer cores for the same performance.

That being said, I have zero complaints with the light, average-person workloads I have running on v2. But people really using those cores may want the better CPU.

I don’t want to encourage people to (re)buy better hardware; I propose a fairer reward system.

If benchmark performance doesn’t change much between a v2 and a v4 CPU, there is no need to buy the more expensive v4: the reward would be based on real performance, not on the price you paid for the CPU. If a CPU is twice as expensive but can support twice the workload, I don’t see any problem with the reward being doubled. That sounds fair, doesn’t it?

The goal is to provide the right hardware for the right workload (targeting hardware based on needs).


I would like to add my 2 cents. At the moment I am running one node built with an AMD 3950X, 128 GB of RAM, and a 2 TB NVMe. Monthly earnings are estimated at 1063 TFT.

I was thinking of upgrading the RAM to 256 GB and the SSD capacity to 3.2 TB (the same 2 TB NVMe plus a 1.2 TB drive, not sure which type) to maximize the TFT I can earn for this type of CPU. That would be about 2083 TFT, a thousand more!

It is indeed a newer CPU, which is faster and uses less power. But being consumer grade, it is limited to 128 GB of RAM. So there goes my idea of maximizing the amount of TFT for those cores/threads.

Luckily, I have an older DL380p G8 with 2x E5-2690, which comes to the same number of cores/threads, but with it I can actually hit that sweet spot of 32 vCPU, 256 GB RAM, and 3200 GB SSD.
The build cost is lower for the server, but it will use more power. I’m not sure how this will balance out over time, but maybe I can run some tests.

Of course, in my scenario I am comparing consumer grade with server grade.

Very interesting comparison. Let us know if you’re able to get some data. I’d be curious to see how the power consumption and CPU benchmark of each system stacks up.

My 2 cents: the wheel is already invented. Just take any cloud provider as an example:
Tier 1 - regular performance
Tier 2 - high computation
and so on…
Tier 1 for the current rewards, based on a minimum PassMark score from 1000 points up to XXXX
Tier 2 for higher rewards, from XXXX and up
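A minimal sketch of that tier split (the Tier 2 boundary below is a placeholder, since the “XXXX” above is unspecified; only the 1000-point minimum comes from the post):

```python
# Sketch: map a PassMark-style score to a reward tier.
# MINIMUM_SCORE comes from the post; TIER_2_THRESHOLD is a made-up placeholder.
MINIMUM_SCORE = 1_000
TIER_2_THRESHOLD = 10_000  # stands in for the unspecified "XXXX" boundary

def reward_tier(passmark_score: int) -> int:
    if passmark_score < MINIMUM_SCORE:
        return 0  # below minimum: no rewards
    if passmark_score < TIER_2_THRESHOLD:
        return 1  # regular performance, current reward rate
    return 2      # high computation, higher reward rate
```

More tiers could be added the same way, by extending the list of thresholds.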