Yeah, I raised an eyebrow at this initially too. “New” nodes are node IDs that were created during the week the report covers. Nodes that were created earlier, taken down, and then brought back online are not counted in this figure.
That said, the active set is still only 8 nodes larger than the week before. This suggests that in any given week a significant number of nodes become inactive, so the new-node figure understates the actual churn.
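To make that accounting concrete, here's a minimal sketch of how the reported figures relate to the churn I'm describing. The function name and data shapes are my own assumptions; all I'm really assuming is that we have weekly snapshots of active node IDs.

```python
# Minimal churn accounting from weekly snapshots of active node IDs.
# Assumption: `ever_seen` contains every ID observed in any earlier week,
# so the previous week's active set is a subset of it.

def churn_breakdown(active_prev: set[str], active_curr: set[str],
                    ever_seen: set[str]) -> tuple[int, int, int, int]:
    new = active_curr - ever_seen                        # brand-new IDs (the report's "new" figure)
    rejoined = (active_curr - active_prev) & ever_seen   # old IDs that came back online
    departed = active_prev - active_curr                 # IDs that went inactive this week
    net_change = len(active_curr) - len(active_prev)
    # By construction: net_change == len(new) + len(rejoined) - len(departed),
    # so a small net change can hide a lot of two-way churn.
    return len(new), len(rejoined), len(departed), net_change
```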
So the conclusion is that last week a number of high-capacity nodes rejoined the network with existing node IDs, while an approximately equal number of lower-capacity nodes became inactive. I estimate the churn was around 80 nodes, which implies an average difference of 25 cores between nodes in the first group and nodes in the second group (80 × 25 = 2,000 cores in total).
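Spelling out the back-of-envelope arithmetic (the churn figure is my rough estimate; the 2,000-core figure is the observed total from above):

```python
churn_estimate = 80    # my rough guess at two-way churn, not a reported figure
core_delta = 2000      # observed net core increase between the two weeks

# Average core advantage of each rejoining node over the node it replaced
avg_core_gap = core_delta / churn_estimate   # -> 25.0
```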
I want to return to the question of node churn and try to get a better picture of what's happening. Are these mostly nodes that join the network for a short time and then disappear? Or are they nodes that stick around but regularly go down for a week or more at a time? And to the extent that it's the latter, why?
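If per-node activity histories were available (one entry per week since the ID first appeared), a rough classification along those lines might look like this. The data shape and thresholds are assumptions on my part, not anything the report provides.

```python
def classify(history: list[bool]) -> str:
    """Classify a node's churn pattern from its weekly activity history.

    `history` holds one boolean per week since the node ID first appeared
    (True if the node was active that week); assumes at least one week.
    The thresholds below are arbitrary guesses, chosen only to illustrate
    the distinction I'm asking about.
    """
    lifetime = len(history)
    inactive_weeks = history.count(False)
    if lifetime <= 2 and not history[-1]:
        return "transient"      # appeared briefly, then vanished
    if inactive_weeks >= 2 and history[-1]:
        return "intermittent"   # long-lived, but goes dark for weeks at a time
    return "stable"             # active nearly every week
```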
One final note I’ll make here is that I don’t attempt to account for nodes that rejoin the network with a new node ID after their disks are wiped. My assumption is that this is rare, but in a week where only 5 new node IDs are registered, even a single such case would account for 20% of the figure.