Roll back main-net release or not (SUPER URGENT) [Closed]

We voted to upgrade mainnet to 3.6, which happened;
see Release of TFGrid 3.6 (CLOSED)
There is an issue: about 200 of roughly 2,500 nodes won’t come back online because their virtualization capabilities are not enabled in the BIOS, which means those nodes cannot be used for production VM workloads, a requirement for our grid.
We suggest we go on with the release. The current minting logic pays farmers proportionate to their uptime, so affected nodes will not receive tokens for the period between the update and the moment the BIOS settings are properly configured. As an example, if it takes 1 day to configure this, you will get about 3% fewer tokens for an affected node.
Concretely: we keep the release on main-net, and the 200 affected nodes stay off until they are configured properly.
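For reference, the uptime-proportional penalty is simple arithmetic. A minimal sketch, assuming a roughly 30-day minting period (the exact period and rules live in the minting code, not here):

```python
def offline_penalty(days_offline: float, period_days: float = 30.0) -> float:
    """Fraction of a period's tokens lost when minting is proportional to uptime."""
    return days_offline / period_days

# One day spent fixing BIOS settings out of a ~30-day period:
print(f"{offline_penalty(1):.1%}")  # 3.3%, i.e. the "about 3%" quoted above
```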

We are asking approval for this.


  • OK to keep main-net on 3.6, which means 200 nodes down until reconfigured
  • Not OK, roll back to previous release or alternative



Although 3 of my 5 nodes are down, I think it makes no sense to roll back because of 200 nodes!


Would be appreciated!


I would say we revert only this fix, since I think calculating the minting as suggested would be much harder and prone to subtle errors.

If we roll back the fix, then we keep having 200 nodes that cannot be used. I am in favor of a gentle push to configure the nodes properly, because it’s going to be like this on the next upgrade anyhow, but that’s just me.


No, you are right. That was my opinion from the start, actually. I was against the revert, but I realized the other minting work that would need to be done is going to be much larger and probably far more complex.

Please everyone, note that we changed some content in the GEP: there will be no grace period, because the team tells me it’s too dangerous to change the minting code. So if we don’t roll back, there will be days for which affected farmers will not receive their TFT.

I absolutely, firmly disagree with handicapping the ability of 2000+ nodes to benefit from the improvements of this rollout because 200 nodes didn’t have virtualization turned on, on machines whose sole purpose is running virtual machines.

If this were something less basic, or if it were certified nodes that were affected, I would feel differently. A DIY node is exactly that: DIY. If we’re going to compensate farmers simply because there wasn’t a post saying this was needed, when it would be needed to run any type of VM at all, then do I get compensation because there was zero documentation saying devices requiring drivers are incompatible? I would never expect it, but is this the standard we’re setting?

There’s a reason Titans and Pekings cost more than the bin of parts: they come pre-assembled and are guaranteed to work.


About 1/3 of the nodes that went offline with the update have already come back. Some farmers noticed other issues, like dead BIOS batteries, that prevented their nodes from booting, so the original figure likely overstates how many nodes actually have virtualization disabled.

Hmm, probably should have had turning on virtualization in that Lenovo vid I did. Whoops.


Yeah, my first node, a Lenovo from your video, had this problem :smile: but it is solved!

Setting Virtualization in BIOS Settings:
To turn it on (Enabled), look under the System Security menu; the exact location varies by vendor.

Example for an HP EliteDesk:

  1. Press ESC key at the beginning of the HP welcome screen.
  2. Select the F10 setup option and enter the BIOS.
  3. Enable the virtualization options in the Security menu.
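If you can boot the machine into Linux before deploying it, you can sanity-check virtualization from userspace. A minimal sketch (not part of Zos): the CPU flags in /proc/cpuinfo show whether the processor supports VT-x/AMD-V, and the presence of /dev/kvm indicates the kernel could actually load KVM, which requires the feature to be enabled in the BIOS.

```python
import os

def has_virt_flags(cpuinfo_text: str) -> bool:
    """True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm) in its flags."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

# On Linux, inspect the live CPU flags (support) and /dev/kvm (enabled):
if os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        supported = has_virt_flags(f.read())
    enabled = os.path.exists("/dev/kvm")  # KVM loaded => enabled in firmware
    print(f"CPU supports virtualization: {supported}, usable now: {enabled}")
```

If the flags are present but /dev/kvm is missing, the BIOS toggle is the usual culprit.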

Glad I spent 3 hours until 1 am getting my nodes fixed, then! Not ideal, as I was driving 9 hours today for a 3-week holiday, but anything for the project!

Personally I would like to see some kind of manual compensation for those on leave and not able to change their nodes. Virtualization wasn’t in any of the config guides, and the way this update was handled didn’t meet proper business criteria.

Not a criticism, don’t get me wrong…we’re all learning here. But maybe some kind of SLA?


Why on god’s green earth would you not have virtualization enabled when contributing resources to a virtual environment/cloud, i.e. the 3fold grid? I find it hard to believe this is only now a requirement. Why were rewards being paid out to people whose hardware couldn’t run virtualization? I am sure I am missing something here, it’s been a long two weeks and I’m super tired, but this seems cray-cray.

My Lenovo went offline because of this issue. Fortunately, the node’s console said to enable virtualization. I changed that and it is back to normal.


Only seeing this reply now. Not everybody is technical enough to understand that. It’s not mentioned in any setup guide, and Zos accepted the systems without virtualization switched on.

Personally I had some issues with two of my Z840s and had to reset the bios… I simply forgot to switch virtualization back on.

Can we know how many 3nodes are left without virtualization out of the 200?

@scott perhaps?


There’s no simple way to find this out that I’m aware of. Zos doesn’t post the virtualization status to TF Chain or report it over RMB. One approach is to look for nodes that were reporting uptime before the change but haven’t sent a report since.

I tried this, looking for all nodes that sent their last update within the one-day period before the change was released. The figure returned was ~115. Of course, this could also include nodes that went offline for any other reason during that window and never came back, or that were brought back online under a new node id. I also now think the number of nodes that went offline for this reason was closer to 300.
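The heuristic above can be sketched as a filter over node records with a last-seen timestamp. The field names and dates here are hypothetical, just to show the shape of the query, not the actual API or release time:

```python
from datetime import datetime, timedelta

def silent_since_release(nodes, release_time, window=timedelta(days=1)):
    """Node ids whose last uptime report landed in `window` before the release
    and that never reported afterwards: a rough proxy for nodes the update took
    offline. It also catches nodes that died for unrelated reasons in that window."""
    return [
        n["id"]
        for n in nodes
        if release_time - window <= n["last_report"] < release_time
    ]

release = datetime(2022, 8, 10)  # placeholder, not the actual release time
nodes = [
    {"id": 1, "last_report": datetime(2022, 8, 9, 12)},  # went silent pre-release
    {"id": 2, "last_report": datetime(2022, 8, 12, 3)},  # still reporting
    {"id": 3, "last_report": datetime(2022, 8, 1)},      # offline long before
]
print(silent_since_release(nodes, release))  # → [1]
```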

Experience suggests that it sometimes takes a couple of minting cycles for farmers to notice their nodes are offline, unfortunately. I expect we’ll see some questions relating to this when August minting is completed.


As of today, the number is down to 109, so that’s encouraging :slight_smile:
