Using the Grid as a Supercomputer

I've been thinking about supercomputers and the best in the world, because one is currently being built nearby in Vienna.

Looking at our combined number of cores, RAM, and so on, all together we would rank among the best supercomputers in the world. :stuck_out_tongue_winking_eye: :boom: :boom: :100:

So my question and idea is: would it be possible to use these resources like a supercomputer, or like the SETI distributed computing project?

If so, it would be nice to have something built in that lets the grid support good science projects with the power of these mostly idle nodes.

This could work, for example, by letting each farmer choose per node whether that node may participate in this "supercomputer grid". Participating means using all idle, non-reserved resources for scientific computation.

Bonus: if such a workload runs on the server, the farmer earns more tokens.

Which science projects get the full power of this "supercomputer grid" could be decided by monthly or yearly votes, or something similar.

Example projects:
SETI, climate research, health, astrophysics, solutions for world problems, or feeding our own TF A.I. xD

Maybe this "supercomputer grid" feature could also be rented out to token-paying customers like corporations.

There are so many possibilities.


I certainly wouldn't mind having SETI or Folding@home running on some percentage of my nodes' available capacity. Then again, I'm fortunate to have inexpensive electricity.

It could be set up as a kind of "volunteer" workload that gets replaced whenever paying customers want to use the node. Cool idea, but it would require devoting some scarce dev resources to make it happen.

As for paying out more tokens, that would be controversial, since it increases total supply. Then again, if we're the ones to find the extraterrestrials through such an initiative, I think the PR investment would be well worth it :stuck_out_tongue:

This would be cool for sure; I've been with SETI since the pre-BOINC era: 2004 (yikes, just checked my account there). I switched to World Community Grid after they stopped sending out workloads in 2020.

Besides being cool, there might really be a market for workloads with high CPU demand. Isn't that what the FLUX or Render Network does? Accepting workloads (the latter specialized in GPU workloads) without the need to configure a VM server?

Suppose someone (a company, a university project, …) needs, let's say, 5,000 CPU cores for one month to do some calculation or simulation work. They don't need their own rack full of servers; they just want their stuff calculated.
So if we had something similar to the BOINC client running on every 3Node accepting workloads, we could offer that amount of computing power through a single connection point. Sure, there would be requirements on how workloads are accepted, but anyone who needs that much CPU power knows how to split up their workloads.
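To make the idea concrete, here is a minimal sketch of that BOINC-style flow: a coordinator splits a job into independent work units, idle nodes fetch a unit, compute it (possibly offline), and submit the result when they reconnect. All names here are hypothetical for illustration, not an existing ThreeFold or BOINC API, and the "science workload" is just summing numbers as a stand-in.

```python
from dataclasses import dataclass, field


@dataclass
class Coordinator:
    """Hands out independent work units and collects results."""
    units: dict = field(default_factory=dict)    # unit_id -> payload
    results: dict = field(default_factory=dict)  # unit_id -> result

    def split_job(self, data, unit_size):
        # Split the input into fixed-size, independent work units.
        for i in range(0, len(data), unit_size):
            self.units[i // unit_size] = data[i:i + unit_size]

    def fetch_unit(self):
        # Hand out any unit with no result yet; since nodes may go
        # offline, unfinished units can simply be re-issued later.
        for uid, payload in self.units.items():
            if uid not in self.results:
                return uid, payload
        return None

    def submit(self, uid, result):
        self.results[uid] = result

    def done(self):
        return len(self.results) == len(self.units)


def node_worker(coordinator):
    """One idle node: fetch a unit, compute it, submit the result."""
    task = coordinator.fetch_unit()
    if task is None:
        return False
    uid, payload = task
    coordinator.submit(uid, sum(payload))  # stand-in computation
    return True


if __name__ == "__main__":
    c = Coordinator()
    c.split_job(list(range(100)), unit_size=10)
    while not c.done():
        node_worker(c)
    print(sum(c.results.values()))  # aggregate over all units: 4950
```

Because every unit is independent, a node that disappears mid-computation costs nothing but a re-issued unit, which is exactly why this model tolerates unreliable nodes so well.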

That might address the point @aernoud was talking about some days ago: "Possibility to deploy workloads on the grid."

I can imagine this is nothing that can be done short term. But it might also suit farms with unstable power and/or slow internet connections: if you download your part of the workload, get it done, and submit the results, it doesn't matter if you're offline for an hour during the night or your upload speed is slow.


Yeah, exactly, you are thinking in the same direction as I am.

Clearly this would take a lot of effort and development. But maybe it could become possible someday.

And I think there is a solution for the tokenomics too, though it would have to be calculated. Maybe it could run on a voluntary basis, and additionally, if a university or other scientific organization buys, say, 10,000 cores for a week, they would buy tokens for it, so it balances out.

It would just be so sad if this possibility went unused and thousands of servers idled around for nothing. That's not the idea of the project, I think; isn't part of the goal also to save energy? :wink: