TFT token lock proposals [Closed]

Totally agree! I also thought the workloads would be distributed over the network and be self-healing. In my opinion, this would be a unique selling point of TF if it could be applied.

This has been a hot topic in the community lately, and I’ve been meaning to write a post about where we are and what’s possible. I don’t blame you for assuming that features like this already exist in the ThreeFold architecture. Older versions of the wiki especially were a little fuzzy about the current state of the tech versus what’s on the roadmap.

It turns out that building the foundation to enable autonomous and self healing applications took a while, but I think we’re now in a very solid position to talk about how to move these features forward.

The short answer is that systems with these properties are tricky to build. Removing one single point of failure tends to create another one upstream. Say you have a server hosting a website. You add another server for redundancy. Now you need a component (load balancer) that knows about both servers and routes traffic to and from them. But then you need a redundant load balancer. At some point, you run into the fact that your site’s domain is linked to one or more IP addresses that belong to components that could fail. Some cloud providers offer services to dynamically reassign IPs in case of such failures, or you could try a service offering high speed DNS failover.
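
To make that failure chain concrete, here is a minimal sketch (hypothetical IPs, and a stubbed-out DNS update, since every provider has its own API) of the health-check loop a DNS failover service runs behind the scenes:

```python
import socket
import time

# Hypothetical redundant backends; in practice these would be the
# public IPs of your (already redundant) load balancers.
BACKENDS = ["203.0.113.10", "203.0.113.20"]
DOMAIN = "example.com"

def is_healthy(ip: str, port: int = 80, timeout: float = 2.0) -> bool:
    """Treat a backend as healthy if it accepts a TCP connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def update_dns(domain: str, ips: list[str]) -> None:
    """Stub: a real service would call its DNS provider's API here,
    and would need very low TTLs (or anycast) for fast failover."""
    print(f"{domain} -> A records {ips}")

while True:
    healthy = [ip for ip in BACKENDS if is_healthy(ip)]
    if healthy:  # never publish an empty answer; keep old records instead
        update_dns(DOMAIN, healthy)
    time.sleep(10)
```

Note that this loop itself is now a component that can fail, which is exactly the "another one upstream" problem.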

To be able to offer a complete alternative, we still need a minimum of highly available systems, probably in a datacenter or equivalent environment, to at least handle public network access. Once traffic is inside the grid, designing architectures that offer fault tolerance using a collection of nodes run by newbies on salvaged hardware in their garages is totally possible.

That said, it’s a particular paradigm to design in this way. Many cloud users just want to spin up a VM, install some software, and let it run. This VM has volatile state in the RAM which will never be recoverable in the event of a sudden power outage, and non-volatile state on disk which could be synced elsewhere and recovered (using QSFS for example). Migrating this VM to another node and restoring its operation somewhat gracefully should be possible, but it won’t be seamless.
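
As a rough illustration, and with every helper below being a hypothetical stub rather than an actual ThreeFold API, a naive migrate-and-restore flow could look like this. Note that the RAM state is simply gone; the VM effectively reboots on the new node:

```python
import time

# All of these helpers are hypothetical stubs, not a real grid SDK.
def node_is_alive(node_id: int) -> bool:
    """Stub: poll the node's health (e.g. via an explorer/proxy API)."""
    return node_id != 42            # pretend node 42 just went dark

def pick_spare_node(exclude: int) -> int:
    """Stub: select a replacement node with enough free capacity."""
    return 7

def redeploy_vm(node_id: int, qsfs_volume: str) -> None:
    """Stub: boot the VM image on the new node and attach the
    QSFS-backed volume holding the synced disk state. RAM state is
    gone for good; the VM effectively reboots."""
    print(f"VM restored on node {node_id} from {qsfs_volume}")

current_node, volume = 42, "qsfs://my-vm-disk"
while node_is_alive(current_node):
    time.sleep(30)                  # ordinary health-check polling
current_node = pick_spare_node(exclude=current_node)
redeploy_vm(current_node, volume)   # recovered, but not seamless
```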

The grid is the only decentralized cloud network I know of that actually gives users this capability. Flux and Akash only support the execution of containers, which are not expected to retain state. Akash now has some limited persistent data support, but in general, these environments expect users to take responsibility for storing data safely elsewhere. Solutions must be designed to be containerized, and if they ingest data, they must also connect to a storage service that can handle concurrent connections.

Zero OS has native container support (actually micro VMs) and, with some more work, can provide self-healing redundancy for containerized workloads alongside self-healing storage capabilities on the same network. TF Chain ultimately acts as the backstop in the “failure chain”, as a decentralized, highly available, replicated database to coordinate node activities.

So why isn’t this top priority? Well, for one, it’s already possible to run containerized workloads in a redundant manner on the grid using Kubernetes, if you know what you’re doing. Secondly, a network that allows me to provision a VM and start hacking at the command line, perhaps following any of countless tutorials online that start with “find yourself an Ubuntu VPS”, is way more interesting than a network that only supports containers, even given the single point of failure.
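
For reference, “redundant via Kubernetes” mostly means running workloads as multi-replica Deployments, so the cluster reschedules pods away from a dead worker node. Here is a minimal sketch using the official Kubernetes Python client, assuming you already have a kubeconfig for a cluster deployed on the grid (names and image are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()  # reads your local kubeconfig

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=3,  # multiple replicas; the cluster reschedules them on failure
    selector=client.V1LabelSelector(match_labels={"app": "web"}),
    template=template,
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=spec,
)
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```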

I think the world generally agrees, because we have seen way more interest in using the grid since the generic VM feature was introduced to expand the offerings beyond Kubernetes VMs and native containers. That is to say, I think the course of development so far makes sense in helping to grow grid utilization and our community and in getting broader testing for core features. There are plenty of fun and useful things you can do with capacity that doesn’t have 99.9% uptime (dev and test workloads are actually a big market). For the rest, there are gold certified nodes and Kubernetes, for now.

Thanks all for responding to my message.

First, for those interested, I’m in Sri Lanka.

Let me respond specifically to some of the responses I see:

I had already asked the question about uptime and bandwidth requirements that would not be possible against Western standards, and I was told this would never be penalised.

The earnings of the farmers are not my point here. Offering a relevant service to paying consumers (the buyers of capacity) is my point (see further). If consumers start buying the TF product, all will be well for the farmers.

I don’t blame you for assuming that features like this already exist in the ThreeFold architecture. Older versions of the wiki especially were a little fuzzy about the current state of the tech versus what’s on the roadmap

This really makes me wonder how this info came into the old wikis in the first place. If reality still has to catch up with the dreams, then your statement “It turns out that building the foundation to enable autonomous and self healing applications took a while” is somewhat of an understatement, as the ‘self-healing / autonomous’ properties of the Grid were presented over 5 years ago as more or less a current reality. I can live with that statement only if the dream is still alive.

“The short answer is that systems with these properties are tricky to build.”

Ehm, yes… But if you want to compete with established (always innovating) centralised cloud providers AND you want ‘to bring the Internet to those currently excluded in the Third World’ AND you want to serve communities with low levels of IT expertise AND offer paid-for services with no one responsible when things go wrong, you really have no choice but to offer workloads absolute immunity against disappearing nodes in an autonomous, automagical way.

Say you have a server hosting a website. You add another server for redundancy. Now you need a component (load balancer) that knows about both servers and routes traffic to and from them. But then you need a redundant load balancer. At some point, you run into the fact that your site’s domain is linked to one or more IP addresses that belong to components that could fail. Some cloud providers offer services to dynamically reassign IPs in case of such failures, or you could try a service offering high speed DNS failover.

My point is not about guaranteeing 100% uptime during WW3. My point is about offering market-conforming service levels for consumers of the grid capacity. I have a totally unimportant website, but 98% uptime means almost 15 hours of downtime per month (2% of a 720-hour month is about 14.4 hours)… So at least dozens of emails from customers asking why my website is down…

98% uptime + a real risk that my deployment is lost + nobody accountable is just not a sellable proposition for any (production) environment. I’m sure there are applications that don’t require high availability, but, hey, we are talking about ‘The New Internet’, aren’t we? Not something worse than what we had three decades ago.

Once traffic is inside the grid, designing architectures that offer fault tolerance using a collection of nodes run by newbies on salvaged hardware in their garages is totally possible.

Realising this would be, for me, an absolute priority to make TF a project that can meet its ambitions. I’m happy to hear this is technically possible. If the service you offer relies on de facto unreliable endpoints (as in, not under your direct control), then ‘the tech’ should provide seamless reliability from the consumer’s point of view.

Zero OS has native container support (actually micro VMs) and can provide self-healing redundancy for containerized workloads alongside self-healing storage capabilities on the same network.

If you mean on the same LAN, then this is perfectly useless. The ‘self-healing’ properties should be Grid-wide (as advertised), solely due to the nature of the threats present in the global farming model.

So why isn’t this top priority? Well, for one, it’s already possible to run containerized workloads in a redundant manner on the grid using Kubernetes, if you know what you’re doing.

Accepting the reality of unreliable nodes (inherent to the farmer model), redundancy becomes an inherent requirement, not a feature. If TF wants to present itself as a distributed AWS (for Third World countries), of course.

When I started my interest in TF, I had my experiences with Folding@Home in mind: a complicated workload running, distributed, on over 4 million computers worldwide. As the workload just runs as an app, nodes constantly turn on and off. Yet the workload persists, forming an exascale supercomputer that ranks as the 7th fastest supercomputer in the world.

Something like this I had in mind for TF, but then as an exascale datacenter (but as said, I’m stupid).

Secondly, a network that allows me to provision a VM and start hacking at the command line, perhaps following any of countless tutorials online that start with “find yourself an Ubuntu VPS”, is way more interesting than a network that only supports containers, even given the single point of failure.

For paying consumers planning to deploy real-world workloads on the Grid?

I think the world generally agrees, because we have seen way more interest in using the grid since the generic VM feature was introduced

How many ‘real world’ applications are running on the Grid now? How many are running on consumer-hosted nodes versus nodes hosted in datacenters?

This reminds me of the collaboration with ownCloud, for example. This collaboration was announced a year ago now, promising thousands of users of TF capacity shortly. What happened to this? Is it maybe the case that the current solution TF offers does not meet the MVP requirements of third parties? Does the world really agree?

I don’t think you can say anything about how ‘the World’ thinks about TF yet.

For the rest, there are gold certified nodes and Kubernetes, for now.

Gold certified nodes will essentially have the same problems as non-certified nodes (see my original message). Besides this, the growth of the Grid will be seriously hampered if certification becomes the norm for farmers in order to attract serious workloads (and thus income) to the grid.

Instead of focussing on certification, I would focus on autonomous self-healing. If that is technically possible at all, implementing it will make discussions about certification, token lock-ups, Third World deployment and token value all a thing of the past.

Please understand that I don’t want to sound over-critical. It’s just my contribution to this project’s self-reflection about the need to offer solutions that can compete with centralised alternatives.

For years I’ve been waiting for a proof of concept (a real-world company deploying real-world workloads for which it pays real-world money), but I keep seeing fundamental issues popping up.

The TF project has the potential to offer something better than sliced bread. However, IMHO, the ‘self-healing’ properties (only possible with true distribution) should be re-prioritised as a ‘must-have’ in order to make TF the ‘new internet by and for the people’.

The ‘for now’ at the end of the last sentence of Scott’s message gives hope, so I do hope that the initial promise of TF is still in the works.

With Love,
Aernoud

For some reason I expected a response to my post from @scott and others, but discussions don’t seem to be running very deep on this forum, and people are back to what they were doing.

Could also be that my post is too long for the Twitter / TikTok generation, of course.

Anyway, I will then assume that everyone agrees with what I wrote and that TF will act accordingly 🙂

Hi Aernoud, although I don’t have TikTok or Twitter, in my case it is indeed the length. That is no blame towards you, but I have taken so much on my plate that I have to make choices about which ‘battles’ I pick. As the community grows, more and more information is coming out. I hope TF will still find a moment to respond.

Don’t worry Aernoud, I didn’t forget about your message, and I also don’t have a TikTok account 🙂

I was writing a long reply addressing your points individually. Instead, I’ll just say a few things. Most importantly, we agree that self healing is important for the Grid and this is a good time to work on it. I’ve been thinking a lot about how to make this happen and will write more about that soon.

However, I also want to set proper expectations. Eliminating the need for some amount of dependable high uptime nodes (or at least farms) is possible, but I don’t think it’s possible without some mass adoption of something like Planetary Network and a next generation DNS service. That’s solidly “new internet” territory. Right now, we have a system that’s interoperable with the existing paradigm and can already meet the needs of serious “paying customers”.

Folding@Home and Presearch are actually both examples of networks that use distributed computing that doesn’t need to be highly available, coordinated by centralized servers running in data centers. Folding is sponsored by AWS, and without some high capacity reliable servers, all the compute at home is worthless. Presearch plans to hand its gateway functionality off to node operators, and I expect those operators will require lots of bandwidth and public IPs. I.e., they’ll be best served by nodes in data centers.

I’ll finally mention that I recently found some very interesting work done to make Cosmos ecosystem blockchain nodes run in a high availability configuration consisting of several independent nodes, which share signing responsibilities. This is perfectly well suited to the Grid in its existing state. My point being that while adding features to the Grid is an important part of the equation, so is the independent creation of architectures and paradigms that rule out central points of failure on their own.
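
As a toy model of the idea (approvals are just counted here; the real tooling splits the validator key into threshold Ed25519 shares so that no single machine ever holds the whole key), the availability property looks like this:

```python
# Toy 2-of-3 signing quorum to illustrate shared signing responsibility.
SIGNERS = ["signer-a", "signer-b", "signer-c"]
THRESHOLD = 2  # any single signer can fail without downtime

def sign_block(height: int, offline: set[str]) -> bool:
    online = [s for s in SIGNERS if s not in offline]
    if len(online) >= THRESHOLD:
        print(f"block {height}: signed by {online[:THRESHOLD]}")
        return True
    # Refusing to sign below quorum also prevents double-signing.
    print(f"block {height}: quorum lost, skipping")
    return False

sign_block(100, offline={"signer-b"})              # still signs
sign_block(101, offline={"signer-a", "signer-b"})  # halts safely
```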

Dear Scott,

Thanks.

My main concern is that the high availability thing has become a topic only now. I thought this was (and should be) integral to the whole ecosystem. A few servers hosted in one or more high-availability environments to make the whole Grid independent is OK, of course. I guess even the wildest distributed system needs some hierarchy (like DNS).

I think you missed my point with the Folding@Home example (actually, you’re confirming it). It is just a fact that the farmer model creates low-availability endpoints. As said, if this is the case, then the tech needs to create the redundancy in order to become a commercially viable alternative.

From the point of view of the clients of Folding@Home, the service remains intact even when nodes pop in and out of existence. The availability of the nodes is not a requirement; the availability of the service at the top level is.

Without a high level of self healing, the TF Grid will not become the new Internet by the people for the people, but just the Internet for nerds by nerds. So, very limited usability when it comes to commercially interesting use cases.

IMHO.

What about ownCloud? I guess redundant storage should be available now with QSFS, no? I’m curious to learn why this is not yet deployed.

Aernoud

Correct. The “continuous” operation of the Folding@Home application is based on it being a distributed application. Large problems are broken down into smaller problems, and the smaller ones are broken down into tiny ones. Each node solves a tiny problem; failed nodes are replaced by others that repeat the same tiny task. Once all the tiny problems are done, reconstruction of the small-problem solutions commences, with the same principle applied at that level: a failed small-problem node is replaced and the solving of that small problem recommences. Once all the small problems are solved, the solution to the large problem can be constructed.
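
As a toy sketch of that pattern (hypothetical, just to show why individual node failures are invisible at the service level), work units go into a queue, and a unit lost to a failed node is simply reissued to another one:

```python
import random
from collections import deque

# The "large problem" is the sum of squares of 0..99, split into
# one tiny work unit per number.
work_queue = deque(range(100))
results = []

def unreliable_node(unit: int) -> int | None:
    """A volunteer node that vanishes mid-task ~30% of the time."""
    if random.random() < 0.3:
        return None              # node went offline, unit lost
    return unit * unit           # the tiny problem: square it

while work_queue:
    unit = work_queue.popleft()
    result = unreliable_node(unit)
    if result is None:
        work_queue.append(unit)  # reissue the unit to another node
    else:
        results.append(result)

# The service-level answer is complete and correct even though
# individual nodes failed constantly along the way.
print(sum(results))              # 328350
```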

This is a very different application from, let’s say, running a website. The smartness of that distributed problem solving is baked into a protocol that governs the solution process, not into the underlying “plumbing”. Plumbing needs to be as simple as possible (simplicity creates efficiency, security and, in the end, reliability) and provide reliable building blocks for intelligent people out there to create reliable, scalable and secure solutions.

We have all the ingredients prepped and are ready for a fantastic “mise en place” cooking experience, but for this, we need chefs with recipes they want to realise.

Having known you for a very long time (we first met last century…), I know the “architect” element, people designing reliable services, is key to making it a success. This project creates a fantastic, different starting point for new-world architects to build mind-blowing IT services everywhere, not just in well-controlled environments owned by a couple of companies on a global scale.

Just mentioning one point. It’s said that the farm model undeniably leads to ‘low availability nodes’. I disagree. There is a difference between what can be guaranteed and what is actually happening in real life. I know that most farmers are really striving to keep a serious uptime; my uptime has been nearly 100% since my nodes came online.

It would be interesting to see an actual statistic on this in the ‘statistics’ dashboard though (average uptime or something).

Agreed, and we should have more visibility coming into tools like the node explorer as time goes on.

I think the majority of hardware we see from farmers, anecdotally, is enterprise grade and built to last, if maybe a bit dated. The point above I do agree with is that some places in the world simply don’t have reliable enough electricity and networking to allow individual nodes to be central points of failure. For the whole world to have autonomous and local cloud infrastructure, I agree that a paradigm shift in architecture is needed.

I think it is naive to think that the infrastructure TF offers (currently) can in any way compete with AWS or even the average centralised DC.

And where it can, you will only be able to charge low fees. Any price comparison between existing centralised solutions and TF is misleading. Yes, TF is cheaper, but in the end it cannot compete with the average DC (let alone AWS or Azure) on any of the key metrics and guarantees they provide.

Real clients want real SLAs and accountability, and only when value is added is value created.

Again I’m asking you about ownCloud. Why has the rollout (of 10K users) not yet started, a year after the announcement? Is this because of resources, politics, infrastructure, service levels?

A ton of partnerships and plans have been announced in recent years, but I fail to see how any of these partnerships / plans have resulted in benefits for TF (except experiments on the TF Playground). Also because nothing is communicated about things that didn’t come to fruition (and looking at all the presentations / forecasts I have from the past 6 years, there seems to be a lot).

If this is indeed a community project, the community also needs to know what failed and why.

I don’t want to be negative here, I just want to understand where the project is. I’m also trying to push the project in Sri Lanka, but I don’t even know what to push yet…

I also want to keep the community critical towards what has been achieved, and towards how what will be achieved in the future will be relevant in the world. You cannot fix shortcomings by ignoring them.

Maybe the whole project is just still in a very preliminary state, but after almost 8 years I hoped we would be a bit further down the road…

This is a very good statement, and the first evidence of real SLAs being created is already happening. Here’s an example setup:

  • A large enterprise wants simple storage for archiving purposes. They have a number of requirements with regard to the geography of where data is stored and to availability/uptime.
  • The service provider contacts a number of farmers (not DIY; datacenter-based farmers) in the required region and starts negotiating price and Service Level Agreements (uptime and penalties being the main components).
  • Service provider and farmer agree on service terms and conditions for capacity availability, price, etc., and sign a contract (peer-to-peer; no ThreeFold involvement).
  • The service provider procures the capacity needed to create the simple storage solution and strikes a service contract with the enterprise, agreeing to the service terms and levels the enterprise needs.

Et presto, a very similar delivery model is created, with a completely understood chain of suppliers that all do their duty to meet the required enterprise service level.

In all honesty, this is not dissimilar at all to how a large cloud operator would work, with internal regional P&Ls and services being internally provisioned and cross-charged.

OK, so all is fine in la-la land?

  • The service provider contacts a number of farmers (not DIY; datacenter-based farmers) in the required region and starts negotiating price and Service Level Agreements (uptime and penalties being the main components).
  • Service provider and farmer agree on service terms and conditions for capacity availability, price, etc., and sign a contract (peer-to-peer; no ThreeFold involvement).

You are joking, right?

The more I learn about this project, the less I understand it.

Your example shows there is a sharp distinction between ‘datacenter-based farmers’ and DIY.

Datacenter-based (negatives)

  • No Decentralisation
  • No passive income (need to negotiate SLAs and lobby with service providers)
  • Risk for Farmer (accountability, liability)
  • No Internet by the People for the People (high barrier to enter)
  • No New Internet (what is new here? This is how the market operates already).
  • Payments for services will most likely go peer-to-peer (so not resulting in demand for TFT on the open markets)

What is the added value of a TF-based infrastructure in this market?

I completely fail to see how the model of datacenter-based farmers will connect underserved communities to the Internet, or provide decentralisation in a fundamental way. This all is not a solution by the people for the people, IMHO.

DIY-based (negatives)

  • No (guaranteed) Service Levels
  • Risk for Client (deployment / data loss)
  • Low utilisation, and thus little added value to the ‘New Internet’ proposition
  • Complexity (in some countries impossibility) of buying/selling TFT on open markets for all parties involved
  • Low income for Farmers

What is the added value of this proposition for the DC market?

I also fail to understand how the tokenomics will work in this two-tier model, as I fail to see where value is created.

I repeat what I wrote in an earlier message; it seems this is indeed the model that TF envisions:

Alternatively, TF could forget about ‘consumer level’ nodes, and only work with partners who want to invest in ‘mini-datacenters’ world-wide. This looks a bit like ‘certified farming’, but working with professional partners will keep out consumers who think this is all going to be easy money (so disappointments).

Let’s look at this from another angle:

Utilisation currently is 0.5% (of which part is from TF itself), 6 years after TF started communicating that their solution is better than sliced bread.

TF has announced a number of partnerships, of which many (most, I hope) were aimed at increasing utilisation. Specifically, ownCloud was supposed to start with a decentralised storage option, bringing 10K clients to TF.

Please tell me why utilisation is lagging so much and why partnerships haven’t yet come to fruition (in terms of utilisation)? (I still did not get an answer about ownCloud.)

Maybe an answer would help us understand why utilisation is currently so low.

But don’t tell me it’s because there is a “crypto winter”, as crypto and hosting markets are completely different, and your example would work fine even without the existence of crypto.

It will most likely be: we are not ready yet, still in beta, we build first and then sell, no budget for marketing, etc.

However:

  1. It’s still unclear, to me at least, what will be built in practical terms, what the added value will be, and for whom. I lost track…

  2. This is what I have heard for years now. I understand the issues with resources like funds, but what is and has been communicated to the outside world seems more like a distant dream than a current reality.

  3. It fails to explain why the ‘professional’ farmers also show very low utilisation.

Not saying you’re wrong, not saying you’re right. But I would try to look up the words ‘constructive feedback’ in the dictionary. Further, instead of just complaining and demanding answers to your questions, look at how much work is done by the community. It’s not all roses, but there are a few people out there coming up with great ideas and working with the ThreeFold team to build upon the existing network.

Truly the most unconstructive reply ever…

Guys, very good points and ideas in the thread. What about the main suggestion about token locks? Is this a definite no, or still to be discussed? Overall, I still think a partial lock of the tokens is a good idea, especially while we are still at the beginning. If we assume the network will grow in the near future, wouldn’t it be more difficult to implement the lock then? And let’s say we do a 50% lock for 2 years: I started my farm in February this year knowing all tokens would be locked for 2 years or until 30% utilisation.
It will be a bit difficult in the beginning, but overall, in the long term, it might help the project.

A token lock would be suicide! An absolutely stupid idea! Rewards don’t even cover the cost of electricity. How can you think of vesting?! If you have too much money… gift it!

To attract new customers, it is more reliable to promote ThreeFold on many channels. We need to get bloggers, video content creators, influencers, etc., and of course an affiliate program for them. Also Facebook, Quora, Reddit… Not everyone with good technical skills (a regular farmer) can be a good author too. We need reviews and comparisons with other players in the market. Also, if we need utilization, as I mentioned in my other post, we need to seek opportunities to make contact with the big players to provide our service as a backup or something…

Well, there are a thousand reasons for it to be low, but I am sure you would disagree with most of them, so I won’t spend the time listing them. I am not here to defend any result or non-result; we at ThreeFold are bringing technology to the world to reboot the digital fabric (the Internet) we all need for our daily personal and work lives.

We as a community believe that a different way to process and store data will lead to new concepts, regional internets, digital nations that cross and span country borders, and primitives for people to create beautiful experiences that are not governed by a central service provider. We might have been our own enemy in trying to look like existing offerings in order to make people understand what it is we are trying to do.

We as a community want to bring something live that allows people to change the way they organise their digital lives; we believe that is meaningful and needed. Usage will come from those who see that need, not from someone who is happy at AWS.

This is all uncharted territory and we are learning as we go.

Apologies for the off-topic response.

For current discussions on the token lock, please head to Token lock recap and next steps
