TFT token lock proposals [Closed]

Maybe it's time to start throwing around some different ideas for the token lock and then put them to a vote.
I registered my first DIY nodes in February/March this year knowing that the tokens would be locked for 2 years or until 30% utilisation.
We know that's not going to be the case because of the running costs.
I would like to suggest a progressive lock, maybe something like this:
We start with a 20% or 25% lock for 2 years.
The percentage increases by 5% every 3 months until it reaches 50% or 75%.
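Purely as an illustration (my own sketch with made-up parameters, not an official mechanism), the schedule could look something like this:

```python
def locked_fraction(months_elapsed, start=0.20, step=0.05, cap=0.50):
    """Share of monthly farming rewards that stays locked: starts at 20%
    and rises by 5 percentage points every 3 months until it hits the cap."""
    return min(start + step * (months_elapsed // 3), cap)

# Example: 4 months in, 25% of rewards would be locked under these assumptions.
print(locked_fraction(4))  # 0.25
```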

We need to clearly communicate why this is vital to network health. I also think we should not enact this until the price recovers to 8 cents.

Locking tokens is a bad idea, especially at this time. People have electricity bills and equipment costs to pay. If regular PoW mining doesn't have a lock mechanism, it doesn't make sense for ThreeFold to have one. A penalty for removing a server makes more sense than locking tokens.

The main idea behind locking is for the token to appreciate in value. We know that we are in a bear market, and the token has held up quite well under these conditions. My view is that the network grew considerably from January this year, which should also be reflected in the token's appreciation. And I think the growth has since slowed down massively. It's not very tempting to register new nodes when the price is half the registration price.
I feel that demand for utilising the network is still a long way off, which is why I suggest locking.
At this time, with the current size of the network, approximately 4.5 million TFT is paid out to farmers each month.

If your main goal is to increase the value of the token, locking it is the worst way to do so. You are delaying the inevitable: when the tokens get unlocked, a massive dump will happen.

Farming is a bit different from PoW mining. PoW networks don't particularly care if individual rigs come and go. All that matters is maintaining sufficient total hash rate to secure the network.

When farmers disconnect their nodes, it disrupts all workloads running on that node. Currently the only penalty we can apply is to withhold farming rewards for a given month if a node doesn't meet the uptime threshold. Unlike PoS networks or other capacity networks that require bonded collateral, we don't have a way to penalize taking nodes out of service other than by locking rewards.

Locking can help ease sell pressure without creating an inevitable "massive dump". Tokens are unlocked either a set duration (two years, as proposed) after nodes are registered or when they reach a utilization threshold, so they unlock in a staggered fashion as nodes hit the threshold. A large sell could only happen if a large percentage of the nodes that migrated to v3 at its launch fail to hit the threshold before the two years are up.
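To make the staggering concrete, here is a minimal sketch (my own illustration, not actual TF Chain logic) of the unlock rule described above, with the two-year duration as proposed and a hypothetical 30% utilization threshold:

```python
from datetime import datetime, timedelta

LOCK_DURATION = timedelta(days=2 * 365)   # two years, as proposed
UTILIZATION_THRESHOLD = 0.30              # hypothetical threshold for this sketch

def rewards_unlocked(registered_at, utilization, now):
    """Rewards unlock once the lock duration has elapsed OR the node has
    reached the utilization threshold, whichever comes first."""
    return (now - registered_at >= LOCK_DURATION) or (utilization >= UTILIZATION_THRESHOLD)

# Nodes hit the threshold at different times, so unlocks happen in a
# staggered fashion rather than all at once.
print(rewards_unlocked(datetime(2022, 3, 1), 0.35, datetime(2022, 12, 1)))  # True
```

Only the nodes that never reach the threshold would all unlock together at the two-year mark, which is the scenario where a larger sell could happen.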

For me though, the more important benefits are the incentive this provides for farmers who have workloads running on their nodes and the way it promotes grid stability by creating a penalty for nodes that stop farming.

I think it makes sense to start with a partial lock, and progressively increasing it is a great idea too.

We also need to take technical feasibility into account. The plan was to start locking tokens once we were minting on a chain with smart contract capabilities. On Stellar, the only option I see is placing the tokens into multisig accounts requiring both the farmer and the team to sign in order to move the tokens (as we did for vesting), which is a bit centralized for my taste.
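For reference, here is a rough sketch of what that would involve using the stellar_sdk Python library: turning a farmer account into a 2-of-2 multisig so that both the farmer and the team have to sign to move tokens. The keys and thresholds below are placeholders for illustration, not the actual vesting setup.

```python
from stellar_sdk import Account, Keypair, Network, TransactionBuilder

farmer = Keypair.random()  # placeholder; a real farmer account would be funded on-chain
team = Keypair.random()    # placeholder for the team's signing key

# Build the transaction locally; the sequence number would normally come from Horizon.
source = Account(farmer.public_key, 1)

tx = (
    TransactionBuilder(
        source_account=source,
        network_passphrase=Network.PUBLIC_NETWORK_PASSPHRASE,
        base_fee=100,
    )
    # Add the team key as an extra signer with weight 1.
    .append_ed25519_public_key_signer(team.public_key, weight=1)
    # Keep the farmer's master key at weight 1 and require total weight 2
    # for payments, so moving tokens needs both signatures.
    .append_set_options_op(
        master_weight=1, low_threshold=2, med_threshold=2, high_threshold=2
    )
    .set_timeout(300)
    .build()
)
tx.sign(farmer)
print(tx.to_xdr())  # ready to submit once the account exists and is funded
```

It works, but as said, it means the team co-controls every locked account, which is the centralization trade-off.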

Myth, collateral is not slashed for bad uptime in the capacity network you are thinking of. It exists only to drive demand for the token.

I was thinking also of other networks like Filecoin where slashing is a thing, but my main point is that any network with collateral requirements at least has the option to do slashing while we do not.

Please clarify if this discussion also applies to current farms. I'm against any changes.

I read two different reasons for locking. The first is to get a higher TFT price. The second is to somehow make sure people don't turn off their machines. I believe both reasons are false.

First, the price. For sure, when farmers sell their TFT the moment they receive it, it creates sell pressure on the token and the price will go down. However, locking the coins for 2 years will only delay this process. As more farmers come online, at some point there will be an equilibrium of tokens being freed up at the end of each month. Further, I believe it's an inappropriate way of trying to increase the price artificially. The price of the token should be what it is: based on the value of the project and other market conditions. Want a higher price? Invest more in marketing, promote TFT and get it out there more. Price should be the result of supply and demand. Reducing the supply does not always increase the price, as has been seen in other projects.

Second, the 'fear' of people turning off their machines. This doesn't make sense to me at all, but maybe I misunderstand. If I get my tokens anyway after two years, why would I still keep the machines running if I can't pay the electricity costs (or space rental) anymore? The only way to prevent people from turning off their machines is to make sure it's profitable to keep them running. With rewards delayed for 2 years, this is hardly worth it. Nobody can predict the future…

This also brings me to my issue with the way the 95% uptime requirement (not enforced yet) is handled. If my ISP is down for a day during the month and I miss the 95%, I might well turn off my units for the remainder of the month to save electricity. I would rather see a 'bonus' after x months, for instance: if you maintain 95% during that period, you get a bonus of x%. Again, make it profitable to keep the units running.
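For what it's worth, a toy sketch of that bonus idea (all numbers made up, just to show the 'reward rather than punish' angle):

```python
def monthly_payout(base_reward, uptimes, target=0.95, bonus_rate=0.10):
    """Pay the base reward for every month in the period, and add a 10% bonus
    on top once the node has held 95%+ uptime for the whole period."""
    payout = base_reward * len(uptimes)
    if all(u >= target for u in uptimes):
        payout += payout * bonus_rate
    return payout

# Three months at or above 95% uptime earns the bonus on top of base rewards;
# one bad month just means no bonus, not losing that month's reward.
print(monthly_payout(1000.0, [0.97, 0.96, 0.99]))  # 3300.0
```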

Just my 2cts…


You should not forget to look at this from the grid user's viewpoint. How would you feel if a node you work on every day goes down for a day or more? And if users then hear that people can just pull out their nodes, there will never, never, never be trust in the grid.

AWS has razor-thin downtimes; note that 95% uptime means a maximum of 36.5 hours offline per month. That's really too much if you ask me. You should just have fallbacks for those things, like an extra router connected to 4G/5G that you can fail over to. I never experience more than about 6 hours of downtime, and I don't even have fiber or anything, just a commercial ISP. I doubt you will even need an extra router, let alone be down for more than 36.5 hours. If that happened, I wouldn't even trust your ISP anymore.

But indeed you're right: if that "miracle" happened, you would have no reason to keep going as things stand right now. But usage boosters are also coming, which will motivate you to improve your overall environment. Also note (I don't know what you're running) that if you use one power switch to power, let's say, 10 nodes, you should question how long it will hold up. I take this responsibility on myself, and I think TF must approach it the same way. TF currently has literally NO hold over people running nodes at home: no staking required, no uptime required (except for gold and certified nodes), no punishment, nothing. You have to agree that this is not a trustworthy network right now; although everyone keeps nodes up to earn tokens and support the project, there is no strong reason to.

Please look at the long-term options we have; having no 95% requirement is not tenable long term. I mean, would you right now deploy a workload on your own 3Node, considering speed, durability and performance? If the answer is not yes or maybe, I think you should question what you can improve about your farm. And boosters etc. will come. I'd like to still be getting tokens in a year, wouldn't you?

We are almost 7 years into this project, and I’m quite disappointed that this ‘fundamental’ discussion has not been addressed / finished earlier.

For a long time I have misunderstood this project completely.

I thought TF was about 'distributed' computing, offering 'self-healing' as a core principle, implemented by a custom OS built to facilitate this.

Only recently (yes, I'm stupid) did I clearly realise that workloads are not distributed at all, but run on single computers (nodes) in a consumer (home) environment. So yes, when one farmer decides to switch off his node, or (much more likely) external causes make the node go down or become unreachable, all workloads stop functioning.

And the buyer of the capacity (our client) has nobody to call… When the node is taken off the Grid entirely for whatever reason (e.g. Flux promises an airdrop), the client can start from scratch with his deployment.

Note that these 'clients' are the only actors who will eventually increase the value of the network, and thus of TFT. So, if TF wants to grow in value, all eyes need to be on these clients (as TF does everything to prevent speculation).

I think we need to look at this 'self-healing' proposition again. I'm an idiot, and (therefore) I always thought of TF as something like TCP/IP, RAID drives, CPU switchers or torrents. So, inherently redundant.

As I'm stupid, I thought that dApps were apps that kept running even when one of the resources they need (compute, memory, network) 'disappears', automatically failing over to similar resources available on the TF Grid. Hence 'distributed'…

Only recently I fully realised that clients are not buying capacity on the TF Grid, but are basically buying capacity on a single node hosted by (mostly) beginners (the amount of expertise of the farmers will be inversely proportional to the growth of the grid), with a strong incentive to use the cheapest (so vulnerable) hardware available.

With this, (when no extra measures are taken), every node now becomes a single point of failure from the perspective of the client who wants to deploy workloads.

Yes, this is also true in a DC when deploying on a single non-redundant system, but there I get relevant uptimes (99.9% is sort of standard), a phone number, and an SLA with penalties for the DC if the service levels are not met (which for big clients in many cases includes reimbursement of damages). Plus, I can opt for further redundancy without work on my side (RAID, CPU switching, real-time backups, etc.). Just click, and pay the bill.

I live in a third world country, which is the main target for TF to bring ‘abundance’. However, we have power-cuts every day (normal in third world countries), and with this I will never be able to maintain a reasonable uptime, so investing in a Node does not make sense. Unless of course I make everything redundant, so basically starting a private DC.

I think this all challenges the TF proposition, and will keep people from actually using and growing the grid.

What I expected TF Grid to be might not be technically possible, but I think monetary incentives (we don’t pay you until you perform, so lockup) to keep ‘uptime high’ will just not work.

Why? Because in most cases the cause of downtime is not the 'will' of the farmer, but outside influences. To keep uptime at a level relevant for our clients (so 99.9% at least, to be competitive with centralised solutions), the nodes would have to be hosted in a datacenter, so we are back where we started. On top of that, the nodes themselves would also need redundant hardware to be really competitive.

Dividing the Grid into a non-certified part (basically useless for serious clients) and a certified part (basically very expensive for farmers) also will not spur rapid growth, IMHO.

You are all very clever people. If what I say above makes any sense (again, I'm stupid), shouldn't we think about real distribution of workloads, i.e. not making them dependent on whether a certain node is available or not? Or making backup / QSFS / failover / parallel running of containers, whatever… (again, I'm an idiot) a standard feature?

If that could be done, then the whole discussion about lockups becomes irrelevant and each farmer just gets paid pro rata to the uptime of their node.

Alternatively, TF could forget about 'consumer level' nodes and only work with partners who want to invest in 'mini-datacenters' worldwide. This looks a bit like 'certified farming', but working with professional partners would keep out consumers who think this is all going to be easy money (and so avoid disappointments).


It's too much at the moment to reply to each of your points from my mobile, but I see many of your points. However, I think it's mainly due to this project still being a bit misty to the 'stupid' among us (I count myself in, assuming you mean less knowledgeable in this field). I met with the Belgian team last week and was thrilled to hear what's being worked on, which may resolve many of your concerns. They also acknowledged that it's an organic process, which means that along the way they have made different choices whenever new ideas and information turned up that were judged to better meet the TF values.

I'm wondering if there's already a place, wiki, post, etc. that gives us an overview like the one below, which would maybe help us understand (and be able to explain and sell) this project better: a statement of the different services this new grid is used for, the unique solutions for these services, and the current development state they're in. For instance:

  • cloud storage; self healing through…, Redundant through… Current status: …
  • Apps; self healing through…
  • streaming like zoom; self healing through…
  • websites/webhosting…
    Etc.

So, a list of the different usages of the internet and what unique solutions this new grid offers for all of them.

Having a business in West Africa as well, I had already asked about uptime and bandwidth requirements that would not be achievable by Western standards, and I was told this would never be penalised.

Last but not least; I’m curious which country you live in…

Wise man! Wise words!!

Totally agree! I also thought the workloads would be distributed over the network and be self-healing. In my opinion, this would be a unique selling point for TF if it could be applied.

This has been a hot topic in the community lately, and I've been meaning to write a post about where we are and what's possible. I don't blame you for assuming that features like this already exist in the ThreeFold architecture. Especially older versions of the wiki were a little fuzzy about the current state of the tech versus what's on the roadmap.

It turns out that building the foundation to enable autonomous and self healing applications took a while, but I think we’re now in a very solid position to talk about how to move these features forward.

The short answer is that systems with these properties are tricky to build. Removing one single point of failure tends to create another one upstream. Say you have a server hosting a website. You add another server for redundancy. Now you need a component (load balancer) that knows about both servers and routes traffic to and from them. But then you need a redundant load balancer. At some point, you run into the fact that your site’s domain is linked to one or more IP addresses that belong to components that could fail. Some cloud providers offer services to dynamically reassign IPs in case of such failures, or you could try a service offering high speed DNS failover.
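To make that chain of dependencies concrete, here is a bare-bones health-check failover sketch in Python (a generic illustration, not ThreeFold tooling; the backend addresses are placeholders). This is the kind of logic a load balancer or DNS failover service runs, and that component itself then needs to be made redundant too:

```python
import urllib.request

# Two redundant servers hosting the same site (placeholder addresses).
BACKENDS = ["http://203.0.113.10", "http://203.0.113.11"]

def first_healthy(backends, timeout=2):
    """Return the first backend that answers a health probe, or None if all are down."""
    for url in backends:
        try:
            with urllib.request.urlopen(url + "/health", timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # unreachable or erroring backend: try the next one
    return None

print(first_healthy(BACKENDS))
```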

To be able to offer a complete alternative, we still need a minimum of highly available systems, probably in a datacenter or equivalent environment, to at least handle public network access. Once traffic is inside the grid, designing architectures that offer fault tolerance using a collection of nodes run by newbies on salvaged hardware in their garages is totally possible.

That said, it’s a particular paradigm to design in this way. Many cloud users just want to spin up a VM, install some software, and let it run. This VM has volatile state in the RAM which will never be recoverable in the event of a sudden power outage, and non-volatile state on disk which could be synced elsewhere and recovered (using QSFS for example). Migrating this VM to another node and restoring its operation somewhat gracefully should be possible, but it won’t be seamless.

The grid is the only decentralized cloud network I know of that actually gives users this capability. Flux and Akash only support the execution of containers, which are not expected to retain state. Akash now has some limited persistent data support, but in general, these environments expect users to take responsibility for storing data safely elsewhere. Solutions must be designed to run in containers and, if they ingest data, also connect to a storage service that handles concurrent connections.

Zero OS has native container support (actually micro VMs) and can provide self healing redundancy for containerized workloads with self healing storage capabilities on the same network, with some more work. TF Chain ultimately acts as the backstop in the “failure chain”, as a decentralized highly available replicated database to coordinate node activities.

So why isn't this top priority? Well, for one, it's already possible to run containerized workloads in a redundant manner on the grid using Kubernetes, if you know what you're doing. Secondly, a network that allows me to provision a VM and start hacking at the command line, perhaps following any of countless tutorials online that start with "find yourself an Ubuntu VPS", is way more interesting than a network that only supports containers, even if that VM is a single point of failure.

I think the world generally agrees, because we have seen much more interest in using the grid since the generic VM feature was introduced to expand the offerings beyond Kubernetes VMs and native containers. That is to say, I think the course of development so far makes sense in helping to grow our grid utilization community and get broader testing for core features. There are plenty of fun and useful things you can do with capacity that doesn't have 99.9% uptime (dev and test workloads are actually a big market). For the rest, there are gold certified nodes and Kubernetes, for now.

Thanks all for responding to my message.

First, for those interested, I’m in Sri Lanka.

Let me respond specifically to some of the responses I see:

i had already asked the question about uptime and bandwidth requirements that would not be possible against western standards and i was told this would never be penalised.

The earnings of the farmers are not my point here. Offering a relevant service to paying consumers (the buyers of capacity) is my point (see further below). If consumers start buying the TF product, all will be well for the farmers.

don't blame you for assuming that features like this already exist in the ThreeFold architecture. Especially older versions of the wiki were a little fuzzy about the current state of the tech versus what's on the roadmap

This really makes me wonder how this info got into the old wikis in the first place. If reality has caught up with the dreams, then your statement "It turns out that building the foundation to enable autonomous and self healing applications took a while" is somewhat of an understatement, as the 'self-healing / autonomous' properties of the Grid were presented over 5 years ago as more or less a current reality. I can live with that statement only if the dream is still alive.

“The short answer is that systems with these properties are tricky to build.”.

Ehm, yes… But if you want to compete with established (always innovating) centralised cloud providers AND you want 'to bring the Internet to those currently excluded in the Third World' AND you want to serve communities with low levels of IT expertise AND offer paid-for services with no one responsible when things go wrong, you really have no other choice than to offer absolute immunity of workloads against disappearing nodes in an autonomous, automagical way.

Say you have a server hosting a website. You add another server for redundancy. Now you need a component (load balancer) that knows about both servers and routes traffic to and from them. But then you need a redundant load balancer. At some point, you run into the fact that your site’s domain is linked to one or more IP addresses that belong to components that could fail. Some cloud providers offer services to dynamically reassign IPs in case of such failures, or you could try a service offering high speed DNS failover.

My point is not about guaranteeing 100% uptime during WW3. My point is about offering market-conform service levels for consumers of the grid capacity. I have a totally unimportant website, but 98% uptime means almost 15 hours of downtime per month… So at least dozens of emails from customers asking why my website is down…

98% uptime + a real risk that my deployment is lost + nobody accountable is just not a sellable proposition for any (production) environment. I'm sure there are applications that don't require high availability, but hey, we are talking about 'The New Internet', no? Not something worse than what we had three decades ago.

Once traffic is inside the grid, designing architectures that offer fault tolerance using a collection of nodes run by newbies on salvaged hardware in their garages is totally possible.

Realising this would be, for me, an absolute priority to make TF a project that can meet its ambitions. I’m happy to hear this is technically possible. If the service you offer relies on de-facto unreliable endpoints (as in, not under your direct control), then ‘the tech’ should provide seamless reliability from the consumer point of view.

Zero OS has native container support (actually micro VMs) and can provide self healing redundancy for containerized workloads with self healing storage capabilities on the same network.

If you mean on the same LAN, then this is perfectly useless. The 'self-healing' properties should be Grid-wide (as advertised), solely because of the nature of the threats present in the global farming model.

So why isn’t this top priority? Well, for one, it’s already possible to run containerized workloads in a redundant manner on the grid using Kubernetes, if you know what you’re doing.

Accepting the reality of unreliable nodes (inherent to the farmer model), redundancy becomes an inherent requirement, not a feature. That is, if TF wants to present itself as a distributed AWS (for third world countries), of course.

When I started taking an interest in TF, I had my experience with Folding@Home in mind: a complicated workload running, distributed, on over 4 million computers worldwide. As the workload just runs as an app, nodes constantly turn on and off. Yet the workload persists, forming an exascale supercomputer that ranks as the 7th fastest supercomputer in the world.

Something like this I had in mind for TF, but then as an exascale datacenter (but as said, I’m stupid).

Secondly, a network that allows me to provision a VM and start hacking at the command line, perhaps following any of countless tutorials online that start with "find yourself an Ubuntu VPS", is way more interesting than a network that only supports containers, even if that VM is a single point of failure.

For paying consumers planning to deploy real-world workloads on the Grid?

I think the world generally agrees, because we have way more interest in using the grid since the generic VM feature was introduced

How many 'real world' applications are running on the Grid now? How many are running on consumer-hosted nodes versus nodes hosted in datacenters?

This reminds me of the collaboration with ownCloud, for example. This collaboration was announced a year ago now, promising thousands of users of TF capacity shortly. What happened to it? Is it maybe the case that the current solution TF offers does not meet the MVP requirements of third parties? Does the world really agree?

I don’t think you can say anything about how ‘the World’ thinks about TF yet.

For the rest, there are gold certified nodes and Kubernetes, for now.

Gold certified nodes will essentially have the same problems as non-certified nodes (see my original message). Besides this, the growth of the Grid will be seriously hampered if certification becomes the norm for farmers in order to attract serious workloads (and thus income) to the grid.

Instead of focussing on certification, I would focus on autonomous self-healing. If that is technically possible at all, implementing it will make discussions about certification, token lock-ups, third world deployment and token value all a thing of the past.

Please understand that I don't want to sound over-critical. It's just my contribution to this project's self-reflection about the need to offer solutions that can compete with centralised alternatives.

For years I've been waiting for a proof of concept (a real-world company deploying real-world workloads for which it pays real-world money), but I keep seeing fundamental issues popping up.

The TF project has the potential to offer something better than sliced bread. However, IMHO, the 'self-healing' properties (only possible with true distribution) should be re-prioritised as a 'must-have' in order to make TF the 'new internet by and for the people'.

The 'for now' at the end of the last sentence of Scott's message gives hope, so I do hope that the initial promise of TF is still in the works.

With Love,
Aernoud


For some reason I expected a response to my post from @scott and others, but discussions don't seem to run very deep on this forum, and people are back to what they were doing.

It could also be that my post is too long for the Twitter / TikTok generation, of course.

Anyway, I will then assume that everyone agrees with what I wrote and that TF will act accordingly :slight_smile:

Hi Aernoud, although I don't have TikTok or Twitter, in my case it is indeed the length. That is no blame towards you, but I have taken so much onto my plate that I have to choose which 'battles' to pick. As the community grows, more and more information is coming out. I hope TF will still find a moment to respond.

Don't worry Aernoud, I didn't forget about your message, and I also don't have a TikTok account :slight_smile:

I was writing a long reply addressing your points individually. Instead, I’ll just say a few things. Most importantly, we agree that self healing is important for the Grid and this is a good time to work on it. I’ve been thinking a lot about how to make this happen and will write more about that soon.

However, I also want to set proper expectations. Eliminating the need for some amount of dependable, high-uptime nodes (or at least farms) is possible, but I don't think it can happen without mass adoption of something like the Planetary Network and a next-generation DNS service. That's solidly "new internet" territory. Right now, we have a system that's interoperable with the existing paradigm and can already meet the needs of serious "paying customers".

Folding@Home and Presearch are actually both examples of networks that use distributed computing that doesn’t need to be highly available, coordinated by centralized servers running in data centers. Folding is sponsored by AWS, and without some high capacity reliable servers, all the compute at home is worthless. Presearch plans to hand its gateway functionality off to node operators, and I expect those operators will require lots of bandwidth and public IPs. I.e., they’ll be best served by nodes in data centers.

I’ll finally mention that I recently found some very interesting work done to make Cosmos ecosystem blockchain nodes run in a high availability configuration consisting of several independent nodes, which share signing responsibilities. This is perfectly well suited to the Grid in its existing state. My point being that while adding features to the Grid is an important part of the equation, so is the independent creation of architectures and paradigms that rule out central points of failure on their own.