AWS pulled the plug on Parler.
CEO Matze states Parler will not be available anymore on ‘this’ Internet.
What if Parler comes to the TF distributed Internet?
I’ll be the first to admit that I wasn’t on Parler to see what was actually going on there. Glenn Greenwald gives a compelling report, based on his investigation, that Facebook was much more responsible for the recent debacle at the US capitol than Parler. See his recent tweets, including this one.
The bigger question, in my mind, is how can we create environments free from centralized censorship where efforts to organize harmful actions can be meaningfully contained?
At least, Parler would be a significant new customer for the grid. Whether such a thing could be restricted or stopped given the current architecture isn’t something I’m 100% sure about. It seems that gateway nodes would have the greatest level of discernment in that regard, if any exists within the system.
The lesson to be learned here, IMHO, is that any platform or technology spreading content that cannot be regulated is doomed to fail sooner or later.
You ask the right question indeed.
“how can we create environments free from centralised censorship where efforts to organise harmful actions can be meaningfully contained ?”
The first ‘realisation’ should be that this is indeed a question to be answered. As long as moderation of content is ‘impossible’ (cannot or will not be done) on distributed networks, these networks will face the scrutiny of society, and eventually of the authorities, sooner or later.
Society’s answer to this question is written in the constitutions of the various countries. The follow-up question is then: how can we enforce something like a constitution on the TF-Grid?
Everyone who hides behind the right to “free speech” or laws like Section 230 in the US does not understand what is going on.
Although I’m all for big customers on the TF-Grid, “Parler moves to TF Grid” is not a headline I want to see. Yes, FB might be just as guilty, but FB is also under scrutiny, and they have so far been able to avoid consequences because they are able to moderate (and now do so).
My problem is that I don’t see any entry point or possibility within the TF-Grid to regulate or moderate content. Farmers don’t even know what content is on their nodes, while anybody with access to a 3Bot can unleash fake-news hell on the grid, including child pornography, Silk Road-style marketplaces, etc.
Or am I wrong here? (hope so).
At some point the authorities will turn to TF-Tech, the company that licenses the technology to the TF-Grid. Not good…
What appears to be the biggest benefit of the TF-Grid might end up becoming its biggest threat.
However, we are smart people. There must be a solution here. Society and/or Governments will demand this sooner or later.
Btw, I’m also all for solving problems when they actually occur, but having the ‘issues’ in our mind might be a good thing for any technical and other steps forward for the TF-Grid.
Vitalik already made a statement: https://decrypt.co/53890/parler-vitalik-buterin-ethereum
Any 3Bots in the make that can moderate users activity? (although I don’t see how user bots fighting bots will be a solution.)
Maybe the creation of ‘Super 3Bots’ (only to be approved and activated by the ThreeFold Wisdom Council) is an idea. Only these bots would have the executive permission to moderate content/activities on the TF-Grid according to a certain rule-set (a smart-contract version of a constitution).
The ‘enforcing’ of a constitution by smart-contracts in a network would be cool.
I love this discussion. Thanks for bringing it to the public forum, @aernoud. It’s something I’ve talked about with a few people in private over the last days. By the way, it looks like Parler found a new hosting solution in the meantime but still it’s a nice conversation and there are big implications for the future of ThreeFold.
From a purely emotional standpoint, I wouldn’t have wanted to see Parler on the grid. I’ll also admit I never used the tool.
Taking a step back, I agree with many things both you & @scott brought up, first & foremost being the point Scott made & Aernoud echoed: "how can we create environments free from centralized censorship where efforts to organize harmful actions can be meaningfully contained?"
I’m with you Aernoud, I believe that some type of at least somewhat decentralized moderation / regulation system is needed. A decentralized environment without such a system is a problem for a number of reasons.
Is it a constitution on the grid? Is it a series of peer-to-peer review mechanisms? Both? Both & more?
Re: the technical solution, or what’s currently being thought of, I’d be curious for other voices who are closer to the tech to bring some clarity and thoughts here. I like a lot of your ideas.
Anyway, I love your energy and thank you again for bringing this discussion to the forum. I agree, we are smart people and there must be a solution here, so let’s come together to bring one forward!
Very nice discussion indeed. I agree with most of the points in this thread and would love to read what statement the TF team has on this complex subject !
I’ve just read a news article yesterday where Matze states he’s worried the app might never return.
I agree with the above, that platforms should be free of centralized censorship. But will this ever be possible? I feel moderation has a lot to do with a set of values. For instance, what I find very disturbing or borderline hate speech might be completely normal for others…
What would be the solution for it?
A decentralized panel of “outsiders” for each platform that does have a voice? Somewhat the same as the guardians we have at ThreeFold. I think peer reviews won’t be helpful here, as peers mostly operate within the same set of values, and might both love the same hate speech or whatever…
This is probably one of the trickiest questions in decentralization vs. censorship.
Well said - I would advocate staying as close as possible to physical reality on these digital platforms. Yes, the speed at which ideas can be shared is much higher, and the time it takes to get people organized around topics is much shorter.
But what is causing a lot of the extreme positions people take on online platforms is the level of anonymity that exists there… Imagine saying what some people post on forums to a random person you cross in a high street / mall. I am pretty sure 90% of the people who make the most ridiculous (my words) statements online do not have the guts to say them to another person’s face. The possibility to be anonymous is causing most of this misery.
So I think the answer lies in creating reputation, known identities, and with that accountability online, as much as in real life. I am convinced that a very large portion of this craziness would then cease to exist.
Yes! Verifiability of identity is key. Accountability. Great point.
Accountability is a good point, but until the digital twin has enough features to be usable on social platforms, the problem still exists, and TF does not actually provide tools to moderate.
For example in France (to be verified), I believe the host has the responsibility to close access to a hosted platform if it is used for criminal behavior. As it stands, if I hosted such persons on my TF farm, I would have no choice but to cut internet connectivity for the whole farm.
FB and other platforms delete thousands of fake accounts each day.
Extensive KYC might be needed, but is a burden for all involved.
But accountability is a key ingredient I feel yes.
KYC is one way of doing it, but maybe a more natural way is plain reputation. Like referrals: I know you, so if you tell me that a particular person is a good guy (or girl), I take your word for it and give her or him my trust (until that trust is damaged). That has no single external verifier but creates group or community verification.
Also - and this is Kristof’s wisdom, not mine: on current platforms, you start with zero credibility and you have to prove that you are credible, which is an open invitation to try to game the system. What if we turn that around as well: you start with 100% credibility, and when you misbehave, credibility is taken away. It makes a hell of a difference in what actions people take…
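The inverted-credibility idea above, combined with referral-style trust, can be sketched in a few lines. This is purely an illustrative model in Python; the class name, methods, and thresholds are my own assumptions and are not part of any existing TF Grid component.

```python
# Hypothetical sketch: everyone starts at full credibility; the community
# takes credibility away on flagged misbehavior, and peers can vouch for you.
from dataclasses import dataclass, field

@dataclass
class Reputation:
    score: float = 100.0                       # start fully credible
    vouchers: set = field(default_factory=set) # peers who extended trust

    def penalize(self, severity: float):
        """Peers flagged misbehavior; credibility is taken away."""
        self.score = max(0.0, self.score - severity)

    def vouch(self, peer_id: str):
        """A known peer extends their trust (referral-style)."""
        self.vouchers.add(peer_id)

    def is_trusted(self, threshold: float = 50.0) -> bool:
        return self.score >= threshold

r = Reputation()
r.penalize(30)     # community flags a hateful post
r.vouch("alice")   # a known peer still vouches for this account
print(r.score, r.is_trusted())  # 70.0 True
```

The point of the design is visible in the defaults: `score` begins at 100 and only `penalize` moves it, so the burden of proof is on misbehavior rather than on newcomers.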
Ok, so how is this (going to be) implemented on the TF Grid?
It is great to see this discussion beginning! Jennifer and I had a brief discussion on this topic with former ambassador Richard Walker in California when we met up with him several years ago. He raised these concerns regarding TF at that time. Here is my thought on the topic:
For peer-to-peer communication, I think it should be totally unregulated, just as you and I can have a private conversation. There is a worse danger of malicious censorship if you eventually have centralized control of discussions (consider if King Trump were allowed to control all content) than in allowing bad actors to have their harmful discussions among themselves. However, the problem arises when the bad content is open to free access. If an application like Parler, open to anyone, is implemented on the TF grid, then it should bear the responsibility of monitoring content and submit to censorship if so required by overall consensus. The grid itself should not be held responsible. But I think we must insist that anonymity and privacy be inherent to the TF grid. In other words, if you build an app that runs on the grid and disseminates misinformation or engages in criminal activity, then the app must be subject to laws, not the grid or its inherently private structure. This will require the cooperation of governments, of course. The problems we have seen in recent years arise principally from a few bad actors in control of the public discussion (i.e. Trump, Bolsonaro).
How is this even the beginning of a solution? Basically you are saying everybody on the Grid should behave. We all know that is not going to happen… so we need to be able to moderate, name, shame and punish.
Anonymity is at odds with this.
If you are at an airport, it’s of course OK to shout “bomb bomb” to your partner when you are alone with her, but it’s not OK to shout “bomb bomb” in a public space.
If idiots are going to use the TF Grid to organise crimes, or to publish information prohibited by law, ‘something’ or ‘someone’ needs to take action. And the grid should be enabled to do so.
Rating is also not a solution. If I hire capacity today to store and spread child pornography, how is rating going to prevent that? By the time the community gets alerted to the crime, it’s already too late.
FB uses automated algorithms to hunt for unacceptable content and accounts. I suggested ‘super bots’ acting according to smart contracts, able to remove malicious data and to disable the 3Bots spreading it.
Should this and can this be done?
I am not aware of how this is going to be implemented, I just know that Kristof and engineers speak a lot about this. We think it’s time to do things in a different way and this is one of those things. I’ll see if I can gather some thoughts from him (or invite him to step in) and give some perspective on this.
You mention something tremendously important here, something that basically embodies the concept of equality within a social network. Every individual starts equally with a 100% credibility/verifiability/ethics score, which diminishes whenever you act against the ethical rules - a diminishment occurring only when your peers and the general community have deemed your contributions/interactions to have gone sideways (scamming, hate speech, impersonations, etc. - things, unfortunately, we see every day on certain platforms). But the system also gives you credits/endorsements when you contribute and create something appreciated, respected, or agreed upon by a different set of peers, following a different set of values but the same universal ethics.
In a model based on these designs, anyone who desires may keep their anonymity, but with the understanding that their actions, behaviors, contributions, and reactions are and will always be equally and exhaustively peer-reviewed and quantitatively judged. Anonymity makes people more honest, transparent, and truthful, but sometimes very aggressive and scammy - oh boy!
I reckon that an incentivized social protocol that empowers its community of peers to equally self-sustain, self-moderate, self-punish, and self-censor when limits are crossed, and that embeds the “the public decides” notion as its engine - the public being anyone on the network, equally represented (a quote from Edward Snowden) - might be the 0-to-1 moment the social media industry, and the world, needs. There are so many other components that could improve the state of social networks, but we will keep those for another post.
As for Amazon’s cloud kicking Parler off their platform with 24 hours’ notice and a strong media and public presence: oh, that’s cold as ice. Was it the best thing to do? Probably not; the censorship and dictatorial power that AWS, Google Cloud, and others have displayed is genuinely alarming. Does Parler need to be better moderated? Absolutely yes! But in a way that respects free speech and equality. How to moderate it and exclude centralization from this equation? Have it self-moderated by a different set of peers sharing a different set of values, but following the same universal ethics.
These are very valuable reflections, which are indeed worth some discussion.
I don’t follow you, however, on the statement that ThreeFold Tech would be held liable for non-moderated content. It’s not the cloud provider of Parler that will be liable for illegal content hosted on its infrastructure. In my opinion, issues in a decentralized context require different solutions than centralized ones when it comes to respecting the law and enforcing rules. I’ll explain myself.
This topic in fact has the same problems and requires the same solution as value transfer: it is unlawful to send money to criminal/terrorist organisations or to someone residing in a sanctioned country (North Korea, …). In a centralized world, banks as trusted intermediaries have a very important role there, blocking transactions that go to illegal organisations, as heavy fines await them if they don’t fulfil that role properly. However, in a decentralized setup there is no middleman, and lawmakers are still puzzled about how to enforce the law when it comes to BTC or other crypto transactions.
In the end, I believe it will be the sender of the money that will be held responsible for doing transfers to counterparties aiming to do illegal activities. Big problem: the sender of cryptocurrency only has a meaningless address available, and can’t derive the identity of the counterparty wallet holder from it.
Now, there IS a solution to it, both for enforcing the law on sending money and on spreading information, and it’s all about preventing people from working in full anonymity: only allow them to deploy activity in a workload if their identity is known.
In a centralized world, this identity is revealed through a KYC process resulting in a ‘certification’ that a person a) exists in the population register of some country and b) is not on a negative hitlist of criminals. This KYC has, however, a decentralized equivalent called self-sovereign identity (SSI). I already posted a very nice explanation of this by Christopher Allen, author of the concept; see the video https://www.youtube.com/watch?v=JzM_Brpk95E&feature=youtu.be
It creates an identity issued by an authority (a government, a university, … basically anyone that can issue an authentic certificate about someone else). The proof of a person’s characteristics is registered on a public blockchain as ‘verifiable credentials’.
Now, to come back to the topic here: if an identity that cannot be tampered with can be associated with, let’s say, a 3Bot/Digital Twin ID, it will a) deter people from spreading fake news or sowing hatred, as they can’t operate in anonymity anymore, and b) enable systems where a legal representative in someone’s jurisdiction (and thus NOT Facebook, Twitter or Parler) judges, manually or through an algorithm, the content made by one of its citizens. It also ends the power of private companies in this matter, as a judgment should be made by representatives of the law, not by tech companies.
If this self-sovereign identity layer gets incorporated into the internet architecture (an effort pursued by organisations such as ESSIF within the EU, with a similar initiative ongoing in Canada), I believe it will be a major blocker for cybercrime (such as funding criminal organisations, but also spreading fake news, online extortion, etc.). When this is in place, a sender of cryptocurrency could decide not to make transfers if the counterparty has no ‘verifiable credential’ in place proving that he is who he claims to be.
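The sender-side check described above can be sketched roughly as follows. This is an illustrative toy in Python only: real SSI stacks (DIDs, W3C Verifiable Credentials) use public-key signatures and far more machinery, and the issuer registry, field names, and HMAC scheme here are my own placeholders.

```python
# Toy sketch: refuse a transfer unless the counterparty presents a
# credential signed by a trusted issuer (a government, a university, ...).
import hashlib
import hmac

# Placeholder registry of issuers the sender trusts, with shared secrets.
TRUSTED_ISSUER_KEYS = {"gov-example": b"issuer-secret"}

def credential_is_valid(cred):
    """Check the credential's signature against the trusted issuer's key."""
    key = TRUSTED_ISSUER_KEYS.get(cred.get("issuer"))
    if key is None:
        return False  # unknown issuer: credential worthless to this sender
    expected = hmac.new(key, cred["subject"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred.get("signature", ""))

def send_funds(amount, counterparty, cred):
    """The sender's own policy: no valid credential, no transfer."""
    if cred is None or cred.get("subject") != counterparty \
            or not credential_is_valid(cred):
        raise ValueError("counterparty has no verifiable credential; refusing transfer")
    return f"sent {amount} to {counterparty}"

# A credential the hypothetical issuer would have produced for "bob":
cred = {"issuer": "gov-example", "subject": "bob",
        "signature": hmac.new(b"issuer-secret", b"bob", hashlib.sha256).hexdigest()}
print(send_funds(10.0, "bob", cred))  # sent 10.0 to bob
```

The essential point is that the check lives with the sender, not with a bank or platform: the law is enforced by the party making the transfer, which is exactly the decentralized shift argued for above.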
So there is a solution, but there is still a lot of IT to be built, and the lawmakers themselves need to take action.
In the meantime, however, a technical implementation can already be integrated on the TFGrid.