Yggdrasil nodes

Given the way that Yggdrasil handles addressing and routing, is it beneficial to the ecosystem to host a “public node” to help with forwarding?

For instance, I currently have a public node running on the same network as my nodes; that node is peered with nodes geographically covering most of the United States and Europe.

Theoretically, if I were to peer it with the primary node that the TFT team is using, would that allow the current stateside nodes to connect to my node and then make only one hop back to the TFT Yggdrasil node?

I guess an important question is whether there is a TFT Yggdrasil node at all, and whether the information I’d need to set up peering to it is available.

This may also become a project of mine for my first deployment on my farm: I could use one of the public subnets to deploy a node on ThreeFold. If I did, would the community be interested in having a private peer available, hosted on a 3Node? It would be a reliable, low-latency connection and bring some utility.

2 Likes

This is a great question. I know that the Alpha 5 release includes some improvements in how Yggdrasil peers are selected. I did some reading in the ZOS source code around these changes, and my understanding is that 3Nodes with public IPs will peer with each other. There’s also some logic for selecting other peers from the lists maintained by Yggdrasil. So simply running 3Nodes with public IPs may be enough to contribute in this way. Running a separate Yggdrasil node could have some added benefit if it’s part of the public lists.

1 Like

Well, functionally I can say that my Yggdrasil node is assigning prefixes to my 3Nodes, so it is auto-peered with them.

If you have ever heard of the project WTFast, it’s a dynamic-routing subscription service meant to lower latency. From my extremely basic understanding of what we are building, the ThreeFold net could function very similarly but with far more capability.

With the struggles of modern networks, the prospect of routing traffic on a global network where the geographical location, link speed, current load, and status of every node are known end to end could seriously bring more efficiency to the net. Even just by making a node publicly available and attaching it to my network, anyone using Yggdrasil could have their traffic forwarded over ThreeFold through my US node and exit the system there.

It’s a basic analogy, but this could essentially be the “live traffic updates for GPS” of the internet. I’ve heard it discussed elsewhere, but integration with something like Helium could create an entirely new system of routing for fixed and mobile devices, with end-to-end encryption, no IP overcrowding, and an end to the “does anyone here know x.x.x.x?” process that routing relies on now.

1 Like

Well said. Also, not everyone thinks about this every day, but the network part of the internet is based on “INTERconnecting NETworks”, and these network providers decide (centralised) how traffic flows (or not). This was a big help (and a necessity) at the beginning of the internet, but by now traffic flows should not be determined by a handful of companies (i.e. people). I worked for more than a decade for such an organisation, and I can tell from experience that there were only a handful of “network” nerds who decided whether traffic would flow eastbound around their global network, or westbound. Even worse, the big tech companies in social platforms and clouds are now connecting the underserved around the world by investing in ever-longer sea cables that tie these people to their central datacenters. The issue is growing, not shrinking.

The Planetary Network, built on Yggdrasil technology, is putting this power (network traffic routing) back into the hands of everyone, as @ParkerS has shown. :clap: Love it!

2 Likes

I would love to know more about how peering is being configured.

There is also a fantastic opportunity, within the deployment of Titans, for utility right now.

If someone has no other devices on their network that require port forwarding, it would be possible to set up the Titan as a DMZ host on the home router, which would allow it to function as a server using the house’s single public IP address.

This would require some configuration changes to the OS; the way things are now, it would create an IP conflict on the network. But I think it’s very doable if there were an option to add a separate public config that just advertised the public IP without adding the interface. Today this could put 1000+ IP addresses up for grabs on the blockchain, and having no devices that need ports is honestly the norm in a home environment.

Beyond this, with the available information, the Yggdrasil peers could be grouped into geographical subgroups, with the highest-bandwidth node in each subsection acting as the hub and the rest as its peers. This would benefit both the network and potential customers, because when someone accesses that public IP, their traffic will be carried through Yggdrasil to the exit closest to them. The potential for eliminating packet loss could be significant.

This could also present an opportunity for low-cost entry into the network: a low-resource, low-power device that you plug in and set as a DMZ host, running a small container that handles forwarding and allows your IP to be used through the net.

2 Likes

Project note, for anyone who’d like to contribute:

Currently assessing whether a single node with a public IP address can front multiple private nodes for public access, using HAProxy and Yggdrasil.

My thought process is: if a public node is configured as a direct peer of a private node, the private node should be able to receive external traffic from the public peer over the Yggdrasil interface. This could offer a pretty unique capability.

If, for example, we were to deploy a single-core VM on node 2914 and deploy HAProxy within that VM, we could set it to listen on the public IP address and forward that traffic through the Yggdrasil interface to the private node; because we have an open peer connection, it will circumvent NAT. The other option would be to make Yggdrasil listen on a specific port and set up port forwarding to the node in use, but the direct peering could be auto-initiated without user configuration.
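
To make that concrete, here’s a minimal sketch of what the direct peering could look like in the two Yggdrasil config files (the listen port is a placeholder of my own, and this is manual configuration, not something ZOS automates today):

On the public node, in yggdrasil.conf:

    Listen: ["tcp://0.0.0.0:9001"]

On the private node, in yggdrasil.conf:

    Peers: ["tcp://108.242.38.185:9001"]

Because the private node dials out to the public one, the session comes up through NAT without any port forwarding on the private side.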

It was mentioned early in the thread that most ISPs do not offer public blocks. This could allow a node that can get public IP addresses to offer them to someone renting from any farm on the net (probably best kept geographically close).

This could theoretically be broken out to the point of renting a specific port on a public IP. For instance, say I want a public IP address to run a Helium validator on a deployed node: HAProxy listens for traffic on that port and forwards that specific port to the external address over Yggdrasil.

I wonder if this could be used to give a single node the ability to route multiple websites. Theoretical example:

A user requests www.smithtacticalsolutions.com, which is hosted on a node (for example 2937) that has a private IP address.

The DNS records for smithtacticalsolutions.com include an SRV record that directs that traffic to node 2914’s address at 108.242.38.185:9130.
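
As a sketch, that record could look like this in the zone file (the gw hostname is a placeholder of mine, and note that ordinary web browsers don’t look up SRV records, so this would only help SRV-aware clients; plain web traffic would still depend on the port-based routing below):

    _https._tcp.smithtacticalsolutions.com. 3600 IN SRV 10 5 9130 gw.smithtacticalsolutions.com.
    gw.smithtacticalsolutions.com.          3600 IN A   108.242.38.185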

Node 2914 runs the VM with HAProxy listening on port 9130 and forwards that traffic to node 2937’s Yggdrasil interface, which is a direct peer and is bridged to the VM interface on the host.

I think I have come up with a solution. I’m currently at work and can’t test it yet, but here is a map of the proposed solution.

[diagram: map of the proposed solution around node 2914]

With node 2914 running a VM and HAProxy configured as:

frontend Smithtacticalsolutions.com
    bind 108.242.38.185:9110
    bind 108.242.38.185:9120
    bind 108.242.38.185:9130
    bind 108.242.38.185:9140
    bind 108.242.38.185:9150
    mode tcp
    log global
    timeout client 4h
    # the HTTP-only directives from my first draft (http-keep-alive,
    # forwardfor, the header captures, and the hdr() ACLs) are ignored or
    # rejected in a TCP-mode frontend, so they are gone; ssl-hello-chk and
    # the connect/tunnel timeouts are backend-side settings, so they moved
    # to the backends; v4v6 is dropped because these binds use an explicit
    # IPv4 address
    # route purely on destination port
    acl private1 dst_port 9110
    acl private2 dst_port 9120
    acl private3 dst_port 9130
    acl private4 dst_port 9140
    acl private5 dst_port 9150
    # use_backend is lowercase, and the names must match the backends exactly
    use_backend Private1 if private1
    use_backend Private2 if private2
    use_backend Private3 if private3
    use_backend Private4 if private4
    use_backend Private5 if private5

backend Private1
    mode tcp
    # the HTTP-only lines from my first draft (http-keep-alive, http-reuse,
    # timeout http-request, timeout http-keep-alive) don't apply in TCP mode
    # and are removed here and in the backends below
    option ssl-hello-chk
    retries 3
    timeout connect 3s
    timeout server 15s
    timeout queue 60s
    timeout check 10s
    # persist on the TLS session ID taken from the client/server hellos
    stick-table type binary len 32 size 30k expire 30m
    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    tcp-response content accept if serverhello
    stick on payload_lv(43,1) if clienthello
    stick store-response payload_lv(43,1) if serverhello
    server private1 (private 1 ygg address) check

backend Private2
    mode tcp
    option ssl-hello-chk
    retries 3
    timeout connect 3s
    timeout server 15s
    timeout queue 60s
    timeout check 10s
    stick-table type binary len 32 size 30k expire 30m
    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    tcp-response content accept if serverhello
    stick on payload_lv(43,1) if clienthello
    stick store-response payload_lv(43,1) if serverhello
    server private2 (private 2 ygg address) check

backend Private3
    mode tcp
    option ssl-hello-chk
    retries 3
    timeout connect 3s
    timeout server 15s
    timeout queue 60s
    timeout check 10s
    stick-table type binary len 32 size 30k expire 30m
    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    tcp-response content accept if serverhello
    stick on payload_lv(43,1) if clienthello
    stick store-response payload_lv(43,1) if serverhello
    server private3 (private 3 ygg address) check

backend Private4
    mode tcp
    option ssl-hello-chk
    retries 3
    timeout connect 3s
    timeout server 15s
    timeout queue 60s
    timeout check 10s
    stick-table type binary len 32 size 30k expire 30m
    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    tcp-response content accept if serverhello
    stick on payload_lv(43,1) if clienthello
    stick store-response payload_lv(43,1) if serverhello
    server private4 (private 4 ygg address) check

backend Private5
    mode tcp
    option ssl-hello-chk
    retries 3
    timeout connect 3s
    timeout server 15s
    timeout queue 60s
    timeout check 10s
    stick-table type binary len 32 size 30k expire 30m
    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    tcp-response content accept if serverhello
    stick on payload_lv(43,1) if clienthello
    stick store-response payload_lv(43,1) if serverhello
    server private5 (private 5 ygg address) check

I am not a professional, and I’m new to HAProxy and CLI interfaces, so if I did something stupid in this config, let me know. The end goal I’m working towards is being able to direct connections based on both destination port and hostname: sorting by port allows a non-SSL connection to still route properly, whereas hostname routing would make configuration much simpler. Yggdrasil would be configured to peer the server node and the HAProxy node directly with each other.
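
For the hostname half of that goal, here’s a sketch of what SNI-based routing could look like in TCP mode (the second domain is a made-up placeholder):

    frontend sni_router
        bind 108.242.38.185:443
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
        # req_ssl_sni reads the server name out of the TLS ClientHello, so
        # the proxy needs no certificates and never decrypts the traffic
        use_backend Private1 if { req_ssl_sni -i smithtacticalsolutions.com }
        use_backend Private2 if { req_ssl_sni -i example.org }
        default_backend Private1

Since SNI only exists in TLS, the per-port frontend above would remain the fallback for non-SSL connections.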

If this were successful, a single node could be configured to pass multiple NICs through to the VM, and an entire public block could be hosted on that node and distributed to other nodes on the net, essentially turning public-access nodes into routers for the entire ThreeFold network, with every node able to act as a “local” device behind the gateway.

1 Like

I wanted to add some documentation on the theory and goal behind my post above, so that anyone who sees a better way to accomplish the same thing can fork from here. My functional goal is this:

Goal: establish the ability for a home user to host multiple publicly accessible 3Nodes within the confines of a single IP address.

Relation to mission: a truly decentralized internet requires overcoming the current real-estate shortage for routable address space on the internet.

Secondary goals:
Minimal resource usage
Secure packet transmission
Predictable, but actively configurable, routes
100% open source

Significant challenges:
Integration into existing technology
Need for maintained zero configuration
Rapid adoptability

With a clear path set, and having done a few hours of familiarization, I think the answer to this problem lies within Zero-OS and the tools we all know from traditional brick-and-mortar web hosting. Currently ZOS can receive external commands without a public address through its Yggdrasil interface, as it stands today. But there is a roadblock in that interaction for the public, because it would require them to set up and configure the interface on their own device. This prompts the need for a configuration that allows a node that does have access to a public IP to support its neighbors and provide routing for them. It would effectively create a communication layer unique to ThreeFold that, because we control both ends of the connection, can be much more effective than traditional routing.
I think the answer lies within Yggdrasil: a subsystem able to deploy configurations to Yggdrasil within ZOS on a private-IP node would let that node initiate a peer-to-peer tunnel to a public-IP node and use that node’s public IP with the private-IP server’s resources.
The problem is that this would still require someone to buy a massive number of IP addresses, so how do we fix that? I think there are two options: channeling a single IP out by port within the ThreeFold network, or using SNI. Switching to SNI outside of the ThreeFold network wouldn’t be realistic, because not every device supports it and it could get pretty wonky with conflicts. We don’t have that problem here, though, because again we control both endpoints of the communication in this scenario. Something as simple as naming each given interface sequentially when the blockchain does the naming could prevent conflicts.

So client 1 chooses a farm based on available hardware that doesn’t have a public IP; a lower-resource farm has one available with a great connection rating, so they add that public IP onto their order. This creates a task that updates the Yggdrasil config on both machines, adding the two nodes to each other’s peer lists. A VM is deployed on the machine holding the public IP, and it directs traffic through Yggdrasil to the named host on the private IP.

Mount your waders, we’re about to get deep. Under the current configuration, this could also allow a machine to change its IP, or fail an IP over from one node to another in the same farm automatically, making a failure transparent outside of the ThreeFold network. Instead of configuring the public IPv4 on an individual node, all nodes with access to the same public subnet could be configured so that when a client chooses that IP address, the farm queries all nodes for status and assigns that static address to the node with the least current load. If that node goes offline, the process repeats, allowing another node’s NIC to grab that IP address and redeploy the forwarding automatically.

If there is one thing no other network can currently offer, it is abundant, affordable public address space. By using Yggdrasil to create an IPv6-only “transport layer” between nodes, we can create a publicly accessible network that won’t have to compete with other providers, because we will be able to offer something no one else can. With the technology available today, we can turn every IP address available on ThreeFold into ten public addresses, reduce worldwide network overloads, and roll out new technologies across a worldwide deployment on the fly.

2 Likes

I’ve started another thread that will serve as a master build log, but over the past few days I was able to successfully deploy an independent device and peer it with the major US public nodes as well as the TFT grid, with some creative structuring. It’s not online now; I intend to put it back up in about two hours, and if all goes well I’ll leave it up so that we can determine whether bridges deployed alongside 3Nodes and peered outside the network may improve transport and I/O performance.

---------------------------is online now--------------------------
Took longer than expected; I hit some unexpected network conflicts.


The US peer list I’m using:
[screenshot: US peer list]

How I got the nodes to interface to form a bridge (two of the three share a switch, and the exchange is happening on the switch from what I can tell); it does not work on the ones not sharing a switch:

[screenshot: bridge configuration]

1 Like

[screenshot: Yggdrasil network map]

My public node is now in the core; if my understanding is correct, this should be a good thing for the Planetary Network.

This is kind of wild: according to this, my device can see nearly every device using Yggdrasil, including those in some of the private nets like Famedly. My access times, even on my home devices, have improved. My home router’s network, which has no Yggdrasil devices on it, is routing to Yggdrasil because it shares a WAN interface with my public node, which can grab the outbound IPv6 address and reroute it OUTSIDE of my protected network. With no config at all, the abilities here are insane.

OpenWrt is LEDE-based; could a package be built that anyone could download, pre-configured to link to the Planetary Network? I mean, everything seems to already be there…

Basically, repackage Yggdrasil and the LuCI interface into a Planetary Network package pre-configured with the ThreeFold peer list. There is abundant hardware in supply with price tags under $100; we could have literally thousands of “network” nodes sharing their bandwidth and routes.
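
As a rough sketch of what that looks like by hand on an OpenWrt device today (the package names are my assumption about what’s in the feeds, and the ThreeFold peers would still need to be added manually):

    opkg update
    opkg install yggdrasil luci-app-yggdrasil
    # add the ThreeFold/public peers to the Yggdrasil config, then:
    /etc/init.d/yggdrasil restart

The pre-configured package idea would essentially bake the peer list into that last step.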

Possibly feasible project branch:

3Nodes with established links, paired with a Helium node and a wireless AP.

The 3Node serves as the connection.

The Helium miner advertises the connection over LoRaWAN.

A LoRaWAN-enabled “client” AP sees the available 3Node connection and connects: internet available all over the world, fully paid in TFT and HNT.

The client gateway could be software-based, in any device with OpenWrt, LoRaWAN, and WiFi.

LoRaWAN could be integrated into OpenWrt through existing USB ports using kmod-style firmware.

[screenshot: Yggdrasil network map]

Can anyone explain what has happened here?

Hey, I really love the deep dive you’re taking here. I meant to comment earlier and explain how some aspects of networking in the Grid are already working along the lines you’re thinking.

First off, all workloads belong to a private overlay network implemented with Wireguard. A bit like how Yggdrasil works, even nodes with no public access can form Wireguard links. Therefore, only a single workload with a public IP is required to get traffic into the private network.
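
For anyone who hasn’t seen Wireguard config before, here’s a minimal sketch of the two config files behind that kind of link, with placeholder keys and addresses rather than anything the Grid actually generates:

    # node A's config (publicly reachable)
    [Interface]
    PrivateKey = <node-a-private-key>
    Address = 10.100.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey = <node-b-public-key>
    AllowedIPs = 10.100.0.2/32

    # node B's config (behind NAT); it dials out to node A, so it needs no
    # inbound port of its own
    [Interface]
    PrivateKey = <node-b-private-key>
    Address = 10.100.0.2/24

    [Peer]
    PublicKey = <node-a-public-key>
    AllowedIPs = 10.100.0.0/24
    Endpoint = 203.0.113.7:51820
    PersistentKeepalive = 25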

Furthermore, the network also has “web gateways” that allow a single node with one public IP to receive inbound traffic for different workloads. It does this by allocating subdomains on a domain that the farmer owns and routing them towards the publicly accessible node.

I like what you’re saying about automatic fallback/switchover for public IPv4 assignment. It would be very cool if public IPs could do some kind of load balancing among different physical nodes within the same network where that IP is terminated. Then failure of a single node could be tolerated by having a workload hosted on another node serve the request.

Looks a bit different than the last graph you posted, but aside from that, I’m stumped :slight_smile: I’ll take some more time to digest what you’re suggesting about a potential Helium integration too.

1 Like

On the map:

I’m relatively convinced I’ve successfully, but accidentally, got my nodes peered with all the other nodes. I attempted to isolate my public node from the 3Nodes, but Yggdrasil seems to have found its way regardless.

I haven’t messed with it in a while because I want to move my public node to a proper box instead of a travel router (which is performing surprisingly well).

But my public node made a significant move to the core of the map when my nodes started sharing the interface. My nodes have had basically solid network traffic lights for a couple of weeks now, and I’m moving 6-18 GB a day, whereas I was using 2-4 before. There’s a lot of behavior I honestly can’t explain.

If I take my public node offline, it seems to have a pretty big, instant effect on the map layout… which is also why I quit playing with it.

For reference, my public node peers with all the other public nodes, so my theory is that it’s completing a loop between the ThreeFold nodes and the public nodes through interface peering.

1 Like

Migrated the public node from the travel-router testing environment to a Xeon E5520 with 28 GB of RAM; it shouldn’t have issues with getting overworked anymore.