Verify configuration for public IPs

Hey guys,

we are running a farm in a DC and have now completed the networking configuration for the use of public IPs. Now we are looking for someone to test the config on a particular node. Is there anyone out there who can do this? It would be highly appreciated!

Please DM me for information on node, farm and public IP.

Thanks!!


Hi @Dany, happy to check. The farm name should be enough for me to find what I need. Looking forward to testing!

Server 1983

  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

grid_network.proxy01: Creating...
grid_network.proxy01: Still creating... [10s elapsed]
grid_network.proxy01: Still creating... [20s elapsed]
grid_network.proxy01: Still creating... [30s elapsed]
grid_network.proxy01: Still creating... [40s elapsed]
grid_network.proxy01: Creation complete after 45s [id=6a509d6e-ce42-45b4-b5b8-560d115ef5f2]
grid_deployment.p1: Creating...
grid_deployment.p1: Still creating... [10s elapsed]
grid_deployment.p1: Still creating... [20s elapsed]
grid_deployment.p1: Still creating... [30s elapsed]
grid_deployment.p1: Creation complete after 33s [id=3572]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

public_ip = "178.250.167.69/26"
ygg_ip = "301:5393:bfab:9bfd:35f:140b:5736:7042"
➜  terraform-dany git:(main) ✗ 
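
The public_ip output carries the prefix length along with the address. As a quick sanity check (a sketch using Python's standard ipaddress module, with the addresses from the output above), the /26 can be unpacked like this:

```python
import ipaddress

# Address exactly as reported by the Terraform output, with prefix length.
iface = ipaddress.ip_interface("178.250.167.69/26")

print(iface.ip)                         # 178.250.167.69
print(iface.network)                    # 178.250.167.64/26
print(iface.network.broadcast_address)  # 178.250.167.127
# Conventionally the first usable host in the block is the gateway.
print(next(iface.network.hosts()))      # 178.250.167.65
```

So the VM should sit in 178.250.167.64/26 with .65 as its gateway and .127 as broadcast.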

Logging in over SSH using the public IP does not connect. A quick switch to the planetary network IP, and then ip a:

root@proxy1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether be:6a:b7:55:fb:a0 brd ff:ff:ff:ff:ff:ff
    inet 10.1.2.2/24 brd 10.1.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd63:6a56:3957:2::2/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::bc6a:b7ff:fe55:fba0/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 1e:40:27:8a:f7:56 brd ff:ff:ff:ff:ff:ff
    inet 178.250.167.69/26 brd 178.250.167.127 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1c40:27ff:fe8a:f756/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether e2:0d:ad:a9:66:e2 brd ff:ff:ff:ff:ff:ff
    inet6 301:5393:bfab:9bfd:35f:140b:5736:7042/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::e00d:adff:fea9:66e2/64 scope link 
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:5e:8b:75:d1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

The IP address is assigned to an interface, but routing from the outside is not fully working yet. Again from my laptop:

➜  terraform-dany git:(main) ✗ ping 178.250.167.69
PING 178.250.167.69 (178.250.167.69) 56(84) bytes of data.

And from within the VM pinging 1.1.1.1 from the public interface:

root@proxy1:~# ping -I eth1 1.1.1.1
PING 1.1.1.1 (1.1.1.1) from 178.250.167.69 eth1: 56(84) bytes of data.

It does not route to the outside world. Is the problem in the VM's routing, or in the switch / router?

root@proxy1:~# ip r
default via 178.250.167.65 dev eth1 
10.1.0.0/16 via 10.1.2.1 dev eth0 
10.1.2.0/24 dev eth0 proto kernel scope link src 10.1.2.2 
100.64.0.0/16 via 10.1.2.1 dev eth0 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
178.250.167.64/26 dev eth1 proto kernel scope link src 178.250.167.69 
root@proxy1:~# 
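
The route table itself looks sane: the default gateway sits inside the connected /26 on eth1, so the kernel should be able to ARP for it directly. A small check (just a sketch, same addresses as above):

```python
import ipaddress

gateway = ipaddress.ip_address("178.250.167.65")
onlink = ipaddress.ip_network("178.250.167.64/26")

# The next hop of a default route must be reachable on a connected
# prefix; otherwise the route would be unusable from the start.
print(gateway in onlink)  # True
```

Since the gateway is on-link, a failure here points at layer 2 (ARP / the switch) or further upstream, not at the VM's route table.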

So in conclusion:

  • planetary network ingress and egress works
  • public (IPv4) does not work. My best guess is that there is a routing issue somewhere in the switch / router. If I ping the default gateway:
root@proxy1:~# ping 178.250.167.65
PING 178.250.167.65 (178.250.167.65) 56(84) bytes of data.
From 178.250.167.69 icmp_seq=1 Destination Host Unreachable
From 178.250.167.69 icmp_seq=2 Destination Host Unreachable
From 178.250.167.69 icmp_seq=3 Destination Host Unreachable
From 178.250.167.69 icmp_seq=4 Destination Host Unreachable
From 178.250.167.69 icmp_seq=5 Destination Host Unreachable
From 178.250.167.69 icmp_seq=6 Destination Host Unreachable
From 178.250.167.69 icmp_seq=7 Destination Host Unreachable
From 178.250.167.69 icmp_seq=8 Destination Host Unreachable
From 178.250.167.69 icmp_seq=9 Destination Host Unreachable

It does not reply.

Ok… I may have found the routing error. I double-checked previous settings on the router, where I had first tried to bridge two interfaces (WAN - DMZ) to make the public IP range available “behind” the router (for monitoring purposes). It looks like a stale entry for the .69 was left in the ARP table, even though the DMZ interface is inactive and the rules are disabled. It should be fixed now.

Can you please check once again?


The 3Nodes' second NIC ports are connected to a switch which is uplinked directly to the DC. By doing so, our firewall/router is bypassed completely. The public IP net is routed to the uplink provided by the DC. I tested this and can confirm correct routing (as far as I can see).

I’ll try again now… Give me 10 mins.

Of course! Thank you very much!

I still cannot reach the public IP… sorry.

Default Gateway:

➜  terraform-dany git:(main) ✗ ping  178.250.167.65
PING 178.250.167.65 (178.250.167.65) 56(84) bytes of data.
64 bytes from 178.250.167.65: icmp_seq=1 ttl=242 time=179 ms
64 bytes from 178.250.167.65: icmp_seq=2 ttl=242 time=173 ms
64 bytes from 178.250.167.65: icmp_seq=3 ttl=242 time=123 ms

Check - can get there.

(New) VM deployed, and ping / tracepath:

 1?: [LOCALHOST]                      pmtu 1500
 1:  _gateway (192.168.0.1)                               31.778ms asymm  2 
 1:  _gateway (192.168.0.1)                               32.308ms asymm  2 
 2:  5.195.3.81 (5.195.3.81)                             101.101ms 
 3:  94.56.186.5 (94.56.186.5)                            41.899ms 
 4:  86.96.144.36 (86.96.144.36)                          12.336ms 
 5:  86.96.144.18 (86.96.144.18)                         1000.436ms asymm  6 
 6:  195.229.1.76 (195.229.1.76)                         175.342ms 
 7:  195.229.3.175 (195.229.3.175)                       306.793ms 
 8:  AMDGW2.arcor-ip.net (80.249.208.123)                185.910ms 
 9:  de-dus23f-rb01-be-1050.aorta.net (84.116.191.118)   239.203ms asymm 11 
10:  ip-005-147-251-214.um06.pools.vodafone-ip.de (5.147.251.214) 248.896ms asymm 11 
11:  176-215.access.witcom.de (217.19.176.215)           253.562ms asymm 12 
12:  176-219.access.witcom.de (217.19.176.219)           137.063ms asymm 13 
13:  no reply
14:  no reply
15:  no reply
16:  no reply
17:  no reply

And the same output for the VM’s IP:

➜  terraform-dany git:(main) ✗ tracepath -b 178.250.167.69
 1?: [LOCALHOST]                      pmtu 1500
 1:  _gateway (192.168.0.1)                              351.430ms asymm  2 
 1:  _gateway (192.168.0.1)                               40.293ms asymm  2 
 2:  5.195.3.81 (5.195.3.81)                               4.763ms 
 3:  94.56.186.5 (94.56.186.5)                            19.766ms 
 4:  no reply
 5:  86.96.144.18 (86.96.144.18)                         829.805ms asymm  6 
 6:  195.229.1.76 (195.229.1.76)                         359.504ms 
 7:  195.229.3.145 (195.229.3.145)                       143.151ms 
 8:  AMDGW2.arcor-ip.net (80.249.208.123)                211.805ms 
 9:  de-dus23f-rb01-be-1050.aorta.net (84.116.191.118)   188.953ms asymm 11 
10:  ip-005-147-251-214.um06.pools.vodafone-ip.de (5.147.251.214) 400.730ms asymm 11 
11:  176-215.access.witcom.de (217.19.176.215)           159.937ms asymm 12 
12:  176-219.access.witcom.de (217.19.176.219)           217.842ms asymm 13 
13:  no reply
14:  no reply
15:  no reply
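
Both traces above stall at the same point. A throwaway helper (hypothetical, purely for reading the transcripts) makes the last responding hop explicit:

```python
def last_replying_hop(transcript: str) -> str:
    """Return the name/IP of the last hop that replied in tracepath output."""
    last = None
    for line in transcript.splitlines():
        parts = line.split()
        # Hop lines look like: "12:  176-219.access.witcom.de (217.19.176.219) ..."
        if len(parts) >= 3 and parts[0].rstrip(":").isdigit() and parts[1] != "no":
            last = parts[1]
    return last

# The tail of either trace above:
tail = """11:  176-215.access.witcom.de (217.19.176.215)  159.937ms asymm 12
12:  176-219.access.witcom.de (217.19.176.219)  217.842ms asymm 13
13:  no reply
14:  no reply"""
print(last_replying_hop(tail))  # 176-219.access.witcom.de
```

Both the gateway trace and the VM trace get as far as 217.19.176.219 and then go silent.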

So somewhere at the router, traffic stops…

I don’t understand this. I checked the routing one more time and it looks good. I temporarily allowed responses to pings on the router’s WAN interface (IP = .68). Routing works fine… see below:

[screenshot]

So from the DC side, the routing from their gateway to our public IP net seems to be set up correctly.
For troubleshooting I added a virtual IP (WAN IP alias) for the .69, and pings to this address also work fine. Traceroute looks like this:

[screenshot]

I can even redirect (port-forward) the .69 to the node's LAN interface. Traceroute shows one additional hop:

[screenshot]

As I said… there is no firewall or router between the DC gateway and the second NIC port of each node. The second NIC ports are connected directly to a switch that the DC uplink is also connected to. The router's WAN interface goes to this switch as well and is working perfectly fine. Technically, the WAN interface is on the same topology level as the second NIC ports. If the router's WAN side is working, public IPs on the secondary NIC ports should be working too.

Just to be absolutely sure… can you please confirm the physical setup is right:

  • The first NIC port of each 3Node is connected to a LAN, where it gets an IP and gateway from a DHCP server.
  • The public IP address(es) will be assigned to a second physical NIC port on the node, correct?
  • Therefore this second NIC port has to be attached to a separate network that the public IP range is routed to, correct? So from my understanding, the node has its connection to the grid via the LAN and is then told from the grid side: “Hey node, please assign yourself this public IP on your secondary NIC port.” Correct?

What confuses me is that ZOS still says no public config.

[screenshot]

I really don’t understand this. Will have to further investigate.

I used the TF Chain portal to assign public IP address(es) to the farm.

As described here (https://library.threefold.me/info/manual/#/manual__public_config), public IPs should be assigned in the Polkadot UI. When I try to check this in the Polkadot UI, I get the following errors:

[screenshot]

[screenshot]

any ideas?

Dany, that’s how I configured 2914, 3049 and 3081. I can jump on and try to help? I put a walk-through in my multi-gig thread as well.

You may need to add the front end manually in the portal for the config to apply successfully.

Public Node Setup for the home user. [screenshot]

Once the config applies properly, it will show a public config on the main screen:
[screenshot]

Hey Parker,

Thanks for the advice. It's up and running now! It seems like it's not enough to just offer the public IP to the farm via the TF Chain portal. After assigning the public IP to the particular node via the Polkadot UI, it works.


Ah yes, I couldn’t do it in the portal either. Exciting to see more gateways popping up! I’m working on getting my little home DC setup to handle a 64-address block in the next couple of weeks.


@weynandkuijpers could you please check one more time? It should work now.

The server is pingable! See…

[screenshot]

Traceroute looks good too.

[screenshot]


Hey Parker,

can you give me some details on your cabling? Do you have a second NIC port in use for the public IPs on the nodes? I did some more testing and now have different scenarios configured.

Have you ever tested routing to the public IPs assigned to your nodes? I mean not from the inside of your network, but from the web?

I’m only using a single NIC connection; it seems to be creating virtual interfaces, because the nodes are using MACs that don’t physically exist in the boxes.

I have a primary gateway with a run to each node that handles all the subnetting. It really was pretty minimal on the configuration side once I realized that the node creates another interface with a static address, using the information from the public config you put in via the Polkadot UI.

I’ve never had a test run, but I had the impression they were working properly; it would be news to me if they weren’t.

So I just have my gateway hand out private subnet addresses at boot; then, when I added the public config, it created the static interface with an address inside the public subnet range.


As I plan to bring some nodes to a local datacenter, I’m following this thread. Seems we really could use a guide for dummies (me, that is) on how to set up your node in a datacenter correctly. :hugs:


There’s definitely not a lot of info on the gateway side. I have a few threads going with bits and pieces here.

But I’m working on some bigger-picture, more general stuff now that will be more widely applicable as I figure it out myself. I’m active on Discord, Telegram, and here; let me know if I can help!

Could you make sure my public IP config is working correctly on 2914, 3049 or 3081? Dany pointed out I’ve never tested it, and I’ve been answering lots of questions assuming my setup was correct…

[screenshot]

Dany, congrats on running the first German gateway nodes!

There are only gateways in four countries right now, and as far as I know, you and I are the only DIY gateways.
