Verify configuration for public IPs

Hi @Dany,

All looks good!

➜  terraform-dany git:(main) ✗ ssh root@178.250.167.69
ssh: connect to host 178.250.167.69 port 22: Connection refused
➜  terraform-dany git:(main) ✗ ping 178.250.167.69
PING 178.250.167.69 (178.250.167.69) 56(84) bytes of data.
64 bytes from 178.250.167.69: icmp_seq=1 ttl=51 time=135 ms
64 bytes from 178.250.167.69: icmp_seq=2 ttl=51 time=126 ms
64 bytes from 178.250.167.69: icmp_seq=3 ttl=51 time=137 ms
64 bytes from 178.250.167.69: icmp_seq=4 ttl=51 time=120 ms
64 bytes from 178.250.167.69: icmp_seq=5 ttl=51 time=128 ms
^C
--- 178.250.167.69 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 119.557/129.110/136.627/6.248 ms
➜  terraform-dany git:(main) ✗ ssh root@178.250.167.69   
The authenticity of host '178.250.167.69 (178.250.167.69)' can't be established.
ED25519 key fingerprint is SHA256:GZduV+dbPnaRHqRwNzoO1JRVXS5eH+E7ZKeTcfzLrX8.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:33: 301:d12e:3351:4208:c32b:a725:4b6c:aa03
    ~/.ssh/known_hosts:35: 301:d12e:3351:4208:9a7c:cf44:d2d4:f788
    ~/.ssh/known_hosts:36: 301:d12e:3351:4208:6243:5c48:b899:8932
    ~/.ssh/known_hosts:37: 301:5393:bfab:9bfd:35f:140b:5736:7042
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '178.250.167.69' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.12.9 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@proxy1:~# 

Congratulations! The connection refused in the beginning is because this flist starts a few daemons, and that takes some time. The last step in the process is to start sshd. As you can see, it got there and works.
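If you don’t want to retry by hand while the daemons come up, you can poll port 22 until sshd accepts connections. A minimal sketch in Python (the address is just the node from the session above):

import socket
import time

HOST = "178.250.167.69"  # the node from the session above

# Poll TCP port 22 until sshd inside the freshly booted flist is up,
# instead of retrying ssh by hand.
while True:
    try:
        with socket.create_connection((HOST, 22), timeout=5):
            print("sshd is up")
            break
    except OSError:  # covers connection refused and timeouts
        time.sleep(10)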

Let’s cable the other nodes as well :rocket:


I’ll test it now… :slight_smile:

Hi @parker, something does not sit right with this setup:

  Enter a value: yes

grid_network.proxy01: Creating...
grid_network.proxy01: Still creating... [10s elapsed]
grid_network.proxy01: Still creating... [20s elapsed]
grid_network.proxy01: Still creating... [30s elapsed]
grid_network.proxy01: Creation complete after 34s [id=637b3ef2-a634-49cd-ac6e-0cdb19f4e987]
grid_deployment.p1: Creating...
grid_deployment.p1: Still creating... [10s elapsed]
grid_deployment.p1: Still creating... [20s elapsed]
╷
│ Error: error waiting deployment: workload 0 failed within deployment 3608 with error could not get public ip config: public ip workload is not okay
│ 
│   with grid_deployment.p1,
│   on main.tf line 21, in resource "grid_deployment" "p1":
│   21: resource "grid_deployment" "p1" {
│ 

I used the same terraform deployment script as for @Dany’s test, just replaced his node 1983 with your node 3081. Please have a look at the farm setup in https://portal.grid.tf, in the farm / public IP section (you should have a /something defined there which is not a /32).
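To illustrate why a /32 can’t work there: a /32 is a single address, so there is no host range left for the grid to hand out to deployments. A quick check with Python’s ipaddress module (the prefixes below are made-up examples, not your farm’s values):

import ipaddress

# A /32 contains exactly one address: nothing to allocate to workloads.
single = ipaddress.ip_network("203.0.113.7/32")
print(single.num_addresses)  # 1

# A /29 leaves six host addresses between network and broadcast.
block = ipaddress.ip_network("203.0.113.0/29")
print([str(h) for h in block.hosts()])
# ['203.0.113.1', ..., '203.0.113.6']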

@Dany, any more suggestions to check?

Yeah… Great!! Thank you very much @weynandkuijpers.

We will connect the other nodes as soon as possible. I’ll let you know when it’s done!

Thanks again! :+1:


This is my config under the node specifically:

[screenshot: node public config]

This is under the farm:

[screenshot: farm public IP range]

I am not a network expert, but I know my way around networks (a bit…). The first thing that stands out to me is that you specify a /29, which is 8 IP addresses. As far as I remember the convention, the lowest (IPv4) address is the network address and the highest IPv4 address is the broadcast address of the subnet. Usually the gateway is the one address above the network address (in your case 108.242.38.185). You have chosen .190 to be the gateway, at the far end of the usable range.

I have not read the code that distributes addresses, but I can imagine that it uses that convention, and therefore it might create a conflict. Let me ask one of the network gurus.

Some of that is because, though I have an 8-address block, AT&T controls the gateway address: my network address is 108.242.38.184, the gateway is .190, the usable range is .185-.189, and the broadcast is .191. So I have 5 usable addresses currently.
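For reference, here is how the block boundaries fall out; a quick check with Python’s ipaddress module, using the addresses above:

import ipaddress

net = ipaddress.ip_network("108.242.38.184/29")
print(net.network_address)    # 108.242.38.184 (network address)
print(net.broadcast_address)  # 108.242.38.191 (broadcast)
print([str(h) for h in net.hosts()])
# ['108.242.38.185', ..., '108.242.38.190'] -- six host addresses;
# AT&T reserves .190 as the gateway, leaving .185-.189 usable here.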

Does this test require that I have additional IPs set up beyond the one IP directly associated with the node’s public config?

Hey Dany, I get the vibe you’re more of a business-model host with your rack plans. Do you plan to look for people to have hosting agreements with in the future?

Hey parkers, well… with this DC farming project we wanted to find out if this is technically and economically feasible, and also sustainable. We also wanted it done in the most professional way possible. That might make it look like we are going for a business here, but for now we’re not particularly looking to offer some kind of co-farming service (if that’s what you are referring to). But we are open to any kind of cooperation. The grid has to grow further, and from my point of view DC farms are essential for the grid. Apart from that… since we are now close to finishing this setup… I feel that I’m still hungry :wink:


I’ve noticed Flux has some sort of node-renting program. I haven’t really looked into it, but it could be a starting point for both of us to look into.

So the expert (on holiday) responded: “In order to do this, the server needs to have 2 NICs connected. Do you have 2 NICs connected? If not, add a network cable to the switch and it should work after a reboot.”

No suggestions at the moment. I have to dig deeper. I found out that there are different ways to assign public IPs (e.g. via the TFChain Portal or via the Polkadot UI). It also looks like there are different hardware setups, like using only one NIC port which can share the LAN and public IP. I am setting up different scenarios and trying to make them work. Will come up with the results ASAP.

Copy that, I definitely don’t have it set up with the second NIC run. Will get that fixed ASAP.

Yeah… this was really confusing to me! Good to know now that 2 NICs are necessary.

Let me give you an update on what I found out so far:

When you assign a public IP via the Polkadot UI to a specific node ID, zos will assign that public IP as an additional address on the primary NIC port, the one the LAN IP is assigned to. This happens whether the node has only that one NIC connected or two (or more). What’s crazy is what I experienced when trying to ping the assigned public IPs. When you try to ping the public IP address on a node that has only one port connected, it will fail. But if you try to ping a node with two connected NIC ports, it will respond. The response does not come from the second NIC like I was expecting, though; it comes from the primary port where the LAN IP is also assigned.

Actually, I don’t understand why public IPs can even be assigned via the Polkadot UI. As far as I understand it, a public IP pool (or just one IP) can be assigned to a specific farm, so the nodes in it are offered the use of one or more of these IPs. As long as these available IPs aren’t “booked” by someone, they won’t be assigned to a node. In this scenario a farmer would not know which particular node will use which specific public IP, so it shouldn’t be manually fixed in the Polkadot UI. Am I missing something?

Well… as I said… I have to dig deeper. What I can confirm now, after Weynand was so kind to do the testing on one node, is that I got it working by using two NIC ports: the first NIC port connected to the LAN, the second one connected directly to the WAN (no router/firewall). Public IPs were configured in the TFChain Portal only; no configs were modified in the Polkadot UI. After Weynand deployed a VM and contracted a public IP for it, a public IP was assigned on the second NIC and was usable from outside. After Weynand ended the deployment for the VM and the use of the public IP, the second NIC lost the assigned IP and was no longer pingable. I think that’s how it’s supposed to be.

Thank you for finding this out! Will be dropping a lot of cables and getting the second NICs online in the next couple of days.

Double reply because ADHD life. I bring up looking for hosts because I’ve found myself with a very affordable connectivity package, but I don’t have the resources to put it to full use. So if you ever get to the point where you would want to invest some equipment in a co-op farm, I’d be open to discussing providing the space, connection, and power.

I don’t think you need a second NIC; you should have enough ports already.

Just verbiage. I mean a second cable to the existing additional ports.

Hey, all three nodes have a second network run now. Could you retest?

The farm is beginning to move into the flight space; it’s like a wild vine plant on the walls :joy:

[photo: cabling in the farm]

Hi @ParkerS. Will get on it now. Same node, 3081, okay?