Verify configuration for public IPs

Let’s do 2914 or 3049, they have the most normal setup

Okay, I’ll do it now.

Fingers crossed, I’m on duty for a couple days, but i had a patient drop off near the house so swung by and added the second switch quickly lol

Hi, sorry to come back with a no: it seems that somehow a typo has crept into the farm definition:

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

grid_network.proxy02: Creating...
grid_network.proxy02: Still creating... [10s elapsed]
grid_network.proxy02: Still creating... [20s elapsed]
grid_network.proxy02: Creation complete after 29s [id=cf94a9dc-f474-4a23-a9df-2b339f78efc2]
grid_deployment.p1: Creating...
grid_deployment.p1: Still creating... [10s elapsed]
grid_deployment.p1: Still creating... [20s elapsed]
╷
│ Error: error waiting deployment: workload 0 failed within deployment 3673 with error found a malformed gateway address in farm object '108.424.38.190'
│
│   with grid_deployment.p1,
│   on main.tf line 21, in resource "grid_deployment" "p1":
│   21: resource "grid_deployment" "p1" {

The second octet of the gateway address is entered as 424 (108.424.38.190), while your screenshot shows 108.242.38.190.
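A transposed octet like this can be caught before running `terraform apply`. As a minimal sketch (using only the addresses from this thread), Python's standard `ipaddress` module rejects any IPv4 octet above 255:

```python
import ipaddress

def check_gateway(addr: str) -> bool:
    """Return True if addr parses as a valid IP address."""
    try:
        ipaddress.ip_address(addr)
        return True
    except ValueError:
        return False

print(check_gateway("108.424.38.190"))  # False: octet 424 is out of range
print(check_gateway("108.242.38.190"))  # True
```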

lol, at least it's an easy fix I can do from here, will update here shortly

Very odd. The Explorer and the public config section on the portal were both showing the correct gateway address, but I have resubmitted the node public configs on 2914 and 3049.

Interestingly, I still got a "node public config failed" error when trying to update through the portal and had to use the Polkadot UI.

Edit: I thought it would be the node public config, but it was actually the farm public IP config that was incorrect, and that has also been corrected.

Should I be using different IPs for the public IPs section under farms than the ones I assign to each individual node?

@ParkerS. Everything seems to be up and running :clap:

ssh root@108.242.38.186
The authenticity of host '108.242.38.186 (108.242.38.186)' can't be established.
ED25519 key fingerprint is SHA256:GZduV+dbPnaRHqRwNzoO1JRVXS5eH+E7ZKeTcfzLrX8.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:31: 301:d12e:3351:4208:9a7c:cf44:d2d4:f788
    ~/.ssh/known_hosts:33: 301:d12e:3351:4208:6243:5c48:b899:8932
    ~/.ssh/known_hosts:34: 300:c27b:4744:4670:a7fa:6340:2345:3681
    ~/.ssh/known_hosts:35: 302:302f:4555:2f55:2bc5:684a:791a:83ae
    ~/.ssh/known_hosts:41: 302:302f:4555:2f55:fa62:8eeb:ee60:dd4f
    ~/.ssh/known_hosts:42: 302:302f:4555:2f55:1a88:4cf2:350f:7594
    ~/.ssh/known_hosts:43: 301:8d5d:7e4e:ad37:ebe1:5c6a:6308:1d
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '108.242.38.186' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.12.9 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Fri Jun 17 10:22:38 2022 from 203:e36c:484f:d222:1fe5:4917:ea8a:edfe
root@proxy1:~#

Suhweet. Thanks for checking it out

There are two public IP functions. They can either be assigned to a farm or to individual nodes. Actually both functions are available within the portal now, so no reason to use Polkadot UI. To assign to a node, scroll down to the nodes list in the portal and select the globe icon next to a node to give a public config.

IPs assigned to a farm are available to be rented by workloads, they can be assigned to VMs for example. IPs assigned to nodes enable the node to become a gateway. I think these functions should be mutually exclusive, but in practice farmers who are offering blocks of public IPs tend to have them assigned to their nodes as well.

So I should be configured with one set of IPs assigned to the nodes directly, and then the remaining unused ones should be added to the farm for potential rental, correct?

hey @scott, thanks for the explanation.

I was able to assign public IP blocks to a farm via portal. I can then deploy a VM with a public IP via play.grid.tf on a specific node and one of the IPs is assigned and I can ping the host. So far so good.
But I couldn't manage to assign a public IP to a node directly via the portal. I always get a failure that the settings could not be saved. When I use the Polkadot UI it works, and the settings are shown correctly in the portal when I click the globe icon next to the specific node. However, when I try to ping the node with the directly assigned public IP, it fails. Is this behavior to be expected? Maybe the gateway doesn't respond to pings on purpose?

Also, I would like to know which of the two ways to provide public IPs should be used for performant nodes running in a datacenter?

This is a good question (speaking also to @Dany’s similar question). I’d say it’s ideal for a farm to have at least one node with a public IP assigned directly to it. Our testnet FreeFarm is set up this way, also with a subdomain assigned to that IP. That node is able to act as the entry point for Wireguard networks and web gateways, and it will earn TFT via NUs for traffic over these connections (not yet implemented). The remainder of the IPs can be left available for workloads and also generate farming rewards when they are rented.

I was able to reproduce this and found there’s already an open issue for our devs to investigate.

A node should respond to ping on its public IP. I’m not 100% sure, but you may need to reboot for this to take effect.

wow… didn’t expect that a node needs to be rebooted for this… but that’s it! After rebooting the node I can ping the IP.

As already described, I had to assign a public IP manually via the Polkadot UI because it was not possible to set the (same) configs via the TF Portal. Once the public IP configuration is assigned (via Polkadot UI), the settings are shown in the pop-up window when clicking on the globe icon in the TF Portal and are also displayed on the node's ZOS screen under the network section (PUB). Before I rebooted the node, the public IP was assigned as an additional IP address on the same NIC port where the LAN is connected (labeled "ZOS" on the ZOS screen under the network section). Of course this can't work properly, since the LAN-side router settings are not configured for routing public IPs. After a reboot the IP is now assigned to a second NIC port and it seems to work fine.

However… there is still some weird issue. A couple of days ago I tried to delete the public IP settings for that node. Once again this was not possible via the TF Portal, so I used the Polkadot UI instead and overwrote the settings with “0x” in each line. After that, the TF Portal shows no configuration when clicking on the globe icon. But (!!!)… the node still displays the public IP on its screen. After rebooting, the node has the public IP assigned as described above. So now I have a node with a public IP, but the TF Portal says there is none configured for that node. So… is it working now or not?

At the moment it’s hard to find proper documentation on this public IP topic. I think the information on this topic in the forum, on GitHub, or elsewhere online needs to be improved. Well… looks like we are getting closer, but I still don’t know if the gateway configs are working like this, and I don’t yet have a picture of how gateways work in general and what they are supposed to do, so we can estimate whether to build more or to assign IPs to farms.

Thank you for your contribution @Dany. Let’s start to improve the documentation and share our experiences there. There are so many things that need attention that sometimes the basics are forgotten.

Hello everyone,

I saw this discussion here, and I have similar questions regarding my node setup in a datacenter.

I am going to place two of my DIY nodes in a datacenter this week.
I thought maybe you guys can help me directly.

These are my nodes, from Farm190:

  • 485 (16 VCores; reserved for L0 Validator)
  • 895 (64 VCores)

I have already received my Network configuration.

My network configuration will be:
——————————————————
Network: 85.88.16.216/29
Netmask: 255.255.255.248
Gateway: 85.88.16.217

IPs for the servers:
85.88.16.218 – 85.88.16.222

Nameserver :
85.88.0.92
85.88.1.92
——————————————————-
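As a quick cross-check of the assignment above (a sketch using Python's standard `ipaddress` module with the values from this post), the /29 yields six usable host addresses, of which .217 is taken by the gateway, matching the datacenter's allocation:

```python
import ipaddress

net = ipaddress.ip_network("85.88.16.216/29")
gateway = ipaddress.ip_address("85.88.16.217")

hosts = list(net.hosts())        # usable addresses; network/broadcast excluded
print(net.netmask)               # 255.255.255.248
print(hosts[0], "-", hosts[-1])  # 85.88.16.217 - 85.88.16.222
print(gateway in net)            # True: the gateway sits inside the subnet
```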

And now my questions:

  • What is the appropriate configuration for these 2 nodes, and where do I need to do it: TFT Chain portal, or rather Polkadot UI? I understand that TFT Chain should be sufficient.
  • Is it enough just to set the IPs for both nodes directly with the "set the public IP" option for the specific node, without assigning the IPs to the farm?

For example:
Node 485: IP: 85.88.16.218/32 + Gateway: 85.88.16.217
Node 895: IP: 85.88.16.220/32 + Gateway: 85.88.16.217
(maybe I am wrong here)

  • Only node 485 is currently online at home because I am transferring them to another location. Do I need to get them online briefly so that they pick up the configuration before placing them in the datacenter? I read that a reset should be done after the configuration is set (only one of the nodes has 2 NICs).
  • Would it make sense to have both nodes in one farm, without other DIY nodes that will not work in the datacenter?

Hope to get a hint on how to do my configuration in the best way.

Regards,
Martin

It seems that’s not possible at the moment. See…

So for now you can’t assign a public IP directly via the TF Portal to configure the nodes as TF gateways. However, I was able to configure this via the Polkadot UI. But I’m not sure if it’s working properly (although I can ping that host). Still waiting for a reply to my questions from the dev team.

You can assign public IPs to the farm without assigning them directly to a specific node. In this case your farm offers these public IPs, but they will only be used when users order one of them for a workload on your node. You can check this yourself on the TF Playground (grid.tf).

But watch out… you have the wrong netmask in your example!

should be 85.88.16.218/29
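The prefix length matters because the node must see the gateway inside its own subnet. A sketch with Python's `ipaddress` module (using the addresses from the example above) shows why /32 cannot work:

```python
import ipaddress

gateway = ipaddress.ip_address("85.88.16.217")

# With the correct /29, the gateway lies inside the node's subnet,
# so the node has an on-link route to it.
good = ipaddress.ip_interface("85.88.16.218/29")
print(gateway in good.network)   # True

# With /32 the "subnet" contains only the node's own address,
# leaving no on-link path to the gateway.
bad = ipaddress.ip_interface("85.88.16.218/32")
print(gateway in bad.network)    # False
```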

Thanks Dany,

Well, looks like it will take me more time. I think I need to go a few steps back. I have tried to do it with the Polkadot UI. There I can’t even find the “tfgridModule” extrinsic in the selection field.

@Martin,

I can’t recommend adjusting any settings in the Polkadot UI as long as we (farmers) don’t know exactly what’s going on there. I was playing around with the Polkadot UI and managed to assign a public IP on a second NIC port in order to test routing. But (!!) I can’t confirm that the settings I manually changed in the Polkadot UI result in a proper configuration in the way the TF Grid expects/needs it. For example, the second NIC port seems to be set to a fixed public IP, but in the Grid Explorer the node is not shown as a gateway. After I tried to delete the settings in the Polkadot UI, the public IP remains on that NIC port, although the TF Portal shows that there are no configurations. That looks like some weird behavior to me.

I think we should wait until the issue in TF Portal is fixed so the configurations can be set there. In the meantime you can assign a public IP pool to your farm (not a specific node). I can confirm that this is working properly.

Hey @Martin,

have you moved in? Did you work it out?

Polka UI feels a little adventurous, but it’s actually pretty safe to play with :slight_smile:

Gateway nodes are those with a domain assigned to them. Nodes that have a public IP but no domain can serve as Wireguard access points, but the gateway functionality, which provides public access to workloads, depends on handing out subdomains.

It might require a reboot for a node to lose its public config. I know that nodes try to apply incoming changes, but I don’t know how this applies to clearing the config entirely. I’ll make a note to play with this when I have a chance, to see if I observe the same behavior.
