Newbie question on Networking when using dynamic v4 IP [Resolved]

Dears,
thanks in advance for reviewing this question.

I’m five months into the TF ecosystem, so there’s lots to learn. I also suspect this question has been discussed somewhere, but I couldn’t find other mentions, hence this post …

I run a dedicated bare-metal server (16 CRU, 94 GB RAM, 932 GB SSD, wired connection on 500/500 Mbit/s fiber with a dynamic but publicly accessible IPv4 address).

What puzzles me is that on my ZOS dashboard, I can read "PUB - no public config"

and also "IPv4 - Total 0 - Reserved 0" under the “System Resources” section of the dashboard.

My suspicion is that something is wrong with my network configuration for this server.
Moreover, rewards-wise, my monthly rewards are lower than I believe they should be. And, on my node summaries, I can read "0 NU".

If you have any suggestions, I’d be more than happy to read and implement them.

Thanks
Cheers

Hello @dhobiconnect.

It appears we have responded to these questions over the support chat, so I will summarize our discussion here and close this forum thread as resolved.

  • Farming rewards - Proof of Capacity and Proof of Utilization rewards: how they work and when they are dispersed.
    Provided manual references for Proof of Capacity and Utilization rewards, plus further insight into monthly minting and how 50% of the Proof-of-Utilization rewards are dispersed to the TFChain wallet associated with the farm’s twin.
  • A public IP was not assigned to the farm.
    Provided the manual guide for adding a public IP to the farm.

Please feel free to re-connect with us over the support chat if you have additional questions.


Dear TF Forum Team,
thanks for your support and feedback; however, from my side the thread is NOT resolved yet.

In our chat we mostly spoke about the minting and monthly-rewards details; however, the network-related aspects (as per the subject of this post) are still open and unresolved.

In more detail:

  • Which internal IP should I expose as the DMZ host in my firewall, the “ZOS” one or the “DMZ” one? I take it the “DMZ” option sounds like the obvious choice, but I noticed that both IPs generate roughly the same amount of traffic per unit of time, hence my doubt. And if the “DMZ” IP belongs in my internal DMZ, do I need to open/forward any ports for the “ZOS” internal IP?

  • PUBLIC IP: I read the manual, but I still have some unresolved questions.

  1. With only one server in my farm, per best practices, should I configure the public IP on the farm, on the server, or on both?
  2. The “Gateway IPv4” field: should it be filled with my internet provider’s gateway, the one dynamically assigned to me/my subnet? Would it be possible to have a real-life configuration example for those of us who are not network geeks?
  3. The “Domain” field: can I use my dynamic-DNS domain here?
  4. The “IPv4” field: should my current public IP be followed by “/24”? (See the sketch after this list for what I mean.)
  5. Currently, I’ve filled in the IP config on my server (not the farm’s), BUT I get an error: “registrar: fetch location: failed to fetch location information: registration failed” 🙁 Can you please help?
  6. If I wanted to upgrade my internet provider’s service, should I prioritize a static IP or an IPv6 address as the upgrade option (currently I have neither of the two)?
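
To make question 4 concrete, here is what I understand the “/24” notation to mean: a minimal sketch using Python’s ipaddress module and a documentation-only address range (203.0.113.0/24), not my real config:

```python
import ipaddress

# 203.0.113.0/24 is a reserved documentation range -- a placeholder,
# not anyone's real address.
iface = ipaddress.ip_interface("203.0.113.10/24")

print(iface.ip)               # 203.0.113.10 -> what goes in the "IPv4" field
print(iface.network)          # 203.0.113.0/24
print(iface.network.netmask)  # 255.255.255.0 -> what the "/24" suffix expands to

# The gateway is often the first usable host in the subnet, though in
# practice the ISP dictates it:
print(next(iface.network.hosts()))  # 203.0.113.1 -> the "Gateway IPv4" field
```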

Thanks in advance
Cheers

Hey dhobiconnect,

as I had to handle the public IP issue myself some time ago, I guess I can help you out so you don’t need to give yourself more headaches. Funny that people are still stumbling upon this after so many years.

Basically, there is absolutely nothing you can or need to do. You simply cannot provide the dynamically assigned public IP, because that one is used by your router (and there is no port forwarding or anything like that). Only if your ISP provided you with more than one public IP could you assign them to your farm and/or a specific node. I guess (because of the dynamic IP) your connection is a simple consumer/small-business line (though 500/500 is pretty good), so a block of public IPs will probably not be offered to you by your ISP.

Also, there is no need to open any ports on your router or configure any DMZ. Simple NAT is fine for ZOS. Same with any domain: it only makes sense if you bind a (sub)domain to a public IP so the 3Node can work as a gateway. As you don’t have public IPs, you can’t bind them to a domain.
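
If you want to convince yourself that plain NAT is enough, a quick outbound check does it. A minimal sketch in Python (the echo-service URL is just one example I picked, nothing ZOS-specific):

```python
import urllib.request

# Ask an external echo service which address our traffic appears to
# come from. Any similar service works; this URL is just an example.
with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
    public_ip = resp.read().decode().strip()

print(f"Public IP seen from the outside: {public_ip}")
# If this prints your router's address, outbound connectivity works --
# and outbound is all ZOS needs behind plain NAT. No inbound ports,
# no DMZ, no forwarding.
```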

See… you’re good. No need to configure anything special in your router. I guess it would be a good idea to undo all the configs you did so far. They will only cause problems…

Cheers!

Dear Dany,
I really appreciate it, thanks!

As mentioned, I’m new to TF (my background is validator nodes on blockchain testnets), so I’m still trying to figure things out here 😉

So, I’ve reverted to the previous IP configuration, and now the error is gone 🙂

But something strange happened: I now have a new node ID! And my “old” node ID is still shown on the TF Dashboard as ‘up’, even though I only have one server: it now displays two node IDs, my previous one and this new one…
I also took the opportunity of the downtime to add another SSD… could that be the cause?
[I kindly ask whether an admin here could have a look into it so I can go back to my old node ID.]

With all the reboots and this business with the new node ID, I’m pretty sure I’ve now lost this month’s epoch 🙁

If I may use one more ounce of your knowledge: is there any guide out there on optimizing our servers’ hardware for more efficient farming?

Again, thanks for your support.

Cheers

Hey @dhobiconnect

Thanks for your inquiries. Yes, changing the SSD can result in a new Node ID. To keep your previous Node ID, the node’s identity file has to be carried over to the new SSD, and cloning the entire disk is the best method for this.

We do not officially support interventions that involve copying individual files from node disks, although some farmers have attempted this successfully.
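
For illustration, a whole-disk clone boils down to a byte-for-byte copy of the source device onto the target. A minimal Python sketch of the idea (the device paths are placeholders; in practice a dedicated tool like dd or ddrescue does the same job):

```python
import shutil

# Placeholder device paths -- double-check yours before running, as a
# mix-up will destroy data. Run as root, with the node powered off and
# the disks attached to another machine.
SRC = "/dev/sdX"  # old SSD carrying the node identity
DST = "/dev/sdY"  # new SSD, at least as large as the source

CHUNK = 16 * 1024 * 1024  # copy in 16 MiB chunks

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    shutil.copyfileobj(src, dst, CHUNK)

print("Clone complete.")
```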

To monitor your node’s downtime, you can use the Peppermint tool.

If you need guidance on optimizing your hardware for more efficient farming, check out this guide on Building a 3Node for valuable insights.

Let us know if you need further assistance!

Hey dude!

Yes… that’s the reason for the new node ID. But… you said that you added another SSD. Does that mean the other drive is still in use by that node? If that’s the case, you can simply swap the connectors of the drives to get back to your old node ID. It looks like you attached the new drive to a port that gets higher priority from ZOS when booting. Just swap them and you should be good. Take a look at the mainboard: there should be indicators marked near the SATA ports. Once you find them, you will see what I mean.
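
If you want to verify the ordering without opening the case, boot any Linux live system on the machine and list the disks in the order the kernel names them. A minimal sketch (that ZOS picks the first-enumerated disk is my reading of the behavior, not something documented):

```python
import os

# sd* names are assigned in detection order, so sorting by name
# approximates the order in which the kernel enumerated the disks.
for dev in sorted(os.listdir("/sys/block")):
    if not dev.startswith(("sd", "nvme")):
        continue  # skip loop, ram and other virtual devices
    with open(f"/sys/block/{dev}/size") as f:
        sectors = int(f.read())  # size is reported in 512-byte sectors
    print(f"{dev}: {sectors * 512 / 10**9:.0f} GB")
```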

Cheers

A ratio of 1 CRU to 8 GB RAM is most efficient. Also, 3-4 TB of storage space (in total) would be recommended for your node. If you can, add another 32 GB of RAM.
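
As a quick worked example of that ratio against this node’s specs (numbers taken from this thread):

```python
CRU = 16          # vCores the node reports
RAM_GB = 96       # RAM currently installed
GB_PER_CRU = 8    # the 1:8 ratio suggested above

ideal_ram = CRU * GB_PER_CRU
print(f"Ideal RAM for {CRU} CRU: {ideal_ram} GB")  # 128 GB
print(f"Shortfall: {ideal_ram - RAM_GB} GB")       # 32 GB
# Hence the suggestion to add another 32 GB of RAM.
```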

Thank you all.
Situation is clearer now.

So:

  • I’ve removed my (erroneous) server public IP configuration. Error solved

  • I’ve added a new SSD, for a total of 2 TB now. As I had some problems getting the new disk recognized by the system, I wiped all 3 of my SSDs, so I definitely caused the new node ID to be created. I didn’t know I would cause that, but still… it is what it is now.
    –> Unfortunately, of the 3 SSDs (1 TB new, 500 GB, 500 GB), the system now sees 1397 GB of “SSD” and 466 GB of “HDD” (1397 GB is the 1 TB + 500 GB pair expressed in binary units), so it mistakes one SSD for an HDD despite several reboots. Any suggestions on that?

I’ll try to add another 500 GB SSD and then re-wipe all the disks again… (!). Well, my uptime is over for this epoch anyway, so it doesn’t hurt to try, right? 😉

  • I currently have 16 CRUs and 96 GB RAM (a 6:1 RAM-to-CRU ratio) and I can’t expand it with more modules, as I’ve filled all my slots, unless I switch to 32 GB DIMM modules…

Again, thanks for your kind support.

Hey @dhobiconnect! Thanks for keeping us posted. Yes, wiping the disks will generate a new Node ID. If your SSD is recognized as an HDD, rebooting the 3Node usually resolves the issue.

For those who frequently experience ZOS misidentifying an SSD as an HDD, please refer to this guide here.
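
Before wiping anything again, it can help to see what the kernel itself reports for each disk. A minimal sketch that reads the rotational flag from sysfs (run it from any Linux environment on the node; that ZOS follows exactly this flag is an assumption on our side):

```python
import os

# The kernel reports whether it believes a disk is rotational (HDD)
# via sysfs; an SSD should read 0, a misdetected SSD will show 1.
for dev in sorted(os.listdir("/sys/block")):
    if not dev.startswith(("sd", "nvme")):
        continue  # skip loop, ram and other virtual devices
    with open(f"/sys/block/{dev}/queue/rotational") as f:
        flag = f.read().strip()
    print(f"{dev}: {'HDD (rotational)' if flag == '1' else 'SSD'}")
```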

Please note that DIY nodes require 95% uptime, allowing for a maximum downtime of 36.54 hours per month (36 hours and 32.4 minutes). For more details, check here.
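
For reference, here is how that downtime budget falls out of the 95% requirement (a small sketch using an average month length; rounding explains the 36.53 vs. 36.54 difference):

```python
AVG_DAYS_PER_MONTH = 365.25 / 12            # ~30.44 days
hours_per_month = AVG_DAYS_PER_MONTH * 24   # ~730.5 hours
allowed_downtime = hours_per_month * (1 - 0.95)

print(f"{allowed_downtime:.2f} hours")  # ~36.53 h, i.e. about 36 h 32 min
```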

It was a pleasure assisting you. Feel free to reach out with any further questions.
