How to restore Presearch node on Titan

I want to transfer my Presearch node to my Titan, but there is an issue when I try to deploy the Presearch Instance.
I started by navigating to the weblets page. Then I created a profile by specifying a Profile Name, Mnemonics, and a public SSH key. After the profile was created, I went to the Presearch Restore tab and entered my private and public restore keys, but the “Deploy” button is still disabled and there’s no error message displayed.
To troubleshoot, I went to the profile I created and noticed that the Twin ID in my profile is different from the Twin ID on my node. I don’t know whether that’s the issue, or whether it has something to do with the SSH key I provided. I didn’t know where to get the SSH public key from, so I generated one on my MacBook.
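In case it helps anyone else who is unsure where the SSH public key comes from: it’s a standard SSH key pair. A minimal sketch on macOS or Linux (the filename and comment here are my own choices, adjust as you like):

```shell
# Generate an ed25519 key pair (no passphrase here; add one if you prefer)
ssh-keygen -t ed25519 -f ~/.ssh/titan_presearch -N "" -C "presearch-titan"

# The *public* key (the .pub file) is what the profile form expects
cat ~/.ssh/titan_presearch.pub
```

The matching private key stays on your machine and is what you use later to `ssh` into the deployed VM.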

Has anyone successfully created a Presearch node on their farm?

Hey there, and welcome to the forum :wave:

Based on what I read in your description, I think the missing step is node selection. That said, this workflow probably won’t accept your Titan as a valid node for this deployment. Why? Well, each Presearch node needs a dedicated public IPv4. So our agreement with Presearch was that the official solution would only deploy to nodes with available IPv4s to dedicate to the workload.

If you’ve been running a Pre node at home already, you might wonder, what’s the issue? Simply put, we can’t stop anyone else from also running a Pre node on your Titan, which would cause the two nodes to fight for the IP. If you’re willing to accept that risk, there should be a way around this restriction without too much trouble. I’ll be looking into that soon and will share what I find :slight_smile:


Thanks Scott!
Sorry if I was not clear about my intention! I want to replace my current Presearch node with an instance on my Titan machine. That’s why I want to restore the private and public keys of my current node on the Titan device, then shut down my existing node before activating the node on my Titan machine.
You suggested that I’m missing the node selection step. Do you know how I should complete the node selection process?

Hi @freshyear. This is very possible. I know @geertmachtelinckx has such a deployment running. Geert, can you please help @freshyear? :slight_smile: Happy farming!


What @scott suggests is that you probably need to activate the filtering to select your node. I attached a screenshot where I used the capacity filter, selected a farm name of my choice (in your case this should be your own farm name), and then clicked ‘Apply Filter and Suggest Nodes’. If nodes are found under the selected farm, they will appear in a dropdown box, from which you can select the node of your choice.

Screenshot 2022-03-11 at 07.18.33


@Geert After I enter my farm name and click the “Apply Filters and Suggest Nodes” button, the Node ID list is empty.

Also, when I try Node Selection: Manual and enter my node ID, I get the error “Node( 1684 ) might be down or doesn’t have enough resources.”

CleanShot 2022-03-11 at 07.41.55@2x

Ah, OK. I think the weblet has been conceived in a way that requires a public IPv4 address connected to the node. This is a Presearch requirement: only one IPv4 address can be connected to a PRE node, otherwise duplicate IPs will be reported. So far the number of public IP addresses on the TFGrid is still limited: there is FreeFarm, and a number of GreenEdge ones are now available as well.

There IS an option to host a PRE node on your node over the Planetary Network, however, but it requires some command lines. You set up a VM, launch Docker, and then deploy the PRE node. Be careful, though: we currently don’t have anything in place that guarantees unique usage of the node, and thus of the IP address, by a single PRE node. But if you do it yourself, there shouldn’t be an issue.
For info on how to do this, see here. This manual won’t be published in the wiki, though, as there is no 100% guarantee that it’ll continue to work, due to this duplicate-node possibility.
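To sketch the Docker part of what this looks like once you’re inside the VM (a hedged outline, not the official manual; the invocation below follows Presearch’s published `docker run` pattern, and the registration code is a placeholder you’d replace with your own):

```shell
# Inside the VM, assuming Docker is already installed and running.
# The named volume keeps the PRE node's keys across container restarts.
docker run -dt --name presearch-node --restart=unless-stopped \
  -v presearch-node-storage:/app/node \
  -e REGISTRATION_CODE="your-registration-code" \
  presearch/node

# Follow the logs to confirm the node registers and starts serving
docker logs -f presearch-node
```

The `--restart=unless-stopped` flag matters here: it brings the container back up automatically if the VM reboots.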


I see. Thanks for the directions @Geert. The way I see it, it’s more effective to keep my PRE node on a Raspberry Pi instead of taking up resources on my Titan for a VM just to deploy a Docker container in it. I’m hoping that in the future there’s a better way to host my PRE node on the TFGrid.

In the link you sent above, there’s a link to set up the VM, and it goes to a 404 page. I hope the team can fix that broken link…

I submitted this in another location and believe they are checking it out. I think there may be a bug that doesn’t let you deploy on a node that only has SSD storage; it seems that the only available nodes have hard drives. Now I’m wondering if that’s just a coincidence, or if the lack of a public IP is the real issue I had too.

I was trying to install the Presearch Docker image in a VM last night and couldn’t get it working yet. It seems to take some extra steps, as indicated in the post above. Wish I’d seen that last night; it could have saved me some effort.

The weblets are open source, and the relevant part of the Presearch one is here. I am not a developer, but if I read the code correctly, there is a requirement for a public IP address in the SelectNodeId section.

Yes, a public IP is required. Checking the code, you can also see that the deployment includes a disk, which is used to store the Pre node’s keys. It seems that, for now, the default and only option through weblets is for disks to use HDD, making this a second requirement. Whether VMs persist their root filesystem if the node reboots, I don’t know; but if they don’t, this disk is necessary to persist the Pre node’s keys in case they weren’t passed as environment variables through the node recovery tab.

For the record, if anyone wants to experiment with configurations outside of the official deployment, you can still use the flist as a starting place. Select the “Other” option under VM image and use this url:

Also make sure to set the entry point to: /sbin/zinit init

You can also go to environment variables and add your registration code using key: PRESEARCH_REGISTRATION_CODE

What doesn’t work is trying to add your node’s existing keys this way, due to an idiosyncrasy with how backslash escape sequences get passed through the existing UI in environment variables. You can still do that via ssh or scp over Yggdrasil, in case you’re choosing not to reserve a public IP.
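For example (a hedged sketch; the Yggdrasil address, the local backup directory, and the destination path are all placeholders, and the in-VM path depends on where the PRE node’s storage volume is mounted):

```shell
# Copy the existing node's key backup into the VM over Yggdrasil (IPv6,
# so the address goes in brackets). Replace the address with your VM's.
scp -r ./presearch-backup root@[200:abcd::1]:/root/

# Or open a shell on the VM to move the keys into place by hand
ssh root@[200:abcd::1]
```

Once the keys are in the directory the PRE node reads its storage from, restarting the container should pick them up.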

I’m going to open an issue to see about changing the environment variable handling to make this easier for anyone who wants to try it.
