Cockpit Managed Ubuntu VM


9/27 update

Wanted to get the breakdown of how I got this up and running out there. This is an amazing tool to combine with your Ubuntu 22.04 full VM, and it does work with planetary addresses. Unfortunately I can't get the 22.04 VM to deploy with only a planetary address.

From the SSH console of your freshly booted Ubuntu 22.04 VM you will need to:

apt update
apt install network-manager
cd /etc/netplan
ls -l

Edit the netplan config with nano <netplan file name>.
Delete "version: 2" from the bottom and change

network:
  ethernets:

to

network:
  version: 2
  renderer: NetworkManager
  ethernets:

Ctrl+X to exit, confirm the write with "y".
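For reference, the finished file should look roughly like this (eth0 and DHCP are just what a typical cloud image ships with; keep whatever interface block your file already has):

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    eth0:            # your interface name may differ
      dhcp4: true

Switching the renderer to NetworkManager is what the later note about the Software Updates tab relies on.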

systemctl disable systemd-networkd && systemctl enable NetworkManager
netplan apply
apt upgrade
apt install cockpit
apt install cockpit-machines
apt install cockpit-podman
apt install cockpit-sosreport
apt install cockpit-pcp
apt install cockpit-tests
apt install firewalld
systemctl start cockpit
passwd root
The web GUI is now accessible and fully functional at the public and planetary IP.
Login requires a password, and the login must match the SSH key to get sudo access.

If you start firewalld it will remove access to the web console until you have added port 9090 to the correct zone and added cockpit as an allowed service.
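If firewalld is already running and you have locked yourself out of 9090, something along these lines over SSH should bring it back (a sketch; it assumes your interface sits in the default public zone):

firewall-cmd --permanent --zone=public --add-service=cockpit
firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload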

video setup tutorial to follow.

@weynandkuijpers you have a login to my running example in your PMs. I'm hoping we can get this into a flist; I have tried and failed miserably.


Note: post-production
Avoid planetary for now; I need to do more work there. What I thought was a fix caused some other problems, and I'm still trying to figure out how all the overlay bridges work so traffic ends up in the right places.

Some additional notes:

You can use "firewall-offline-cmd" to configure your services and ports prior to enabling firewalld, so that you don't drop all your access and kill your VM when you enable it.
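A minimal sketch of that approach, run before firewalld is ever started (it assumes you want SSH and the Cockpit console reachable in the default zone):

firewall-offline-cmd --add-service=ssh
firewall-offline-cmd --add-service=cockpit
firewall-offline-cmd --add-port=9090/tcp
systemctl enable --now firewalld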

I have discovered a high-priority use case for this environment.

Because it integrates QEMU and the Ubuntu environment at its base, it has all the appropriate tools to use an image running this setup for producing further modified images to be deployed on grid.

  • With the way the project is set up we could create an unofficial Cockpit plugin that manages that process nearly autonomously, and it would be installable through the host terminal once deployed (rough sketch below).
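As a rough, hypothetical sketch of what that could look like (the name "image-builder" and its contents are my assumptions, nothing official): Cockpit loads extra pages from directories under /usr/share/cockpit/, each with a manifest.json and an HTML entry point, so installing one from the host terminal is mostly a matter of dropping files in place:

# /usr/share/cockpit/image-builder/manifest.json  (hypothetical plugin)
{
  "menu": {
    "index": {
      "label": "Image Builder",
      "order": 100
    }
  }
}

The matching index.html in the same directory would then carry the UI that walks through the image build steps.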

Here I am running the QEMU instance of my dev box for making this into a flist. Alongside it I have Ubuntu and Manjaro desktop instances running as work environments. They all exist within the deployment's private VM network, so I am able to access the web console and SSH into the Cockpit instances from the running VMs.

  • I haven't figured it out yet, but you can set up port forwarding to allow RDP access to your work environments on a different port at the public/planetary address of your deployment.


Here the integration of the firewalld service has allowed me to have much deeper control over access to the server.

Having fixed the network interfaces' interaction with system updates, I was able to bring the NFS mounts plugin online, which you can use with cloud file storage providers. This allows you to have compute power on grid and storage off grid if desired.
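A minimal sketch of what that plugin is doing under the hood (the server and paths here are placeholders, not a real provider):

apt install nfs-common
mount -t nfs storage.example.com:/exports/backups /mnt/backups

# or persistently, with a line in /etc/fstab:
storage.example.com:/exports/backups  /mnt/backups  nfs  defaults,_netdev  0  0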

One of the best features: native access to the host terminal right in the browser.


Working flist:

If it times out on deploy, give it a couple of minutes and try it again:

https://hub.grid.tf/parkers.3bot/cockpit2004RC1.flist

You will need to SSH in and set a root password to access the 9090 web interface; it will require both your password and public key.
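In practice that is just the following, with the address being whatever your deployment was given:

ssh root@<deployment address>
passwd root

Then browse to https://<deployment address>:9090 and log in with that password.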

You can follow the strategy posted above to migrate to NetworkManager post-boot, and your Software Updates tab will work.

Awesome. Tested and works like a charm! :ok_hand:

This one was mostly a proof of concept. Planning RC2 for Monday/Tuesday; I'm going to get it so that it deploys with the firewall up (with rules) and hopefully with the update fix in place.

22.10 will boot on grid, so I'm going to try to get that image all set up too.

Really cool fact: that 20.04 image was made entirely within my Cockpit deployment on 22.04; the only time it left the node was to go to the hub. Every tool is present.

A few updates: I have now successfully set up nginx alongside Cockpit, and am running on 22.10 with a 22.10 desktop VM for remote work.
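For anyone repeating that setup: the two can coexist without special configuration because they listen on different ports; a bare-bones sketch (the content served is just the stock Ubuntu default page):

apt install nginx
systemctl enable --now nginx
# nginx serves /var/www/html on port 80, Cockpit keeps listening on 9090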

Some pictures:


The tools aren't ready for use at this address; I just cloned teis's page for testing. I'd continue to use publicuptime.tfcloud.us/uptimeCheck for now until I can get @teisie on the server to set up the page properly; I just got all the things working.

You are able to use iptables to forward ports from the public address to the VMs running on the server, so theoretically you could RDP directly to the Kinetic desktop. Pretty neat.
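A sketch of that kind of forward (every address and port here is an assumption; adjust for your own VM network): send TCP 3390 arriving at the public address to the RDP port of a guest on the default libvirt bridge:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 3390 -j DNAT --to-destination 192.168.122.10:3389
iptables -A FORWARD -p tcp -d 192.168.122.10 --dport 3389 -j ACCEPT

With firewalld in front you would express the same thing as a forward-port rule in the appropriate zone instead.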


Everything needed to get started






Adding some documentation on where to find relevant tutorials on setting up independent tools for the Ubuntu VM:
SteamCMD setup


Nginx Setup

Nginx SSL with Certbot

Cockpit SSL with Let's Encrypt

Apache Web Server

Apache Web Server SSL with Let's Encrypt
https://linuxhint.com/secure-apache-lets-encrypt-ubuntu/

The project's own documentation

FLIST UPDATE

This flist deploys with the Cockpit interface running on 22.10 Kinetic Kudu, as well as a running nginx web server. It is equipped with the QEMU hypervisor and will be the launchpad for making my custom flist video. The update and network interface fix still must be applied post-deployment.

If you get "VM failed to be ready after 5 minutes" on your first deployment, wait about 10 minutes and reattempt; the image is 2.6 GB, so it may time out while downloading depending on the node.

This was the wrong flist and I deleted the correct one; fixing it.

In order to use it, just deploy it as a full VM on test/dev net, or on a micro VM with a drive attached on mainnet. You will need to SSH into the node and set a root password to log in to the Cockpit interface.