I have two NUCs, each with an M.2 disk and a SATA disk. I would like to build an Open Nebula cluster out of them, keeping the SATA disks in sync using DRBD9 and ZFS. I want the Open Nebula front-end to run as a VM on that cluster.
I created an Ubuntu 18.04 LTS VM using virt-manager on my Linux laptop. Make sure it has a minimum of 2 GB of memory. Start with an 8 GB disk (this makes the transfer to the NUC faster later) and expand it when deploying on the NUC. Make sure OpenSSH is installed and running.
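When you deploy on the NUC you can grow the image before first boot; a minimal sketch, assuming the image is named frontend.qcow2 and a 40G target (you still need to grow the partition and filesystem inside the guest afterwards):

qemu-img resize frontend.qcow2 40G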
Open Nebula 5.10 is the latest as of writing. MariaDB 10.4 is the latest as of writing.
apt purge snapd ufw
apt install chrony
systemctl enable chrony
add-apt-repository ppa:linbit/linbit-drbd9-stack
wget -q -O- https://downloads.opennebula.org/repo/repo.key | apt-key add -
echo "deb https://downloads.opennebula.org/repo/5.10/Ubuntu/18.04 stable opennebula" | tee /etc/apt/sources.list.d/opennebula.list
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
echo "deb [arch=arm64,amd64,ppc64el] http://mariadb.mirror.liquidtelecom.com/repo/10.4/ubuntu bionic main" | tee /etc/apt/sources.list.d/mariadb.list
apt update
apt -y install mariadb-server mariadb-client
mysql_secure_installation
mysql -u root -p
CREATE DATABASE opennebula;
GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'secretpassword';
FLUSH PRIVILEGES;
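A quick check that the new database user can connect (using the example password from above):

mysql -u oneadmin -psecretpassword -e "SHOW DATABASES;"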
apt update
apt -y install opennebula opennebula-sunstone opennebula-gate opennebula-flow
!! DO NOT RUN THE SUGGESTED /usr/share/one/install_gems
Edit /etc/one/oned.conf to switch from SQLite to MySQL:

- DB = [ BACKEND = "sqlite" ]
+ #DB = [ BACKEND = "sqlite" ]
+ DB = [ backend = "mysql",
+        server = "localhost",
+        port = 0,
+        user = "oneadmin",
+        passwd = "secretpassword",
+        db_name = "opennebula" ]
su - oneadmin
echo "oneadmin:userpassword" > ~/.one/one_auth
systemctl start opennebula opennebula-sunstone
systemctl enable opennebula opennebula-sunstone
Check if it is working
su - oneadmin -c "oneuser show"
USER 0 INFORMATION
ID              : 0
NAME            : oneadmin
GROUP           : oneadmin
PASSWORD        : *****
AUTH_DRIVER     : core
ENABLED         : Yes

TOKENS

USER TEMPLATE
TOKEN_PASSWORD="******"

VMS USAGE & QUOTAS

VMS USAGE & QUOTAS - RUNNING

DATASTORE USAGE & QUOTAS

NETWORK USAGE & QUOTAS

IMAGE USAGE & QUOTAS
When you see output like the above, you can try to log in to the web interface: http://<YOUR_IP_HERE>:9869
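You can also do a quick reachability check of Sunstone from the shell (localhost assumed; it should print an HTTP status code such as 200 for the login page):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9869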
apt update
apt install linstor-controller linstor-client
systemctl enable linstor-controller
systemctl start linstor-controller
For now we leave the controller as is. We need to install the back-ends first.
Make sure you follow this chapter on every back-end node you want to use. This is going to be a physical installation on the NUCs in my case. Again I am using Ubuntu 18.04 because of the availability of a DRBD9 repository and ZFS! Make sure you do a minimal Ubuntu install with just OpenSSH.
apt purge snapd ufw
apt install chrony
systemctl enable chrony
add-apt-repository ppa:linbit/linbit-drbd9-stack
wget -q -O- https://downloads.opennebula.org/repo/repo.key | apt-key add -
echo "deb https://downloads.opennebula.org/repo/5.10/Ubuntu/18.04 stable opennebula" | tee /etc/apt/sources.list.d/opennebula.list
apt update
I am using a USB-C gigabit ethernet controller, and udev creates a network device name based on the MAC address of the device. That is not what I want; I just want it to have a generic name. In this case I have chosen eth0.
cp /lib/udev/rules.d/73-usb-net-by-mac.rules /etc/udev/rules.d
Edit the copied file
- IMPORT{builtin}="net_id", NAME="$env{ID_NET_NAME_MAC}"
+ IMPORT{builtin}="net_id", NAME="eth0"
apt install opennebula-node
apt install qemu-utils virt-manager
apt install zfsutils-linux
systemctl restart libvirtd
systemctl restart libvirt-bin
systemctl enable libvirtd
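A quick check that libvirt is up and answering:

virsh list --all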
In my case the SATA disk is /dev/sda. I am not going to create a partition table on it.
zpool create -f data /dev/sda
zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data   928G  24.2G   904G         -     2%     2%  1.00x  ONLINE  -
Edit /etc/systemd/resolved.conf to disable the DNS stub listener:

- #DNSStubListener=yes
+ DNSStubListener=no
cd /etc
rm resolv.conf
ln -s ../run/systemd/resolve/resolv.conf
I do not want to mount the ZFS filesystem; it is only used for zvols with DRBD9. Edit /etc/default/zfs:
- ZFS_MOUNT='yes'
+ ZFS_MOUNT='no'
- ZFS_UNMOUNT='yes'
+ ZFS_UNMOUNT='no'
We need to reboot for the network interface name change to take effect:
shutdown -r now
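After the reboot, verify that the USB NIC came up with the generic name:

ip -br link show eth0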
I want to use a bridge interface for management; that makes it easy to configure the bridge as a VM network interface in Open Nebula. Adapt the netplan configuration below as you like:
network:
version: 2
renderer: networkd
ethernets:
eno1:
accept-ra: no
dhcp4: no
dhcp6: no
eth0:
accept-ra: no
dhcp4: no
dhcp6: no
bridges:
mgmt:
accept-ra: no
addresses:
- 172.16.2.x/24
- 2001:x:x:x::x/64
gateway4: 172.16.2.254
gateway6: 2001:x:x:x::254
nameservers:
addresses: [ "172.16.2.y", "208.67.222.222" ]
search: [ mgmt.heitmann.nl ]
interfaces: [eno1]
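Apply the new configuration. netplan try rolls back automatically if you lose connectivity; once it looks good, make it permanent:

netplan try
netplan apply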
apt install drbd-dkms linstor-satellite drbd-utils
systemctl start linstor-satellite
systemctl enable linstor-satellite
In this case the front-end VM is the controller. So these commands add the back-end nodes to be managed by the controller. Make sure all the back-end nodes are installed.
linstor node create server1 172.16.2.x
linstor node create server2 172.16.2.y
linstor storage-pool create zfsthin server1 data data
linstor storage-pool create zfsthin server2 data data
linstor storage-pool list
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node    ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ SupportsSnapshots ┊ State ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ server1 ┊ DISKLESS ┊          ┊              ┊               ┊ False             ┊ Ok    ┊
┊ DfltDisklessStorPool ┊ server2 ┊ DISKLESS ┊          ┊              ┊               ┊ False             ┊ Ok    ┊
┊ data                 ┊ server1 ┊ ZFS_THIN ┊ data     ┊   874.84 GiB ┊       928 GiB ┊ True              ┊ Ok    ┊
┊ data                 ┊ server2 ┊ ZFS_THIN ┊ data     ┊   874.84 GiB ┊       928 GiB ┊ True              ┊ Ok    ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
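A quick sanity check that both satellites are registered and online:

linstor node list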
apt install jq
Do not try to do a git clone and symlink. It does not work: there are hard-coded paths in this addon.
curl -L https://github.com/OpenNebula/addon-linstor_un/archive/master.tar.gz | tar -xzvf - -C /tmp
mv /tmp/addon-linstor_un-master/vmm/kvm/* /var/lib/one/remotes/vmm/kvm/
mkdir -p /var/lib/one/remotes/etc/datastore/linstor_un
mv /tmp/addon-linstor_un-master/datastore/linstor_un/linstor_un.conf /var/lib/one/remotes/etc/datastore/linstor_un/linstor_un.conf
mv /tmp/addon-linstor_un-master/datastore/linstor_un /var/lib/one/remotes/datastore/linstor_un
mv /tmp/addon-linstor_un-master/tm/linstor_un /var/lib/one/remotes/tm/linstor_un
rm -rf /tmp/addon-linstor_un-master
chown -R oneadmin. /var/lib/one/remotes
Edit the Open Nebula config (/etc/one/oned.conf) to enable the addon
.
.
VM_MAD = [
NAME = "kvm",
- ARGUMENTS = "-t 15 -r 0 kvm",
+ ARGUMENTS = "-t 15 -r 0 kvm -l save=save_linstor_un,restore=restore_linstor_un",
]
.
.
TM_MAD = [
EXECUTABLE = "one_tm",
- ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+ ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt,linstor_un"
]
.
.
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
- ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter -s shared,ssh,ceph,fs_lvm,qcow2,vcenter"
+ ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,linstor_un -s shared,ssh,ceph,fs_lvm,qcow2,vcenter,linstor_un"
]
Add new TM_MAD_CONF section:
.
.
TM_MAD_CONF = [
NAME = "linstor_un", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "yes",
DS_MIGRATE = "YES", DRIVER = "raw", ALLOW_ORPHANS="yes"
]
Add new DS_MAD_CONF section:
.
.
DS_MAD_CONF = [
NAME = "linstor_un", PERSISTENT_ONLY = "NO",
MARKETPLACE_ACTIONS = "export"
]
Enable snapshotting in /etc/one/vmm_exec/vmm_execrc (snapshots only work on powered-off VMs)
- LIVE_DISK_SNAPSHOTS="kvm-qcow2 kvm-ceph"
+ LIVE_DISK_SNAPSHOTS="kvm-qcow2 kvm-ceph kvm-linstor_un"
systemctl restart opennebula opennebula-sunstone
su - oneadmin
cat > system-ds.conf <<EOT
NAME="linstor-system"
TYPE="SYSTEM_DS"
STORAGE_POOL="data"
AUTO_PLACE="2"
CHECKPOINT_AUTO_PLACE="1"
TM_MAD="linstor_un"
EOT
cat > images-ds.conf <<EOT
NAME="linstor-images"
TYPE="IMAGE_DS"
STORAGE_POOL="data"
AUTO_PLACE="2"
DISK_TYPE="BLOCK"
DS_MAD="linstor_un"
TM_MAD="linstor_un"
EOT
onedatastore create system-ds.conf
onedatastore create images-ds.conf
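Verify that both datastores were created:

onedatastore list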
su - oneadmin
ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chown oneadmin:oneadmin ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cat << EOT > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    ConnectTimeout 5
    UserKnownHostsFile /dev/null
EOT
chmod 600 ~/.ssh/config
Now we need to copy the SSH configuration to all nodes. Set a password on the oneadmin user on all nodes first. When done, you can remove the password again on the back-end nodes by running the following command on each of them: passwd -d oneadmin
scp -r /var/lib/one/.ssh/ oneadmin@YOURHOST:/var/lib/one/
Check that you can log in to all nodes as the oneadmin user, for example:
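ssh oneadmin@server1 hostname
ssh oneadmin@server2 hostname

Then do a reboot on the front-end node to make sure everything still comes up: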
shutdown -r now
I created the VM on a laptop and now I want to move it to a back-end node. In this case I move it to server1, and I do not want it to be visible in Open Nebula.
Shut down the VM and copy the qcow2 file to /var/lib/libvirt/images on the back-end server.
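For example (image name is an assumption):

scp frontend.qcow2 root@server1:/var/lib/libvirt/images/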
Create a VM with virt-manager using the "import existing disk image" option. Make sure you connect it to the mgmt bridge.
Enable boot on startup and reboot your back-end server. The VM should start automatically. Check with virt-manager on that server.
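The autostart flag can also be set from the shell; a quick sketch, assuming the VM is named frontend:

virsh autostart frontend
virsh list --all --autostart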