open_nebula_kvm_with_drbd [2020/02/23 13:23] (current) herwarth
systemctl restart libvirt-bin
systemctl enable libvirtd
</code>
==== Create ZFS pool on SATA disk ====
In my case the SATA disk is /dev/sda. I am not going to create a partition table on it.
<code>
zpool create -f data /dev/sda
</code>
<code>
zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data   928G  24.2G   904G             2%     2%  1.00x  ONLINE  -
</code>
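If you want to keep an eye on pool usage from a script, the CAP column of `zpool list` is easy to extract. A small sketch, using a captured sample line in place of a live pool (the field order matches the header above, but it can differ between ZFS versions; on a real host you would pipe `zpool list -H data` instead):

```shell
#!/bin/sh
# Capacity-check sketch. On a live host you would feed it real data:
#   zpool list -H data
# (-H prints tab-separated output without the header line).
# Here a captured sample line stands in for the command.
line=$(printf 'data\t928G\t24.2G\t904G\t-\t2%%\t2%%\t1.00x\tONLINE\t-')
# With the header above, CAP (percent of the pool in use) is column 7.
cap=$(printf '%s\n' "$line" | awk -F'\t' '{sub(/%/, "", $7); print $7}')
echo "pool 'data' is ${cap}% full"
[ "$cap" -lt 80 ] && echo "capacity OK"
```

Newer ZFS releases have changed the default column set (EXPANDSZ became CKPOINT), so if your version supports it, selecting columns explicitly with `zpool list -H -o name,cap data` is more robust than counting fields.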
==== Configure system ====
Disable the systemd-resolved stub listener and point /etc/resolv.conf directly at the real resolver configuration:
<code - /etc/systemd/resolved.conf>
- #DNSStubListener=yes
+ DNSStubListener=no
</code>
<code>
cd /etc
rm resolv.conf
ln -s ../run/systemd/resolve/resolv.conf
</code>
I do not want to mount the ZFS filesystem; I am only using it for zvols with DRBD9.
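Note that the symlink target is relative; it resolves because the link itself lives in /etc, so `../run/...` points back at /run. A throwaway demonstration in a scratch directory (paths and the nameserver value here are made up):

```shell
#!/bin/sh
# Recreate the /etc -> /run symlink layout in a scratch tree to show how
# the relative target resolves (illustrative only; values are made up).
tmp=$(mktemp -d)
mkdir -p "$tmp/etc" "$tmp/run/systemd/resolve"
echo "nameserver 172.16.2.1" > "$tmp/run/systemd/resolve/resolv.conf"
cd "$tmp/etc"
ln -s ../run/systemd/resolve/resolv.conf resolv.conf
# The link resolves relative to its own directory, not the caller's cwd:
cat resolv.conf        # prints: nameserver 172.16.2.1
cd / && rm -rf "$tmp"
```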
<code - /etc/default/zfs>
- ZFS_MOUNT='yes'
+ ZFS_MOUNT='no'
</code>
==== Install DRBD9 ====
<code>
apt install drbd-dkms linstor-satellite drbd-utils
systemctl start linstor-satellite
systemctl enable linstor-satellite
</code>
===== Configure front-end =====
==== DRBD9 ====
In this case the front-end VM is the LINSTOR controller, so these commands register the back-end nodes with it. Make sure all the back-end nodes are installed first.
<code>
linstor node create server1 172.16.2.x
linstor node create server2 172.16.2.y
linstor storage-pool create zfsthin server1 data data
linstor storage-pool create zfsthin server2 data data
linstor storage-pool list
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node    ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ SupportsSnapshots ┊ State ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ server1 ┊ DISKLESS ┊          ┊              ┊               ┊ False             ┊ Ok    ┊
┊ DfltDisklessStorPool ┊ server2 ┊ DISKLESS ┊          ┊              ┊               ┊ False             ┊ Ok    ┊
┊ data                 ┊ server1 ┊ ZFS_THIN ┊ data     ┊   874.84 GiB ┊       928 GiB ┊ True              ┊ Ok    ┊
┊ data                 ┊ server2 ┊ ZFS_THIN ┊ data     ┊   874.84 GiB ┊       928 GiB ┊ True              ┊ Ok    ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
</code>
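With more than two nodes the registration commands get repetitive. A small generator that only prints the command list is safe to run anywhere; review the output and pipe it to sh when you are happy with it (node names and addresses are placeholders):

```shell
#!/bin/sh
# Print (not run) the linstor registration commands for a set of nodes.
# Node names and addresses are placeholders; adjust before piping to sh.
for entry in "server1 172.16.2.x" "server2 172.16.2.y"; do
    set -- $entry                      # $1 = node name, $2 = address
    echo "linstor node create $1 $2"
    echo "linstor storage-pool create zfsthin $1 data data"
done
```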
==== Linstor_un driver for DRBD9 ====
<code>
apt install jq
</code>
Do not try to do a git clone and symlink it into place; that does not work, because the addon contains hard-coded paths.
<code>
curl -L https://github.com/OpenNebula/addon-linstor_un/archive/master.tar.gz | tar -xzvf - -C /tmp
mv /tmp/addon-linstor_un-master/vmm/kvm/* /var/lib/one/remotes/vmm/kvm/
mkdir -p /var/lib/one/remotes/etc/datastore/linstor_un
mv /tmp/addon-linstor_un-master/datastore/linstor_un/linstor_un.conf /var/lib/one/remotes/etc/datastore/linstor_un/linstor_un.conf
mv /tmp/addon-linstor_un-master/datastore/linstor_un /var/lib/one/remotes/datastore/linstor_un
mv /tmp/addon-linstor_un-master/tm/linstor_un /var/lib/one/remotes/tm/linstor_un
rm -rf /tmp/addon-linstor_un-master
chown -R oneadmin. /var/lib/one/remotes
</code>
Edit the OpenNebula config to enable the addon:
<code - /etc/one/oned.conf>
.
.
VM_MAD = [
     NAME           = "kvm",
-    ARGUMENTS      = "-t 15 -r 0 kvm",
+    ARGUMENTS      = "-t 15 -r 0 kvm -l save=save_linstor_un,restore=restore_linstor_un",
]
.
.
TM_MAD = [
     EXECUTABLE = "one_tm",
-    ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
+    ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt,linstor_un"
]
.
.
DATASTORE_MAD = [
     EXECUTABLE = "one_datastore",
-    ARGUMENTS  = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter -s shared,ssh,ceph,fs_lvm,qcow2,vcenter"
+    ARGUMENTS  = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,linstor_un -s shared,ssh,ceph,fs_lvm,qcow2,vcenter,linstor_un"
]
</code>
Add a new TM_MAD_CONF section:
<code - /etc/one/oned.conf>
.
.
TM_MAD_CONF = [
    NAME = "linstor_un", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "yes",
    DS_MIGRATE = "YES", DRIVER = "raw", ALLOW_ORPHANS="yes"
]
</code>
Add a new DS_MAD_CONF section:
<code - /etc/one/oned.conf>
.
.
DS_MAD_CONF = [
    NAME = "linstor_un", PERSISTENT_ONLY = "NO",
    MARKETPLACE_ACTIONS = "export"
]
</code>
Enable snapshotting (snapshots only work on powered-off VMs):
<code - /etc/one/vmm_exec/vmm_execrc>
- LIVE_DISK_SNAPSHOTS="kvm-qcow2 kvm-ceph"
+ LIVE_DISK_SNAPSHOTS="kvm-qcow2 kvm-ceph kvm-linstor_un"
</code>
<code>
systemctl restart opennebula opennebula-sunstone
</code>
===== Create datastores =====
<code>
su - oneadmin
</code>
<code>
cat > system-ds.conf <<EOT
NAME="linstor-system"
TYPE="SYSTEM_DS"
STORAGE_POOL="data"
AUTO_PLACE="2"
CHECKPOINT_AUTO_PLACE="1"
TM_MAD="linstor_un"
EOT
</code>
<code>
cat > images-ds.conf <<EOT
NAME="linstor-images"
TYPE="IMAGE_DS"
STORAGE_POOL="data"
AUTO_PLACE="2"
DISK_TYPE="BLOCK"
DS_MAD="linstor_un"
TM_MAD="linstor_un"
EOT
</code>
<code>
onedatastore create system-ds.conf
onedatastore create images-ds.conf
</code>
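As I understand the addon, AUTO_PLACE is the number of DRBD replicas LINSTOR places automatically and CHECKPOINT_AUTO_PLACE the replica count for checkpoint volumes. A typo in these templates tends to surface only when a datastore misbehaves, so a quick pre-flight check can help; a sketch (the required-key list is my assumption based on the templates above, not an official schema):

```shell
#!/bin/sh
# Sanity-check a datastore template before handing it to onedatastore.
# The required-key list is an assumption drawn from the templates above.
ds=$(mktemp)
cat > "$ds" <<EOT
NAME="linstor-system"
TYPE="SYSTEM_DS"
STORAGE_POOL="data"
AUTO_PLACE="2"
CHECKPOINT_AUTO_PLACE="1"
TM_MAD="linstor_un"
EOT
for key in NAME TYPE STORAGE_POOL AUTO_PLACE TM_MAD; do
    grep -q "^$key=" "$ds" || { echo "missing $key"; exit 1; }
done
echo "template OK"
rm -f "$ds"
```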
===== Configure oneadmin SSH =====
<code>
su - oneadmin
</code>
<code>
ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chown oneadmin:oneadmin ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
</code>
<code>
cat << EOT > ~/.ssh/config
Host *
StrictHostKeyChecking no
ConnectTimeout 5
UserKnownHostsFile /dev/null
EOT
chmod 600 ~/.ssh/config
</code>
Now copy the key material to all nodes. Set a password on the oneadmin user on every node first; once the copy is done, you can remove the password again on each back-end node with: passwd -d oneadmin
<code>
scp -r /var/lib/one/.ssh/ oneadmin@YOURHOST:/var/lib/one/
</code>
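To verify the keys before rebooting, a quick non-interactive loop helps (server1/server2 are placeholders for your back-end nodes; BatchMode makes ssh fail instead of prompting for a password):

```shell
#!/bin/sh
# Quick non-interactive login check; server1/server2 are placeholders
# for your back-end nodes. BatchMode forces key-only auth, so a password
# prompt counts as a failure instead of hanging the loop.
for host in server1 server2; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "oneadmin@$host" true 2>/dev/null; then
        echo "$host: key login OK"
    else
        echo "$host: key login FAILED"
    fi
done
```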
Check that you can log in to all nodes as the oneadmin user, then reboot the front-end node to make sure everything comes back up.
<code>
shutdown -r now
</code>
===== Move front-end VM to new cluster =====
I created the VM on a laptop and now I want to move it to a back-end node. In this case I move it to server1, and I do not want it to show up in OpenNebula.

Shut down the VM and copy the qcow2 file to /var/lib/libvirt/images on the back-end server.

Create a VM with virt-manager using the "import existing disk image" option. Make sure you connect it to the mgmt bridge.

Enable boot on startup and reboot your back-end server. The VM should start automatically; check with virt-manager on that server.