====Initial OS Install====
As of 12/19/2020, this document is not as up to date as the similar document that uses ZFS.
===Partitioning===
If 4 drives are available use RAID 10; if 2 drives, use RAID 1.
Create a 2GB /boot standard partition on XFS (should be on a hardware R10, R1 or software R1).
Note: if using a software R1, install grub2 on both drives that participate in the R1 array or the R1 part of the R10 array (grub2-install /dev/sda and grub2-install /dev/sdb, or whatever device names you have); also make sure both drives are in the BIOS boot order list if possible.
Create a swap partition of 1x your RAM if under 16GB, or 0.5x your RAM if 16GB or more (should be on a hardware R10, R1 or software R10, R1).
Create a 10GB / partition on XFS (should be on a hardware R10, R1 or software R10, R1).
With the remaining space, create the mount point /var/lib/libvirt on XFS (should be on a hardware R10, R1 or software R10, R1).
Where possible (if extra disks are available), create a set of software-mirrored XFS volumes for backup and NAS backup purposes. Mount it at /VG_BACKUPS.
If you want to use LVM for your guests, then create a 2GB XFS /boot partition on RAID 1, a / partition of 10GB plus however many GB of RAM you have (so if you have 32GB of RAM, create a 42-50GB partition) on XFS on LVM on RAID 1 or 10, and a swap partition on LVM on RAID 1 or 10. During setup make sure the Volume Group (VG) is as large as possible, since we need the empty space for snapshot purposes. The / partition is sized larger based on RAM because the virtual guests will save their RAM states there on suspend. Once you're in Virtual Machine Manager, add your Logical Volume Group as a storage pool and name it VG.
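If you prefer the command line over Virtual Machine Manager, a minimal sketch of defining that pool with virsh (assuming the Volume Group is named VG) looks like this:
sudo virsh pool-define-as VG logical --source-name VG --target /dev/VG
sudo virsh pool-start VG
sudo virsh pool-autostart VG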
===Network===
Set up the network as you see fit; DHCP to start with is fine. Networking must be working and enabled for NTP to be configured.
Ideally use 1 NIC for an access/management bridge and 1+ NICs as a network bridge for virtual guests, or use bonding, VLANs and bridges (configure the bridge NICs later).
===Date/Time===
Select your timezone and enable NTP.
===Software selection===
Minimal
Begin install and create your root password.
==Create limited user account and add to wheel group for sudo==
This is optional; if you do this, then add sudo to all subsequent commands when logged in as your sudo user.
useradd example_user && passwd example_user
usermod -aG wheel example_user
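Verify the account is in the wheel group:
id example_user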
==Install dependencies and vim==
dnf install vim
Log out of root and log in using your sudo user.
==Disallow root login over SSH==
sudo vim /etc/ssh/sshd_config
then set
PermitRootLogin no
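Optionally validate the config before restarting (sshd -t prints nothing when the syntax is OK):
sudo sshd -t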
==Restart SSHD==
sudo systemctl restart sshd
====Install Packages====
After install and reboot, do a dnf update, then add a basic Gnome install:
sudo dnf install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts vim tar gnome-disk-utility gnome-system-monitor firefox rsyslog
sudo systemctl enable rsyslog
sudo systemctl start rsyslog
If you want Gnome to load on reboot, run the commands below (though you don't need to if you are only going to use VNC for remote management):
sudo unlink /etc/systemd/system/default.target
sudo ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
Install virtualization packages (virt-manager provides a nice GUI for managing your virtual guests):
sudo dnf install libvirt virt-manager
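If libvirtd isn't already enabled and started after the install, do so now:
sudo systemctl enable --now libvirtd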
==Add your sudo user to libvirt group==
https://computingforgeeks.com/use-virt-manager-as-non-root-user/
sudo usermod -aG libvirt example_user
Edit libvirtd.conf
sudo vim /etc/libvirt/libvirtd.conf
Uncomment the following:
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
You may also want to give the libvirt group read/write access to the /var/lib/libvirt/images directory so that you can scp to and from it remotely using your sudo user, since root SSH is disabled.
sudo chown root:libvirt /var/lib/libvirt/images
sudo chmod 771 /var/lib/libvirt/images
sudo chmod g+s /var/lib/libvirt/images
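Verify the ownership, permissions and setgid bit afterwards; the listing should start with drwxrws--x root libvirt:
ls -ld /var/lib/libvirt/images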
Restart libvirtd
sudo systemctl restart libvirtd
====Set System Variables====
===SSD Settings===
If you used SSD drives on MDADM (Linux software RAID), enable the fstrim timer for cleanup:
sudo systemctl enable fstrim.timer
sudo systemctl start fstrim.timer
Check the status of the timer by listing systemd timers: systemctl list-timers
Check trim support with: lsblk --discard
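You can also run a one-off trim manually to confirm it works (with -v it prints how many bytes were trimmed per mount):
sudo fstrim -av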
Stop writing a timestamp every time a file is accessed: edit the /etc/fstab file and replace all the defaults strings with defaults,noatime.
sudo vim /etc/fstab
For example:
/dev/mapper/rhel-root / xfs defaults 1 1
becomes:
/dev/mapper/rhel-root / xfs defaults,noatime 1 1
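To apply the change without a full reboot, remount the filesystem and check the active mount options:
sudo mount -o remount /
findmnt -no OPTIONS /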
If using LVM, enable the trim function by editing /etc/lvm/lvm.conf and changing
issue_discards = 0
to
issue_discards = 1
Other notes for using SSDs: https://www.certdepot.net/rhel7-extend-life-ssd/
===Performance Settings===
Set the proper performance profile via tuned-adm:
sudo tuned-adm profile virtual-host
then check to make sure:
sudo tuned-adm list
This should adjust the swappiness, switch to the deadline scheduler and tune other settings.
==Manually Specify Swappiness==
By default swappiness is set to 10 with the virtual-host profile; if you really want to avoid swapping, set it to 1, though make sure you have enough RAM for all of your guests. You might want to set your Linux virtual guests the same way so they avoid swapping if possible.
sudo vim /etc/sysctl.conf
Add the following:
vm.swappiness = 1
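Then load the new value and confirm it (no reboot needed):
sudo sysctl -p
cat /proc/sys/vm/swappiness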
Disable KSM (kernel samepage merging, a memory deduplication feature for oversubscribing memory on similar virtual guests):
sudo systemctl stop ksmtuned
sudo systemctl stop ksm
sudo systemctl disable ksm
sudo systemctl disable ksmtuned
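You can confirm KSM is off by checking the kernel knob (0 means it is stopped):
cat /sys/kernel/mm/ksm/run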
===Fix for Win10/2016+ BSOD or crashes===
I'm not sure of the cause, but this is the fix; research and determine if a better option exists.
sudo vim /etc/modprobe.d/kvm.conf
Add the line:
options kvm ignore_msrs=1
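After a reboot (or after reloading the kvm module) you can confirm the option took effect; it should print Y:
cat /sys/module/kvm/parameters/ignore_msrs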
===VNC Server===
Install a VNC server so you can quickly manage the VMs remotely:
sudo dnf install tigervnc-server
sudo cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:8.service
Edit the VNC config file (replace the user placeholder with your sudo user, as in the example below):
sudo vim /etc/systemd/system/vncserver@:8.service
Example post edit:
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver_wrapper example_user %i
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
Start the VNC server so it prompts for a password; set a password different from your sudo user password, and say no to a view-only password.
vncserver
Connect on port 5908 using your VNC viewer (preferably TigerVNC)
Note: If you connect via TigerVNC viewer it will show that your connection is insecure. This is because the certificates used aren't trusted; however, TLS encryption should be active. You can verify this by pressing F8 when using TigerVNC viewer and checking the connection info.
===Firewall===
Allow VNC server access
sudo firewall-cmd --permanent --zone=public --add-port=5908/tcp
sudo firewall-cmd --reload
sudo systemctl daemon-reload
sudo systemctl enable vncserver@:8.service
sudo systemctl start vncserver@:8.service
====Dell and LSI packages====
***Note: on older Dell servers this isn't supported anymore, definitely not the R710/R410/R210 generation,*** and as of Feb 2020 RHEL8 isn't yet supported by DSU or SRVADMIN\\
If you are using a Dell server and an LSI RAID controller, it is recommended to download and install Dell's OMSA and use Dell System Update (DSU) to update things.
Dell: http://linux.dell.com/repo/hardware/dsu/
Set up the Dell OpenManage Repository like this:
curl -O https://linux.dell.com/repo/hardware/dsu/bootstrap.cgi
sudo bash bootstrap.cgi
sudo dnf install srvadmin-all dell-system-update
Run dsu to update firmware/BIOS/etc.:
sudo dsu
Note: RHEL 8 has removed libssh2; you need to use EPEL to get it:
sudo dnf install epel-release
sudo dnf install libssh2
Note: you can log in to OMSA from the local computer at: https://localhost:1311 \\
Log in as root, otherwise you won't be able to change things.
====Network Bridge====
Network bridges for virtual guests. It is recommended to have one or more dedicated bridges for your guests (if more than 1 bridge, the separate bridges need to be VLANed or go to separate networks or you'll create a loop); LAG groups for better throughput are good too. Also, use a separate network card for management and file transfers so that traffic won't interfere with bridged network traffic:
===Creating Network Initscripts===
Use a consistent bridge name across hosts so that moving VMs is easy, and remember that names are case sensitive too! Recommend naming them BR0, BR1, BR2, BRX. Please try to have consistency across hosts: if you have two bridges on 1 host, have 2 bridges on all others configured the same way, connected to the same switched network.
To find the HWADDR, run:
ethtool -P if-name
or
ip link
where if-name is the name of the Ethernet interface, normally eth0, em0 or eno1.
In the /etc/sysconfig/network-scripts directory it is necessary to create 2 config files. The first (ifcfg-eno1, or ifcfg-em1, ifcfg-eth0, etc.) defines your physical network interface and says that it will be part of a bridge:
sudo vim /etc/sysconfig/network-scripts/ifcfg-eno1
Configure as so:
DEVICE=eno1
HWADDR=00:16:76:D6:C9:45 (use your actual HWADDR/MAC address here)
ONBOOT=yes
BRIDGE=br0
The second config file (ifcfg-br0) defines the bridge device:
sudo vim /etc/sysconfig/network-scripts/ifcfg-br0
Configure as so:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
DELAY=2
WARNING: The line TYPE=Bridge is case-sensitive: it must have an uppercase 'B' and lowercase 'ridge'.
Also, if you have only 1 Ethernet adapter, you will want to give the bridge device an IP on your LAN for management; see the static IP example below.
After changing this, restart networking (or simply reboot):
nmcli connection reload && systemctl restart NetworkManager
Example of ifcfg-br0 for static IP:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
DELAY=2
IPADDR=10.222.190.249
NETWORK=10.222.190.0
NETMASK=255.255.255.0
GATEWAY=10.222.190.250
DNS1=208.67.220.220
DNS2=208.67.222.222
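After restarting the network, confirm the bridge is up, has its IP, and has the physical NIC attached:
ip addr show br0
bridge link show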
==== NFS Server ====
This is used if you are going to copy virtual guests between hosts
=== NFS Setup ===
Install NFS packages and enable services
sudo dnf install nfs-utils libnfsidmap
sudo systemctl enable rpcbind
sudo systemctl enable nfs-server
sudo systemctl start rpcbind
sudo systemctl start nfs-server
sudo systemctl start rpc-statd
sudo systemctl start nfs-idmapd
Make a directory called VG_BACKUPS in /var/lib/libvirt/images:
mkdir /var/lib/libvirt/images/VG_BACKUPS
We have to modify the /etc/exports file to make an entry for the directory /var/lib/libvirt/images that you want to share.
sudo vim /etc/exports
Example of exports file
/var/lib/libvirt/images 172.21.21.0/24(rw,sync,no_root_squash,fsid=)
/var/lib/libvirt/images: this is the shared directory\\
172.21.21.0/24: this is the subnet that we want to allow access to the NFS share\\
rw: read/write permission to shared folder\\
sync: all changes to the according filesystem are immediately flushed to disk; the respective write operations are being waited for\\
no_root_squash: By default, any file request made by user root on the client machine is treated as by user nobody on the server.(Exactly which UID the request is mapped to depends on the UID of user “nobody” on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server.\\
fsid=somenumber gives the mount a unique id so that mounts are more easily managed by hosts. I recommend using the first and last octets of the host static IP as the “somenumber”
Export the NFS share:
sudo exportfs -r
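Confirm what is being exported:
sudo exportfs -v
You can also check from a client with: showmount -e server-ip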
We need to configure the firewall on the NFS server to allow client servers to access NFS shares. To do that, run the following commands on the NFS server.
sudo firewall-cmd --permanent --zone public --add-service mountd
sudo firewall-cmd --permanent --zone public --add-service rpc-bind
sudo firewall-cmd --permanent --zone public --add-service nfs
sudo firewall-cmd --reload
=== Configure NFS Clients ===
sudo dnf install nfs-utils libnfsidmap
sudo systemctl enable rpcbind
sudo systemctl enable nfs-server
sudo systemctl start rpcbind
sudo systemctl start nfs-server
sudo systemctl start rpc-statd
sudo systemctl start nfs-idmapd
Set SELinux options
sudo setsebool -P nfs_export_all_rw 1
sudo setsebool -P virt_use_nfs 1
Create mount points for NFS shares
sudo mkdir -p /mnt/VHSRV02/VG_IMAGES
(where VHSRV02 is the remote computer name; make one for each mount you will have). Client fstab entries to mount the NFS shares:
172.18.18.24:/var/lib/libvirt/images /mnt/VHSRV02/VG_IMAGES nfs4 noauto,nofail,x-systemd.automount,_netdev,x-systemd.device-timeout=14,proto=tcp,rsize=131072,wsize=131072 0 0
192.168.21.14:/VG_BACKUPS /mnt/VHSRV02/VG_BACKUPS nfs4 noauto,nofail,x-systemd.automount,_netdev,x-systemd.device-timeout=14,wsize=131072,rsize=131072 0 0
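After editing fstab, reload systemd and trigger the automount by accessing the path:
sudo systemctl daemon-reload
ls /mnt/VHSRV02/VG_IMAGES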
====Virtual Guest Tips====
In Windows guests, set the page/swap file to as small a size as possible, or try system managed.
On Windows guests, use Virtio NICs and Virtio SCSI (not just Virtio for the HDD). For the HDD, set cache mode to none and threads to native.
On all Linux guests, use Virtio NICs and Virtio SCSI (not just Virtio for the HDD). For the HDD, set the cache mode to none and threads to native.
In Linux guests (that use systemd), change the swappiness to reduce memory swapping by editing /etc/sysctl.conf and adding the line: vm.swappiness=10, then reboot. Check what your swappiness is by running: cat /proc/sys/vm/swappiness
Limit swap use
Edit the /etc/sysctl.conf file and paste the following lines:
vm.swappiness=1
vm.vfs_cache_pressure=50
Make the changes active:
sudo sysctl -p
Use an SSD-friendly I/O scheduler
Edit the /etc/default/grub file and add elevator=deadline at the end of the GRUB_CMDLINE_LINUX variable.
Make the change active:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
On all virtual guests, set the CPU to “copy host CPU configuration” and set the topology manually: leave the sockets and threads at 1 and use cores to allocate virtual CPUs.
====Common Issues====
===VNC Server===
If power is lost or the server isn't shut down cleanly, the VNC server service might not restart on boot, and manually restarting the VNC service fails. Normally the fix is to delete the lock files in the /tmp folder.
https://access.redhat.com/discussions/1149233
Example:
[root@xxx ~]# ls -d /tmp/.X*
/tmp/.X0-lock  /tmp/.X1-lock  /tmp/.X11-unix  /tmp/.X2-lock
[root@xxx ~]# rm -Rf /tmp/.X0-lock
[root@xxx ~]# rm -Rf /tmp/.X1-lock
[root@xxx ~]# rm -Rf /tmp/.X11-unix
[root@xxx ~]# rm -Rf /tmp/.X2-lock
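Equivalently, as a single cleanup command (make sure nothing else is using X on the host first):
sudo rm -Rf /tmp/.X*-lock /tmp/.X11-unix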
And when connecting, be sure you're connecting via port 5908 if you followed the setup in this document, so... ip.add.r.ess:5908 (otherwise it defaults to 5900).
===Network===
If your virtual host becomes unplugged from a network switch, then all network interfaces (bonds, bridges, vlans and vnets) will go down. On plugging it back in, the bonds, bridges and vlans will come back up automatically but the vnets won't. This means your virtual guests won't have network access until you shut them down and start them back up. Using ip link set vnetX up
seems to bring the interface up and the guest can ping out, but devices on the other side of the vnet interface can't seem to get in. Still working on an automated way to fix this; a partial workaround is sketched below. Nice IP command cheatsheet from Redhat: [[https://access.redhat.com/sites/default/files/attachments/rh_ip_command_cheatsheet_1214_jcs_print.pdf]]
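For reference, a sketch of applying that ip link workaround to every vnet interface at once (the inbound-traffic caveat above still applies):
for i in $(ip -o link show | awk -F': ' '/vnet/ {print $2}'); do sudo ip link set "$i" up; done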
===Shutting Down, Boot, Startup===
I am still unclear on whether the guests cleanly shut down when the host is issued a shutdown -r now. There is a config file
/etc/sysconfig/libvirt-guests
where options can be set for what to do, but I haven't tested them. Here is a link to some info from Redhat: [[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-manipulating_the_libvirt_guests_configuration_settings]]
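For reference, the file uses simple KEY=value shell syntax. An untested sketch (per the caveat above) of settings intended to cleanly shut guests down on host shutdown:
ON_BOOT=ignore
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300
Note that the libvirt-guests service must be enabled for these settings to be honored:
sudo systemctl enable --now libvirt-guests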
Also, if you shut down your virtual guests to do dnf updates on the host, any guests set to autoboot at startup will automatically start after an update to libvirt is installed. They will also do this if you restart libvirtd.