Red Hat Virtualization (RHV) Definitions, Requirements, and Installation

Mindwatering Incorporated

Author: Tripp W Black

Created: 10/07 at 04:33 PM

 

Category:
Linux
RHV

Definitions:
Host: Hardware that runs virtualization either as Type 2 (an add-on service on a general-purpose OS) or Type 1 (an integrated hypervisor OS).

Guest: VM or Container (Pod of Containers actually) running on the Host

Hypervisor: Software that partitions the host hardware into multiple VMs (CPU, Memory, Storage/Disk, and Networking) and runs them

HA: High Availability - If one host goes offline (isolation or hardware failure), the VMs can be re-started on the remaining "up" hosts.


Red Hat Virtualization Concepts/Definitions:
RHV - Open source virtualization platform that provides centralized management of hosts, VMs, and VDs (virtual desktops) across an enterprise datacenter. It consists of three major components:
- RHV-M - Manager Administrative Portal
- - "Hosted Engine" VM
- - Administrative Portal provides controls for management of the physical and virtual resources in a RHV environment. RHV-M also exposes REST APIs and SDKs for various programming languages.
- Physical hosts - RHV-H (RHV self-hosted engine hosts / hypervisors - type 1)
- - kernel-based KVM hypervisor, requiring hardware virtualization extensions (Intel VT-x or AMD-V) and the No eXecute (NX) flag. IBM POWER8 is also supported.
- - Anaconda for installation, LVM for image management, and the RHV-H's web console for local administration and monitoring
- Storage domains
- - Data domain is a centrally accessed repository for VM disks and images, ISO files, and other data accessible to all hosts in a RHV data-center. NFS, iSCSI, and others are used for storage domains.

Other components:
- Remote Viewer
- - client administrative workstation viewer to access consoles of RHV virtual machines. On RHEL client systems, it is installed with the spice-xpi package which installs the Remote Viewer and its required plugins.

Minimum Host Requirements:
- 2 GB RAM minimum; up to 4 TB RAM supported
- CPU with Intel VT-x or AMD-V extensions and the NX flag (see the check commands after these lists)
- Minimum storage is 55 GB
- /var/tmp at least 5 GB
Minimum Host Network Requirements:
- 1 NIC @ 1 Gbps
- - However min 3 recommended: 1 for mgmt traffic, 1 for VM guest traffic, and 1 for data domain storage traffic
- DNS and NTP servers must be external to the RHV environment (not run as VMs inside it), since the hosts come up before their VMs and must have working forward and reverse DNS entries.
- RHV-H firewall is auto-configured for required network services
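Quick hardware check (a minimal sketch, assuming a RHEL-based host; vmx = Intel VT-x, svm = AMD-V, nx = No eXecute):
[root@rhvhosta ~]# grep -cE 'vmx|svm' /proc/cpuinfo
<view output - a non-zero count means the virtualization extensions are present>
[root@rhvhosta ~]# grep -c nx /proc/cpuinfo
<view output - a non-zero count means the NX flag is present>
[root@rhvhosta ~]# lscpu | grep -i virtualization
<view output - shows VT-x or AMD-V when enabled in the firmware/BIOS>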

Storage Domain:
- Also called a RHEV Storage Domain. It is represented as a Volume Group (VG). Each VM w/in the VG has its own LV (Logical Volume), which becomes the VM's disk. Performance degrades with high numbers of LVs in a VG (300+), so the soft limit is 300. For more scalability, create new Storage Domains (VGs). See RH Technote: 441203 for performance limits. When a snapshot is created, a new LV is created for the VM within the Storage Domain VG.
- Types:
- - Data Domain: stores the hard disk images of the VMs (the LVs) and VM templates. Data Domains can utilize NFS, iSCSI, FCP, GlusterFS (deprecated), and POSIX storage. A data domain cannot be shared between data centers.
- - Export Domain: (deprecated) stores VM LV disk images and VM templates for transfer between data centers, and where backups of VMs are copied. Export Domains are NFS. Multiple data centers can access a single export domain, but it can only be used by one at a time.
- - ISO Domain: (deprecated) stores disk images for installations

Data Domain: hosted_storage
Management Logical Network: ovirtmgmt
DataCenter: default
Cluster: default

RHV-H = Host - Host Types:
Engine Host: Host w/Manager VM. Two hosts running manager (engines) = HA capable
Guest Host: Host running VMs

Two ways to get to a working RHV-H host/hypervisor:
- RHV-H ISO (or other methods)
- RHEL linux host with the Virtualization repository packages and modules added.

Hosts talk to the RHV-M VM (or separate server) via the Virtual Desktop and Server Manager (VDSM)
- Monitors memory, storage, and networks
- Creates, migrates, and destroys VMs

RHV-H - Red Hat Virtualization Host:
- Is a standalone minimal operating system based on RHEL
- Includes a graphical web admin management interface
- Installs via ISO file, USB storage, PXE network distribution, or by cloning

RHHI-V - Red Hat Hyper-converged Infrastructure for Virtualization
- Uses "self-hosted" engine
- Gluster Storage


RHV Installation Methods:
- Standalone = Manager separate (not hosted on the hosts = not self hosted)
or
- Self-Hosted Engine = Manager runs as a VM on the first host, after that host is installed.

Host Graphical UI:
https://rhvhosta.mindwatering.net:9090
- accept the self-signed certificate

When RHV-H hosts are installed (e.g. ISO) manually, they have to be subscription registered and enabled:
[root@hostname ~]# subscription-manager repos --enable=rhel-8-server-rhvh-4-rpms
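If the host is not yet registered, a minimal registration sketch to run before the repo enable above (the pool ID is a placeholder for your RHV entitlement):
[root@hostname ~]# subscription-manager register --username=<rhn_user>
<enter password when prompted>
[root@hostname ~]# subscription-manager list --available
<view output - note the Pool ID of the RHV/RHV-H entitlement>
[root@hostname ~]# subscription-manager attach --pool=<pool_id>
[root@hostname ~]# subscription-manager repos --list-enabled
<view output - confirm the enabled repo list after running the enable command above>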


RHV-M - Red Hat Virtualization Manager:
- Integrates w/various Directory services (JBOSS --> LDAP) for user management
- Manages physical and virtual resources in a RHV environment
- Uses local PostgreSQL databases for the configuration engine (engine) and the data warehouse (ovirt_engine_history)

Manager Standalone Installation Order:
1. Manager installed on separate server
2. Install hosts (min. 2 for HA)
3. Connect hosts to Manager
4. Attach storage accessible to all hosts

Self-Hosted Engine Installation Order:
1. Install first self-hosted engine host
- subscribe host to entitlements for RHV and enable software repos
- confirm DNS forward and reverse working
2. Create Manager VM on host
- create via the host graphical web console
or
- create on the host via the hosted-engine --deploy command. The GUI is the recommended method (see the command sketch after this list).
3. Attach storage accessible to all hosts
- Back-end storage is typically NFS or iSCSI. The storage attached becomes the "Default" data center and the "default" cluster. This default storage contains the LV/storage of the RHV-M VM created.
- This also illustrates that the RHV-M VM is actually created after the storage is attached
4. Install additional self-hosted engine hosts (min. 2 for HA)
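A minimal command-line sketch of step 2 plus a post-deployment check (assumes the ovirt-hosted-engine-setup package is present on the first host; the host web console GUI remains the recommended path):
[root@rhvhosta ~]# hosted-engine --deploy
<answer the interactive prompts: storage type/path, Manager VM FQDN, memory, admin password, etc.>
[root@rhvhosta ~]# hosted-engine --vm-status
<view output - confirm the engine VM is up and the engine health status is good>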

RHV-M Minimum Requirements:
- CPU: 2 core, quad core recommended
- Memory: 16 GB recommended, but can run with 4 GB if data warehouse is not installed, and memory not being consumed by existing processes
- Storage: 25 GB locally accessible/writable, but 50 GB or more recommended
- Network: 1 NIC, 1 Gbps min.

RHV-M Administration Portal:
https://rhvmgr.mindwatering.net
or
https://rhvmgr.mindwatering.net/ovirt-engine/webadmin/
User: admin
Password: <pwd set before clicking start button>

Verification of RHV-M appliance services:
$ ssh root@rhvmgr.mindwatering.net
<enter root pwd>
[root@rhvmgr ~]# host rhvmgr.mindwatering.net
<view output - confirm name and IP, note IP for next validation>
[root@rhvmgr ~]# host -t PTR <IP shown above>
<view output - confirm IP resolves to name>
[root@rhvmgr ~]# ip add show eth0
<view output of NIC config - confirm IP, broadcast, mtu, and up>
[root@rhvmgr ~]# free
<view output - confirm memory and swap sufficient to minimums>
[root@rhvmgr ~]# lscpu | grep 'CPU(s)'
<view output - confirm number of cores>
[root@rhvmgr ~]# grep -A1 localhost /usr/share/ovirt-engine-dwh/services/ovirt-engine-dwhd/ovirt-engine-dwhd.conf | grep -v '^#'
<view output - confirm PostgreSQL db listening and note ports>
[root@rhvmgr ~]# systemctl is-active ovirt-engine.service
<view output - confirm says "active">
[root@rhvmgr ~]# systemctl is-enabled ovirt-engine.service
<view output - confirm says "enabled">

Download of RHV-M CA root certificate for browser trust/install:
http://rhvmgr.mindwatering.net/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

If uploads (ISOs, etc) fail on upload, did the browser get the Manager certificate installed?
e.g. "Connection to ovirt-imageio-proxy service has failed. Make sure the service is installed, configured, and ovirt-engine-certificate is registered as a valid CA in the browser."

NFS Export Configuration for Storage Domains
- the storage server is not allowed to be one of the hosted VMs, as it must be up before the hosts are started
- read/write mode
- edit config file as either:
- - /etc/exports
or
- - /etc/exports.d/rhv.exports
- ownership:
- - top-level directory owned by user vdsm w/UID 36 and by group kvm w/GID 36, with 755 access (see the example commands after this list):
- - - user vdsm has rwx access (7)
- - - group kvm has r-x access (5)
- - - other has r-x access only (5)
- ensure NFS server service running (e.g. nfs-server.service enabled and active/running)
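Example export set-up (a minimal sketch, assuming the export is /exports/hosted_engine on rvhstor1 and the RHV hosts are on 10.0.0.0/24 - adjust the path and network to your environment):
[root@rvhstor1 ~]# mkdir -p /exports/hosted_engine
[root@rvhstor1 ~]# chown 36:36 /exports/hosted_engine
[root@rvhstor1 ~]# chmod 0755 /exports/hosted_engine
[root@rvhstor1 ~]# echo '/exports/hosted_engine 10.0.0.0/24(rw)' >> /etc/exports.d/rhv.exports
[root@rvhstor1 ~]# systemctl enable --now nfs-server.service
[root@rvhstor1 ~]# exportfs -rav
<view output - confirm the export is now listed>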

Verification of Export:
$ ssh root@rvhstor1.mindwatering.net
<enter root pwd>
[root@rvhstor1 ~]# systemctl is-active nfs-server.service
<view output - confirm active>
[root@rvhstor1 ~]# systemctl is-enabled nfs-server.service
<view output - confirm enabled>
[root@rvhstor1 ~]# firewall-cmd --get-active-zones
<view output - e.g. public>
[root@rvhstor1 ~]# firewall-cmd --list-all --zone=public
<view output - verify ports and services for nfs are included>
[root@rvhstor1 ~]# exportfs
<view output - verify that the /exports/<datanames> are listed and match what will be used in RHV e.g. hosted_engine>
[root@rvhstor1 ~]# ls -ld /exports/hosted_engine/
<view output - check permissions and ownership - e.g. drwxr-xr-x. 3 vdsm kvm 76 Mon 12 12:34 /exports/hosted_engine/>

Console View Apps for Administrative workstations using GUI consoles:
- Linux:
- - virt-viewer
- - $ sudo yum -y install virt-viewer
Note: documentation also says installed via spice-xpi package.

- MS Windows:
- - Download both the viewer and the USB device add-on from the RHV-M appliance:
- - - https://rhvmgr.mindwatering.net/ovirt-engine/services/files/spice/virt-viewer-x64.msi (64-bit)
- - - https://rhvmgr.mindwatering.net/ovirt-engine/services/files/spice/usbdk-x64.msi (64-bit)
- Viewer has headless mode and VNC option/use
- Viewer SPICE supports file transfer and clipboard copy-and-paste

Creating VM Notes:
- Disk Interface Types:
- - IDE: oldest and slowest, use for older OS and disk compatibility, not recommended
- - VirtIO: /dev/vdX drives, much faster than IDE, but more limited than SCSI. Use when advanced features are not needed.
- - VirtIO-SCSI: /dev/sdX drives, improves scalability, replaces the virtio-blk driver, can connect directly to SCSI LUNs and handles 100s of devices on single controller
- Allocation Policy - both thin and thick options
- - Same as vSphere: thick (preallocated) formats and allocates the whole disk up front; thin/sparse allocates only the initial space needed, and the rest is allocated as used.
- - Thin may be a little slower, as growing disk usage requires new allocations to be zeroed before new data blocks are written.

- Boot Notes:
- Run Once parameters intentionally persist through reboots, because software installations often require them to. You must shut down the VM before Run Once reverts to the normal boot defaults.


Data Centers and Clusters:
Data center:
- Top level organizational object in RHV
- Contains all the physical and logical resources in a single, managed, virtual environment; includes clusters, hosts, logical networks, and storage domains
- All hosts and clusters in the data center share the same storage and networks
- First auto-created data center is named "default"
- Future data centers created in the Administration Portal:
- - Administrative Portal --> Compute (left menu) --> Data Centers (menu tab) --> New (button)
- - - Name - Enter a unique sensible name
- - - Storage Type - If you change Type to Local, you will create a single host datacenter that can only use local storage
- - - Compatibility - Choose current (default) if all the clusters and hosts are the current version; otherwise, choose the current version of the hosts that are not yet upgraded
- - - Click OK

Data Center - Guide Me
- Wizard that guides creation of new clusters, hosts, and storage, attaching existing storage, configuring existing storage, and configuring ISO libraries or reconfiguring an ISO library
- A new datacenter is "Uninitialized" until first host and storage are added and the RHV-M can confirm they are usable; then the status is changed to "Up"

Cluster:
- Group of hosts in a single data center with the same architecture and CPU model. A cluster is a "migration domain", such that VMs can live migrate to only other machines in "this" cluster
- Clusters have a CPU Type family feature set that is shared by all hosts in the cluster - not necessarily the same exact CPU just same generation and family
- All hosts in a cluster must be configured with the same resources, including logical networks and storage domains.
- Stopped VMs can be cold migrated w/in the data center clusters w/o matching architecture or CPU.
- Networking can be standard "Linux Bridge", or the newer SDN Open vSwitch, "OVS (Experimental)" option, which is not yet supported for production, but popular for use anyway
- First auto-created cluster is named "default"
- Future clusters created in the Administration Portal:
Note: Create any MAC pool before creating cluster
- - Administrative Portal --> Administration (left menu) --> Configure (menu tab) --> MAC Address Pools (left menu list)
- - - Click Add (button)
- - - - Name: Enter a unique sensible name
- - - - MAC Address Ranges: Enter the starting and ending ranges in the two fields
- - - - Click OK (closes the pool window)
- - - Click Close (closes the Configure window)
- - Administrative Portal --> Compute (left menu) --> Clusters (menu tab) --> New (button)
- - - General (tab):
- - - - Datacenter - Select the data center for the new cluster
- - - - Name - Enter a unique sensible name
- - - - Management Network - If more than one management network select the correct one for the cluster/data center. The default auto-created management network to select is typically, "ovirtmgmt"
- - - - CPU Architecture - x86_64 typically
- - - - CPU Type - The cluster CPU baseline (lowest common CPU family/generation among hosts). Same concept as vSphere EVC, e.g. "Intel Westmere Family"
- - - - API Compatibility Version - Take default unless need to support third-party tools using an older version
- - - - Switch Type - Linux Bridge
- - - - Firewall Type - firewalld
- - - - Default Network Provider: No Default Provider
- - - - Maximum Log Memory Threshold: 95%
- - - - Enable Virt Service: Checked typically
- - - Optimization (tab)
- - - - Memory page sharing threshold, CPU thread handling, and memory ballooning settings
- - - Migration Policy (tab)
- - - - Rules for determining when existing VMs migrate automatically between hosts (specifically, Load balancing)
- - - Scheduling Policy (tab)
- - - - Rules for selecting which host new machines are placed/started
- - - Console (tab)
- - - - Console connection protocols and proxy configuration (e.g. default SPICE)
- - - Fencing (tab)
- - - - Actions for an isolated or failed/crashed host, typically to handle attached storage to limit corruption on failure
- - - MAC Address Pool
- - - - Custom MACs for use in this cluster instead of using the Data Center pool
- - - Click OK

General (tab) Other Info:
- If using an external random number generator for "entropy", you can set the cluster to use /dev/hwrng instead of /dev/urandom. If selected, all the hosts must have that hardware-based source installed and configured.

Hosts with HA and Migration:
- Pinning allows VMs to only run on a specific host. They cannot be live migrated and will not auto-migrate when their host goes into maintenance mode -- they get shut down.
- - To find Pinned VMs on Host: Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> open Host --> Details (screen) --> VMs (tab) --> Pinned to Host
- If the host is the current Storage Pool Manager (SPM), the role is migrated to another host in the cluster
- See Scaling RHV --> Maintenance mode further below


User Accounts and Roles:
Summary:
- RHV authenticates users based on information from an external LDAP server
- The initial local domain is called "internal" which contains the local user accounts (e.g. admin@internal)
- Additional local users for AAP or other utilities can be created with the ovirt-aaa-jdbc-tool or the Administrative Portal (see the sketch after this list)
- Directory support: OpenLDAP, RHEL Identity Manager (iDM), or MS AD, and others
- Directory addition adds ability for LDAP users to authenticate
- RHV uses roles to authorize users for access
- - Directory users have no authorization roles, they must be added
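Example local user creation with ovirt-aaa-jdbc-tool, run on the RHV-M VM (a sketch only - the svc-aap user name and attributes are placeholders; roles still must be assigned in the Administration Portal):
[root@rhvmgr ~]# ovirt-aaa-jdbc-tool user add svc-aap --attribute=firstName=Service --attribute=lastName=Account
[root@rhvmgr ~]# ovirt-aaa-jdbc-tool user password-reset svc-aap --password-valid-to="2026-01-01 12:00:00Z"
<enter new pwd>
<reenter new pwd>
[root@rhvmgr ~]# ovirt-aaa-jdbc-tool user show svc-aap
<view output - confirm the account exists in the internal domain>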

Under the covers:
- ovirt-engine-extension-aaa-ldap package provides LDAP support for OpenLDAP, iDM, and the others
- ovirt-engine-extension-aaa-ldap-setup package provides LDAP integration w/in RHV-M
- These packages must be installed in the RHV-M VM (install sketch below)
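A minimal install sketch on the RHV-M VM (assumes the Manager repos are already enabled; the setup package should pull the base ldap extension in as a dependency):
[root@rhvmgr ~]# yum install -y ovirt-engine-extension-aaa-ldap-setup
[root@rhvmgr ~]# rpm -q ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-ldap-setup
<view output - confirm both packages are installed before running the setup tool below>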

Add External LDAP RHEL iDM:
- iDM is based on the upstream FreeIPA project
- Gather LDAP source info:
- - FQDN of the LDAP server or its VIP
- - Public CA certificate in PEM format
- - Existing LDAP account (service account id and pwd) to use for authentication to the LDAP iDM
- - Base DN, User DN, and any other filter required
- Run ovirt-engine-extension-aaa-ldap-setup
- - Available LDAP Implementation: choose #6 - IPA
- - Use DNS: click <enter>, for [Yes] option
- - Available Policy method: #1 - Single Server
- - Host: mwid.mindwatering.net
- - Protocol to use: click <enter>, for [startTLS]
- - Method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): URL
- - URL: http://mwid.mindwatering.net/ipa/config/ca.crt
- - Search user DN: uid=rhvadmin,cn=users,cn=accounts,dc=mindwatering,dc=net
- - Enter search user password: ********
- - Enter base DN: <enter> (Typically is correct as parsed from the User DN e.g. dc=mindwatering,dc=net)
- - Profile name visible to users: mindwatering.net
- - Prompt for LDAP user name: rhvadmin ..., Prompt for LDAP user pwd: ***********
- After finishing set-up, restart the service w/:
# systemctl restart ovirt-engine

Add External OpenLDAP:
The set-up steps for AD, OpenLDAP, etc are very similar to steps above.

Managing User Access Summary:
- Access/authorization model: users, actions, and objects
- Actions are tasks on objects made by users (e.g. user stopping a VM)
- Actions have a permission.
- To simplify, permissions are grouped into related user-type or object-based roles. (e.g. SuperUser (admin) or PowerUserRole or HostAdmin)
- - For example, HostAdmin on a cluster gives administrative access to all hosts w/in that specific cluster only
- Users can be assigned roles over the entire data center, or just one object w/in the datacenter (e.g. a VM)
- Default (system) roles cannot be changed or removed
- Object Inheritance:
System --> Data Center
Data Center --> User
Data Center --> Storage Domain --> Template
Data Center --> Cluster --> Host and VM

User Role Types:
- Administrative Role: Users w/role can access the Administrative Portal
- - Use these roles to better manage user access and to delegate administrative authority. For example, assign SystemAdmin to specific users without giving them access to the admin@internal account. Users with roles can be properly tracked and managed for compliance.
- - Assign less comprehensive roles to appropriate users in order to offload administrative tasks. The DataCenterAdmin, ClusterAdmin, and PowerUserRole roles are useful for this purpose.
- User Role: Users w/role can access the VM Portal
- - users w/ UserRole can only see the "basic mode" of the VM Portal

Predefined User Roles:
Administrative Roles (Basic)
- SuperUser:
Role gives user full permissions across all objects and levels in your RHV environment. The admin@internal user has this role, and should be given only to the architects and engineers who create and manage the RHV

- DataCenterAdmin:
Role gives user administrative permissions across all objects in a data center, except for storage, which is managed by StorageAdmin. Users with this role can manage the assigned data center's objects but cannot create another data center or manage one not assigned

- ClusterAdmin:
Role gives user administrative permissions for all resources in a specific cluster. Users with this role can administer the assigned clusters but cannot create another or manage one not assigned

Administrative Roles (Advanced)
- TemplateAdmin:
Role gives users ability to create, delete, and configure templates w/in storage domains.

- StorageAdmin:
Role gives users ability to create, delete, and manage assigned storage domains.

- HostAdmin:
Role gives users ability to create, remove, configure, and manage a host.

- NetworkAdmin:
Role gives users ability to create, remove, and edit networks of an assigned data center or cluster.

- GlusterAdmin
Role represents the permissions required for a Red Hat Gluster Storage administrator. Users with this role can create, remove, and manage Gluster storage volumes.

- VmImporterExporter
Role gives users ability to import and export virtual machines.

User Roles (Basic)
- UserRole
This role allows users to log in to the VM Portal. This role allows the user to access and use assigned virtual machines, including checking their state, and viewing virtual machine details. This role does not allow the user to administer their assigned virtual machines.

- PowerUserRole
This role gives the user permission to manage and create virtual machines and templates at their assigned level. Users with this role assigned at a data center level can create virtual machines and templates in the data center. This role allows users to self-service their own virtual machines. The PowerUserRole includes/inherits the UserVmManager role when the user creates a VM. However, an admin can remove that "lower" role, and the access it grants, if desired

- UserVmManager
This role allows users to manage virtual machines, and to create and use snapshots for the VMs they are assigned, to edit the VMs' configuration, and delete the VMs. If a user creates a virtual machine using the VM Portal, that user is automatically assigned this role on the new virtual machine.

User Roles (Advanced)
- UserTemplateBasedVm
This role gives the user limited privileges to use only the virtual machine templates. Users with this role assigned can create virtual machines based on templates.

- DiskOperator
This role gives the user privileges to manage virtual disks. Users with this role assigned can use, view, and edit virtual disks.

- VmCreator
This role gives the user permission to create virtual machines using the User Portal. Users with this role assigned can create virtual machines using VM Portal.

- TemplateCreator
This role gives the user privileges to create, edit, manage, and remove templates. Users with this role assigned can create, remove, and edit templates.

- DiskCreator
This role gives the user permission to create, edit, manage, and remove virtual disks. Users with this role can create, remove, manage, and edit virtual disks within the assigned part of the environment.

- TemplateOwner
This role gives the user privileges to edit and remove templates, as well as assign user permissions for templates. It is automatically assigned to the user who creates a template.

- VnicProfileUser
This role gives the user permission to attach or detach network interfaces. Users with this role can attach or detach network interfaces from logical networks.


Role Assignment to Users:
- Assign system-wide roles using the Administration Portal:
--> Administration (menu) --> Configure (menu tab) --> System Permissions --> Add (button)
- Assign object-scoped roles at "that" object using the Administration Portal:
- - Example for data center access:
--> Compute (left menu) --> Data Centers --> Default (opened) --> Permissions (tab) --> Add (button) --> Complete the Add Permission to User dialog
- To remove a role, a user with the SuperUser role can return back to the same location and clear the box for the user in the dialog, and click OK.

Reset the Internal Admin account:
[root@rhvmgr ~]# ovirt-aaa-jdbc-tool user password-reset admin --password-valid-to="2026-01-01 12:00:00Z"
<enter new pwd>
<reenter new pwd>

Unlock the Internal Admin account (from too many failures):
[root@rhvmgr ~]# ovirt-aaa-jdbc-tool user unlock admin


Scaling RHV Infrastructure:
Summary:
- Add to increase capacity, or remove to reduce unneeded cluster capacity
- Place hosts into Maintenance Mode for any event that might cause VDSM to stop working on the RHV-H, e.g. host reboot, network issue repair, or storage issue repair

Host Maintenance Mode:
- Host status changes to: Preparing for Maintenance once selected, and Maintenance once achieved
- VDSM provides communication between RHV-M and the hosts, and maintenance mode disables engine health checking. VDSM continues to run on the RHV-H while in maintenance mode.
- Causes VMs to be migrated to another host in the cluster (assuming resources available on the other hosts)
- Pinned VMs are shut down.
- If the current host is the Storage Pool Manager (SPM), its SPM role is given to another host in the cluster.
- If the host was the last host active in the data center, the data center will also be placed into maintenance mode.

Data Center Maintenance Mode:
- Placing a data center in maintenance mode will place all storage domains into maintenance mode.
- The SPM has no tasks to manage.
- The data center outputs no logs. Logs restart when the master storage domain becomes active again.

Moving a Host between Clusters:
- The host does NOT have to be removed and re-added from the RHV-M Administration Portal.
- Place the host in Maintenance:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> View table of hosts --> select host --> On the Management button dropdown, select Maintenance.
- - Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> select host --> On the Management button dropdown, select Maintenance.
- Switch the cluster:
- - Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> select host --> Edit (button).
- - Host Cluster drop down: Select new cluster in same or different data center. Click OK to exit, click OK to confirm move.
- Activate the host from Maintenance:
- - Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> select host --> On the Management button dropdown, select Activate.

Removing a Host:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> View table of hosts --> select host --> On the Management button dropdown, select Maintenance.
- Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> select host --> On the Management button dropdown, select Maintenance.
- Wait until the Remove button becomes active. Verify the host is still selected, click Remove, and OK to confirm.
- The removal only removes its cluster and data center associations in the RHV engine database of the RHV-M. The host is not wiped.
- If desired, the host can be added to any existing cluster where it meets the CPU family and version criteria.

Adding a Host:
- Local storage domains and networks are added to the host when added to a cluster.
- The host firewalld rules are auto updated by the RHV-M.
- Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> click New (button)
- - Name: Specify a unique meaningful name
- - Credentials: Either SSH Login Username and Password or public key access (preferred)
- - Hostname: Enter host FQDN

Automated Host Provision - Kickstart:
- Requires a network installation server (Kickstart server) with PXE (Pre-boot eXecution Environment), TFTP, and a shared Kickstart file to start an installation by booting onto the network
- Requires host motherboard to be set-up/allow PXE boot
- Requires host NIC that supports PXE boot
- The Kickstart PXE server must have:
- - DHCP server to handle the initial communication, the network configuration (DHCP), and the TFTP server location for the usable boot image
- - TFTP server to provide boot images with command line options to start the installer
- - HTTP, FTP, and/or NFS server to provide the installation media and the Kickstart file for installation
- UEFI-based boot firmware requires additional files from the "shim" and "grub2-efi" packages, and a different configuration file
- - Instructions are in the RHEL 8 Installation Guide: Configuring a TFTP Server for UEFI-based AMD64 and Intel 64 Clients

PXE Boot Communication Summary:
- At boot, the client's network interface card broadcasts a DHCPDISCOVER packet extended with PXE-specific options. A DHCP server on the network replies with a DHCPOFFER, giving the client information about the PXE server and offering an IP address. When the client responds with a DHCPREQUEST, the server sends a DHCPACK with the Trivial FTP (TFTP) server URL of a file that can boot the client into an installer program.
- The client downloads the file from the TFTP server (frequently the same system as the DHCP server), verifies the file using a checksum, and loads the file. Typically, the file is a network boot loader called pxelinux.0. This boot loader accesses a configuration file on the TFTP server that tells it how to download and start the RHV-H installer, and how to locate the Kickstart file on an HTTP, FTP, or NFS server (see the example entry after this summary). After verification, the files are used to boot the client.
- See MW Support article: RHEL-based PXE Kickstart Network Server Setup
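Example pxelinux.cfg/default entry illustrating the flow above (a hypothetical sketch - the ks.mindwatering.net server, the kernel/initrd paths, and the rhvh.ks Kickstart file are placeholders for your installation server layout):
default rhvh
timeout 50
label rhvh
  kernel rhvh/vmlinuz
  append initrd=rhvh/initrd.img inst.stage2=http://ks.mindwatering.net/rhvh/ inst.ks=http://ks.mindwatering.net/ks/rhvh.ks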


Managing RHV Networks:
RHV Networks Summary:
- RHV configures logical networks that segregate different types of network traffic onto separate VLANs on physical networks for improved security and performance.
- - Example: Separate VLANs for management, storage, and VM guest traffic
- Logical networks general types:
- - VM Network (for cluster VMs)
- - Infrastructure Network (communication between RHV-M and the RHV hosts; not connected to VMs, and no Linux bridge is created for it on the RHV hosts)
- Logical networks are defined in the data center and assigned to one or more clusters. Networks spanning multiple clusters are typical, to provide communication between VMs in different clusters.
- Logical networks have a unique name, a unique VLAN tag (VLAN ID number), are mapped to virtual vNICs, and are configured with Quality of Service (QoS) and bandwidth limiting settings. If attached to the same NIC on the hosts, multiple logical networks may share the same label (e.g. internal, client/department id, etc.).
- Logical network labels may consist of only upper and lowercase letters, underscores, and hyphens. Adding a label to a logical network causes it to be automatically attached to the labeled NICs on all hosts in the cluster(s) having that logical network. Removing a label from a logical network removes the logical network(s) from all hosts with that label.
- Software defined Linux bridge is created, per logical network, on the RHV host that maps/provides connectivity between the vNICs and physical host NICs.
- VMs have vNICs assigned to a VM network; the host uses a Linux bridge to connect the VM Network to one of its NICs.
- Infrastructure networks are created at the cluster level and indicate what type of traffic is carried. Each host in the cluster will have a physical NIC configured for each infrastructure network.
- 1 GbE is typically sufficient for management network and the display network.
- 10 or 40 GbE is recommended for migration and storage networks, aggregated through NIC bonding or teaming

Logical Network Types:
- Management
This network role facilitates VDSM communication between the RHV-M and the RHV hosts. By default, it is created during the RHV-M engine deployment and named ovirtmgmt. It is the only logical network created automatically; all others are created according to environment requirements.

- Display
This network role is assigned to a network to carry the virtual machine display (SPICE or VNC) traffic from the Administration or VM Portal to the host running the VM. The RHV host then accesses the VM console using internal services. Display networks are not connected to virtual machine vNICs.

- VM network
Any logical network designated as a VM network carries network traffic relevant to the virtual machine network. This network is used for traffic created by VM applications and connects to VM vNICs. If applications require public access, this network must be configured to access appropriate routing and the public gateway.

- Storage
A storage network provides private access for storage traffic from RHV hosts to storage servers. Multiple storage networks can be created to further segregate file system based (NFS or POSIX) from block based (iSCSI or FCoE) traffic, to allow different performance tuning for each type. Jumbo Frames are commonly configured on storage networks. Storage networks are not a network role, but are configured to isolate storage traffic to separate VLANs or physical NICs for performance tuning and QoS. Storage networks are not connected to virtual machine vNICs.

- Migration
This network role is assigned to handle virtual machine migration traffic between RHV hosts. Assigning a dedicated non-routed migration network ensures that the management network does not lose connection to hypervisors during network-saturating VM migrations.

- Gluster
This network role is assigned to provide traffic from Red Hat Gluster Servers to GlusterFS storage clusters.

- Fencing
Although not a network role, creating a network for isolating fencing requests ensures that these critical requests are not missed. RHV-M does not perform host fencing itself but sends fence requests to the appropriate host to execute the fencing command.

Required vs. Optional Networks:
- When created, logical networks may be designated as Required at the cluster level. By default, new logical networks are added to clusters as required networks. Required networks must be connected to every host in the cluster, and are expected to always be operational.
- When a required network becomes nonoperational for a host, that host's virtual machines are migrated to another cluster host, as specified by the current cluster migration policy. Mission-critical workloads should be configured to use required networks.
- Logical networks that are not designated as required are regarded as optional. Optional networks may be implemented only on the hosts that will use them. The presence or absence of optional networks does not affect the host's operational status.
- When an optional network becomes nonoperational for a host, that host's virtual machines that were using that network are not migrated to another host. This prevents unnecessary overhead caused by multiple, simultaneous migrations for noncritical network outages. However, a virtual machine with a vNIC configured for an optional VM network will not start on a host that does not have that network available.

RHV Logical Layers Logic Network Configuration:
- Data Center Layer
Logical networks are defined at the data center level. Each data center has the ovirtmgmt management network by default. Additional logical networks are optional but recommended. VM network designation and a custom MTU are set at the data center level. A logical network defined for a data center must be added to the clusters that use the logical network.

- Cluster Layer
Logical networks are available from the data center, and added to clusters that will use them. Each cluster is connected to the management network by default. You can add any logical networks to a cluster if they are defined for the parent data center. When a required logical network is added to a cluster, it must be implemented on each cluster host. Optional logical networks can be added to hosts as needed.

- Host Layer
Virtual machine logical networks are connected to each host in a cluster and implemented as a Linux bridge device associated with a physical network interface. Infrastructure networks do not implement Linux bridges but are directly associated with host physical NICs. When first added to a cluster, each host has a management network automatically implemented as a bridge on one of its NICs. All required networks in a cluster must be associated with a NIC on each cluster host to become operational for the cluster.

- Virtual Machine Layer
Logical networks that are available for a host are available to attach to the host's VM NICs.

Logical Network Creation:
- Documentation says Administration Portal --> Compute (left menu) --> Networks, but there is nothing there.
- Administration Portal --> Network (left menu) --> Network (left menu tab) --> click New (button)
- In New Logical Network window:
- - General (tab):
- - - Data center: <select>
- - - Name: <unique name based on your organization naming convention>
- - - Description: <enter useful description as desired>
- - - Network Label: <unique label>
- - - VLAN Tagging: check checkbox unless network is flat (no VLANs)
- - - VM Network: uncheck if not used as VM Network
- - - MTU: 1500, unless this is a storage network and 9000 (jumbo frames) is required
- - Cluster (tab):
- - - Clusters: check/uncheck checkboxes of clusters whose hosts will attach/use this network. Uncheck the Required checkbox if the network is not required and is optional.
- - Click OK

To segment the type of network traffic (VM, Management, Storage, Display, Migration, Gluster, and Default route), perform at the cluster level:
- Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> click cluster on table to select and open --> Logical Networks (tab) --> highlight/select network --> Manage Networks (button)

Adding Logical Networks to RHV-H Hosts:
- After addition to the cluster(s), new logical networks are attached to all hosts in those cluster(s) in the state: Non Operational.
- To become Operational, the logical network must be attached to a physical NIC on every host in a cluster.

To attach Logical Network to RHV-H:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> select and open host --> Network Interfaces (tab) --> Setup Host Networks (button)
- In the Setup Host <hostname> Networks window, under Networks (depressed button), click and drag the logical network listed under the Unassigned Logical Networks to the grey dotted "no network assigned" box under "Assigned Logical Networks".
- Still in the Setup Host <hostname> Networks window, click the pencil icon to edit the network configuration mapping in the Edit Management Network window and set its network parameters:
- - IPv4 (tab)
- - - Boot: Static
- - - IP: <enter ip>
- - - Netmask/Routing Prefix: 255.255.255.0
- - - Gateway: <leave empty unless this interface is routing>
- - DNS (tab)
- - - Update as needed.
- - Click OK
- Still in the Setup Host <hostname> Networks window, under Labels (depressed button), click and drag label from right to left, as was done for the Logical Network itself.
- Verify connectivity between Host and Engine: <selected>
- Save network configuration: <selected>
- Click OK
Note: If the network config is not honored, force a sync from the RHV-M and the RHV-H, click Sync All Networks (button).

External Network Providers:
- Extended/utilized through RHV-M
- The external provider must expose the OpenStack Neutron REST API / Red Hat OpenStack Platform (RHOSP) Networking service
- Provides an API for SDN capabilities including dynamic creation and management of switches, routers, firewalls, and external connections to physical networks
- Neutron plugins include: Cisco virtual and physical switches, NEC OpenFlow, Open vSwitch/Open Virtual Networking, Linux bridging, VMware NSX, MidoNet, OpenContrail, Open Daylight, Brocade, Juniper, etc.
- RHV 4.3 supports RHOSP versions: 10, 13, 14 with original Open vSwitch driver
- RHV 4.3 supports RHOSP versions: 13 + with Open Daylight drivers

SDN Overview:
- Software-defined networking is more than deploying virtual networking components in a virtualization or cloud environment.
- The SDN controller is the control plane component that manages network devices in the data (forwarding) plane. These network devices, such as switches, routers, and firewalls, are programmatically configured for network routes, security, subnets and bandwidth in cooperation with the cloud-native application requiring dynamic services and allocation. An SDN controller centralizes the network global view, and presents the perception of a massively scalable, logical network switch to those applications.
- Open vSwitch (OVS) can plug and unplug ports, create networks or subnets, and provide IP addressing. An Open vSwitch bridge allocates virtual ports to instances, and can span across to the physical network for incoming and outgoing traffic. Implementation is provided by OpenFlow (OF), which defines the communication protocol that enables the SDN Controller to act as the middle manager with both physical and virtual networking components, passing information to switches and routers below, and the applications and business logic above.

Enhanced Open Virtual Networking (OVN) Features:
- Enhances the OVS significantly to add native support for virtual network abstractions, such as virtual L2 and L3 overlays and security groups.
- Provides a native overlay networking solution, by shifting networking away from being handled only by Linux host networking.
- Some high level features of OVN include:
- - Provides virtual networking abstraction for OVS, implemented using L2 and L3 overlays, but can also manage connectivity to physical networks.
- - Supports flexible security policies implemented using flows that use OVS connection tracking.
- - Native support for distributed L3 routing using OVS flows, with support for both IPv4 and IPv6.
- - Native support for NAT and load balancing using OVS connection tracking.
- - Native fully distributed support for DHCP.
- - Works with any OVS datapath (such as the default Linux kernel datapath, DPDK, or Hyper-V) that supports Geneve tunnels and OVS connection tracking.
- - Supports L3 gateways from logical to physical networks.
- - Supports software-based L2 gateways.
- - Can provide networking for both VMs and containers running inside of those VMs, without a second layer of overlay networking.
- Neutron Network Limitations with RHV:
- - For use as VM networks, cannot be used as Display networks
- - Always non-Required network type
- - Cannot be edited in RHV-M
- - Port mirroring is not available for vNICs connected to external provider logical networks
- - Skydive cannot support reviewing applications and reports on running processes

Integration Steps:
- Choose integration method:
- - RHOSP: use the Red Hat OpenStack Platform director to apply the Networker role to a node, which is then added by RHV-M as a RHV-H (recommended by RH).
- - Manual installation of Neutron agents:
- - - Register each host to the following repos: rhel-7-server-rpms, rhel-7-server-rhv-4-mgmt-agent-rpms, and rhel-7-server-ansible-2-rpms
- - - Install the OpenStack Networking "hook" (VDSM invokes the plugin, e.g. OVS, that passes networks to libvirt), and allow ICMP traffic into the hosts:
- - - - # yum update && yum install vdsm-hook-openstacknet
- - - - # iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
- Register Neutron network with RHV-M
- RHV-M automatically discovers networks and presents as Logical Networks for RHV-H assignments, and VM assignments.


Managing RHV Storage:
Storage Domain Overview:
- A collection of images with a common storage interface. A storage domain contains images of templates, virtual machines, snapshots, and ISO files.
- Is made of either block devices (iSCSI, FC) or a file system (NFS, GlusterFS).
- - On file system backed storage, all virtual disks, templates, and snapshots are files.
- - On block device backed storage, each virtual disk, template, or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks.
- Virtual disks use either the QCOW2 or raw format. Storage can be either sparse or preallocated. Snapshots are always sparse, but can be taken for disks of either format.
- Virtual machines that share the same storage domain can migrate between hosts in the same cluster.
- One host in the data center, regardless of cluster, is the Storage Pool Manager (SPM); the other hosts can only read the storage domain structural metadata, while the SPM host can read and write it. All hosts can read/write to the stored images in the storage domain.

Storage Types Back Ends:
- NFS
Red Hat Virtualization (RHV) supports NFS exports to create a storage domain. NFS exports are easy to manage, and work seamlessly with RHV. RHV recognizes NFS export resizing immediately, without requiring any additional manual intervention. NFS is supported for data, ISO, and export storage domains. When enterprise NFS is deployed over 10GbE, segregated with VLANs, and individual services are configured to use specific ports, it is both fast and secure.

- iSCSI
An iSCSI-based storage domain enables RHV to use existing Ethernet networks to connect to the block devices presented as LUNs in an iSCSI target. iSCSI-based storage is only supported for data domains. In a production environment, also consider booting hosts from an enterprise-grade iSCSI server. Enterprise iSCSI servers have native cloning features for easy deployment of new hosts using host templates. For optimum performance, hosts should use hardware-based iSCSI initiators and deploy over 10 GbE or faster networks.

- GlusterFS
RHV supports the native GlusterFS driver for creating a Red Hat Gluster Storage backed data storage domain. Three or more servers are configured as a Red Hat Gluster Storage server cluster, instead of using a SAN array. Red Hat Storage should be used over 10GbE and segregated with VLANs. Red Hat Gluster Storage is only supported for data domains.

- FC SAN
RHV also supports fast and secure Fibre Channel based SANs to create data storage domains. If you already have FC SAN in your environment, then you should take advantage of it for RHV. However, FC SANs require specialized network devices and skills to operate. Like iSCSI, a FC SAN also supports booting hosts directly from storage. FC SAN has the native cloning features to support easy deployment of new hosts using host templates.

- Local storage
Local storage should only be considered for small lab environments. Do not use local storage for production workloads. Local storage precludes the use of live migration, snapshots, and the flexibility that virtualization supports.

Storage Pool Manager Overview:
- The host that can make changes to the structure of the data domain is known as the Storage Pool Manager (SPM).
- The SPM coordinates all metadata changes in the data center, such as creating and deleting disk images, creating and merging snapshots, copying images between storage domains, creating templates, and storage allocation for block devices.
- There is one SPM for every data center. All other hosts can only read storage domain structural metadata.
- RHV-M identifies the SPM based on the SPM assignment history for the host. If the host was the last SPM used, RHV-M selects the host as the SPM. If the selected SPM is unresponsive, RHV-M randomly selects another potential SPM. This selection process requires the host to assume and retain a storage-centric lease, which allows the host to modify storage metadata. Storage-centric leases are saved in the storage domain, rather than in the RHV-M or hosts.
- The SPM must be running on a data center host to add and configure storage domains. An administrator must register a host (hypervisor) before setting up a new data center. Once a host is part of the data center, it is possible to configure the data center storage domains.
- In an NFS data domain, the SPM creates a virtual machine disk as a file in the file system, either as a QCOW2 file for thin provisioning (sparse), or as a normal file for preallocated storage space (RAW).
- In an iSCSI or FC data domain, the SPM creates a volume group (VG) on the storage domain's LUN, and creates a virtual machine disk as a logical volume (LV) in that volume group. For a preallocated virtual disk, the SPM creates a logical volume of the specified size (in GB). For a thin provisioned virtual disk, the SPM initially creates a 512 MB logical volume. The host on which the virtual machine is running continuously monitors the logical volume. If the host determines that more space is needed, then the host notifies the SPM, and the SPM extends the logical volume by another 512 MB.
- From a performance standpoint, a preallocated (RAW) virtual disk is significantly faster than a thin provisioned (QCOW2) virtual disk. The recommended practice is to use thin provisioning for non-I/O intensive virtual desktops, and preallocation for virtual servers.

Storage Domain Types:
- Data Storage Domain, ISO Storage Domain, and Export Storage Domain
- - The latter two are both deprecated. Use the Data Storage Domain for ISOs and to transfer VMs from a cluster to another cluster.

NFS-backed Storage Domain Configuration:
- Create the NFS export, note the server URL/export path and the username and password, and confirm permissions on the export target are correct for the username (see the showmount check after this section)
- Administration Portal --> Storage (left menu) --> Storage Domains (left menu tab) --> click New Domain (button)
- - Data Center: <default>
- - Name: nfs-storagedomain
- - Domain Function: Data
- - Description: Default datacenter storage domain
- - Storage Type: NFS
- - Host to Use: <select host>
- - Export Path: nfs-server1.mindwatering.net:/exports/rhvstorage1
- - Custom Connection Parameters:
- - - complete as needed
- - Click OK.

Note: The selected host will not necessarily be the first SPM; it is simply the first host used to access the new storage.
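A quick pre-check from one of the RHV hosts before adding the domain (a sketch, assuming nfs-utils is installed on the host):
[root@rhvhosta ~]# showmount -e nfs-server1.mindwatering.net
<view output - confirm /exports/rhvstorage1 is exported and visible to this host's network>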

iSCSI-backed Storage Domain Configuration:
- Create the iSCSI target and configure its LUNs for usage. Only one storage domain at a time can utilize each iSCSI LUN.
- Administration Portal --> Storage (left menu) --> Storage Domains (left menu tab) --> click New Domain (button)
- - Fill out like NFS above, except choose iSCSI, enter the IP and port for the target, click Discover
- - Afterwards, under Discover Targets, select the desired LUN by clicking the Add button.

Note: Previously used LUNs are hidden from re-use so they are not accidentally selected again. If we need to reuse a LUN, we need to clear its LUN ID, stop the multipathd.service on all hosts, and then start them all again (steps expanded below).
(Clear with multipath -l to get the LUN IDs, and then dd if=/dev/zero of=/dev/mapper/<LUNID> bs=1M count=200 oflag=direct)
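Expanded sketch of the LUN re-use steps (the <LUNID> is a placeholder; the dd destroys any data remaining on that LUN, so double-check the ID first):
[root@rhvhosta ~]# multipath -l
<view output - note the LUN ID / WWID of the device to clear>
[root@rhvhosta ~]# dd if=/dev/zero of=/dev/mapper/<LUNID> bs=1M count=200 oflag=direct
[root@rhvhosta ~]# systemctl stop multipathd.service
<repeat the stop on every host in the data center>
[root@rhvhosta ~]# systemctl start multipathd.service
<repeat the start on every host, then re-run the target discovery in the New Domain dialog>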

GlusterFS-backed Storage Domain Configuration:
- Install glusterfs-fuse and glusterfs packages on all RHV-H hosts.
- Administration Portal --> Storage (left menu) --> Storage Domains (left menu tab) --> click New Domain (button)
- - Type: GlusterFS

External Storage Providers:
Overview of OpenStack as External Provider:
- RHV can consume the OpenStack Glance REST API and re-use storage from OpenStack's Glance (image) and Cinder (block storage) services / RHOSP services
- OpenStack instance = VM
- OpenStack image = RHV template
- OpenStack Glance for Image Management:
- - The OpenStack image service provides a catalog of virtual machine images. In RHV, these images can be imported and used as floating disks, or attached to virtual machines and converted into templates. The Glance service is used as a storage domain that is not attached to any data center. Virtual disks in RHV can also be exported to Glance as virtual disks. Imported OpenStack images must be converted to templates to be used for deploying new virtual machines in RHV.
- - The authentication credentials set for the OpenStack image service external provider enables Red Hat Virtualization Manager to authenticate to the OpenStack identity service on behalf of the OpenStack image service.
- OpenStack Cinder for Storage Management:
- - The OpenStack block storage service provides persistent block storage management for virtual hard disks. Cinder volumes are provisioned by Ceph Storage. In RHV, you can create disks on Cinder storage that can be used as floating disks or attached to virtual machines. After you add Cinder to the RHV Manager, you can create a disk on the storage provided by Cinder.

Configure External Image (Glance) Provider in RHV-M:
- Get the URL, and the username and password from the OpenStack environment
- Administration Portal --> Administration (left menu) --> Providers (left menu tab) --> On Providers page, click Add (button)
- - Type: OpenStack Image
- - Provider URL: http://openstackserver.mindwatering.net:9292
- - Click Requires Authentication, enter the username and password.
- - Click OK

Configure External Block Storage (Cinder) Provider in RHV-M:
- Get the URL, and the username and password from the OpenStack environment
- Administration Portal --> Administration (left menu) --> Providers (left menu tab) --> On Providers page, click Add (button)
- - Type: OpenStack Block Storage
- - Provider URL: http://openstackserver.mindwatering.net:8776
- - Click Requires Authentication, enter the username and password.
- - Protocol: HTTP/HTTPS
- - Host Name (Keystone ID server): openstackidentity.mindwatering.net
- - API Port (Keystone ID server): 35357
- - Tenant Name: <services_tenant_name>
- - Click OK


Deploying VMs:
Overview:
- VMs are also called "guests"
- vSphere VM Tools = Guest Agents
- The hypervisor cannot run OS guests that are not compatible with the host architecture. For example, a VM built on an Apple M2 (ARM) Mac cannot be migrated and run, because the RHV hosts are Intel (or AMD) x86_64 architecture.
- Not supported:
- - Linux: SSO in gnome environments, agents other than the qemu-guest-agent, virt-sysprep (template sealing), virt-sparsify, v2v of RHEL 8 is not supported as of RHV 4.3.
- - Windows: Windows 11 is not yet supported in RHV 4.3 initial release, but is supported now (2025/09) if host OS is RHEL 8.6 or higher, Windows server higher than 2019 is not yet supported at RHV 4.3 release, but Server 2022 is supported now if host OS is RHEL 8.6 or higher. Support matrix is maintained on article: 973163

Instance Types - VM Sizing Presets:
- Tiny: 1 vCPU, 512 MB RAM
- Small: 1 vCPU, 2048 MB RAM
- Medium: 2 vCPU, 4096 MB RAM
- Large: 2 vCPU, 8192 MB RAM
- XLarge: 4 vCPU, 16384 MB RAM

Templates:
- Blank = empty/none; the list will also include any template images added to the Data Storage Domain for the data center

Operating System:
- Presets by OS, so that OS selection will select virtualized devices (motherboard/BIOS, disk interfaces, etc.) most compatible with that OS release.

Optimize For:
- Presets for advanced settings for persistence and configuration. Select Server for most VMs.

Instance Images:
- Used to create/configure the VM storage. (This is NOT template images. That was Template above.)
- Click Create to create a new disk
- - Interface specifies the hardware interface:
- - - VirtIO-SCSI and VirtIO are faster, but require that the guest OS selected above has the paravirtualized (VirtIO) guest drivers available. RHEL does.
- - - IDE emulates basic IDE long supported by most OS selections
- - Allocation Policy: Preallocated (Thick - whole disk is created and consumed on storage), or Thin Provision (sparse)
- - - Preallocation is faster from a performance standpoint, but takes up more space
- - - If the storage includes deduplication, choose Preallocated on the VM/RHV-side, and Thin on the back-end storage-side.
- - - If desired, the app-data disk can be separate from the VM "local" (OS) disk; this allows the VM template to contain the current version of the VM while the data is mounted and backed up separately. Managing these separately can be more troublesome to keep track of as machines are destroyed, and necessitates automation with logic on whether app-data should be deleted when the VM is deleted.
- - Instantiate VM network Interface selects vNIC (profile), click the dropdown to select the desired Logical Network. Click the + to add another vNIC.
- - Show Advanced Options
- - - Predefined Boot Sequence: Allows changing Boot order (move CD/DVDROM above disk, and move PXE Network Boot below Disk) and other advanced options.
- - - Attach CD: Also allows connecting an ISO for the OS installation to the new VM

During creation, or after creation:
- Before first boot, you generally want to attach an ISO to the virtual CD/DVD-ROM of the VM. Use Run Once and select the desired ISO image. The CD-ROM will stay connected across reboots, but will disconnect automatically after Shutdown.

Guest Agents:
- Provides RHV-M with information such as the VM's IP address(es); allows host memory management, resource usage reporting, and other performance and compatibility features; and allows RHV-M to gracefully shut down a VM instead of powering it off.
- The qemu-guest-agent service, provided by the qemu-guest-agent package, is installed by default on RHEL guests and provides the communication w/the RHV-M.
- The table below describes the different Linux guest drivers available for Red Hat Virtualization guests. Not all drivers are available for all supported operating systems. Some Guest OS installations can detect the hypervisor and choose install these drivers automatically (e.g. Ubuntu Linux).
- virtio-net: Paravirtualized network driver for enhancing the performance of network interfaces.
- virtio-block: Paravirtualized block (HDD) driver for increased I/O performance. Optimizes communication and coordination between the guest and hypervisor.
- virtio-scsi: Paravirtualized SCSI HDD driver; provides support for adding hundreds of devices and uses the standard SCSI device naming scheme.
- virtio-serial: Provides support for multiple serial ports to improve performance, for faster communication between the guest and host.
- virtio-balloon: Controls the amount of memory a guest can actually access. Optimizes memory overcommitment.
- qxl: Paravirtualized display driver that reduces CPU usage on the host and provides better performance.

- On Windows, install the RHV Agent as part of the RHEV-Tools installation. Below are the available guest agents and tools:
- ovirt-guest-agent-common: Allows Red Hat Virtualization Manager to execute specific commands, and to receive guest internal events or information.
- spice-agent: Supports multiple monitors, and reduces bandwidth usage over wide area networks. It also enables cut-and-paste operations for text and images between the guest and client.
- rhev-sso: Desktop agent that enables users to automatically log in to their virtual machines.

Installing the Guest Agents on Red Hat Enterprise Linux
- On Red Hat Enterprise Linux virtual machines, the Red Hat Virtualization guest agents are installed using the qemu-guest-agent package. This package is installed by default, even in a minimal installation of Red Hat Enterprise Linux 7 or Red Hat Enterprise Linux 8.
- Virtualization drivers are also installed by default in RHEL.

Viewing VM Usage Data:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select and open VM
- - The virtual machines page displays the IP addresses and FQDN for the virtual machine.
- - Additional information collected by the agent can be seen by clicking on the name of the virtual machine, and then exploring tabs such as General, Network Interfaces, and Guest Info.
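
A hedged SDK sketch for pulling the same agent-reported data (FQDN and IP addresses); the fields stay empty until the qemu-guest-agent is running in the guest. The VM name is a placeholder:

    # Hedged sketch: read guest-agent-reported FQDN and IPs with ovirtsdk4.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://rhvm.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redhat',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=devserver1')[0]
    print('FQDN reported by guest agent:', vm.fqdn)

    # Reported devices carry the IP addresses the agent sees inside the guest.
    devices = vms_service.vm_service(vm.id).reported_devices_service().list()
    for device in devices:
        for ip in (device.ips or []):
            print(device.name, ip.address)

    connection.close()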

Cloning a VM:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM
- - Click Shutdown (button at top of view)
- - Click Create Snapshot (button at top of view)
- The clone contains all of the original VM's configuration information; do not run it at the same time as the original machine.
- Alternately, create a "Sealed Template" to create a copy that is cleared of unique data, and create machines from the template.
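
As an alternative to the portal steps above, a hedged sketch using the SDK's clone action on a shut-down VM; names are placeholders and the action assumes an RHV 4.x level API:

    # Hedged sketch: clone a shut-down VM with ovirtsdk4.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhvm.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redhat',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=devserver1')[0]
    vm_service = vms_service.vm_service(vm.id)

    vm_service.shutdown()                                  # graceful shutdown via the guest agent
    # (wait for the VM to reach the Down state before cloning)
    vm_service.clone(vm=types.Vm(name='devserver1-clone')) # placeholder clone name

    connection.close()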

Editing a VM:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM
- - Click Edit (button at top of view)
- Changes that can only be performed while the VM is shut down:
- - Reducing memory size, reducing or increasing Maximum Memory, and reducing Physical Memory Guaranteed.
- - vCPUs can only be hot-unplugged if they were previously hot-plugged and the guest OS supports CPU hot add/remove.
- - Unplugging vNICs or disk images should only be done after the OS is no longer using them and their configuration has been removed from the OS. Removing the boot/system disk will leave the VM unbootable. (A hot-add SDK sketch follows this list.)
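
A hedged SDK sketch of the hot-add case - growing memory and vCPUs on a running VM by updating its definition; the reductions listed above still require a shutdown. The VM name and new sizes are placeholders:

    # Hedged sketch: hot-add memory and vCPUs on a running VM with ovirtsdk4.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhvm.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redhat',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=devserver1')[0]

    vms_service.vm_service(vm.id).update(
        types.Vm(
            memory=8192 * 2**20,                       # grow to 8192 MB; must stay within Maximum Memory
            cpu=types.Cpu(
                topology=types.CpuTopology(sockets=4, cores=1, threads=1)  # hot-plug by adding sockets
            ),
        )
    )

    connection.close()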

Creating a Template:
- Create a clone of a shut-down VM, or create a new VM, install the OS, and then shut it down.
- Boot the VM and run the seal/clean utility in the VM. The last part of the seal is shutting down the VM. Do not boot the VM again, or the sealing is undone and must be performed again. Perform Make Template next.
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight the VM previously cloned
- - Click the 3 dots (right of the buttons at the top of the view) --> choose Make Template. If Linux, check the "Seal Template (Linux only)" option to seal without manually performing the seal.
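
A hedged SDK sketch of Make Template with sealing - the seal flag corresponds to the "Seal Template (Linux only)" checkbox and assumes an RHV/oVirt 4.2+ API. Names are placeholders:

    # Hedged sketch: create a sealed template from a shut-down VM with ovirtsdk4.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhvm.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redhat',
        ca_file='ca.pem',
    )

    system_service = connection.system_service()
    vm = system_service.vms_service().list(search='name=devserver1-clone')[0]

    system_service.templates_service().add(
        template=types.Template(
            name='rhel8-web-template',   # placeholder template name
            vm=types.Vm(id=vm.id),       # source VM must be shut down
        ),
        seal=True,                       # Linux-only sealing during template creation
    )

    connection.close()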

Sealing the cloned VM Detailed Instructions:
- Once the virtual machine has been configured, remove all information unique to the virtual machine. This is referred to as sealing the image.
- - Unique information includes hardware information specific to the virtual machine, such as MAC addresses; unique system configurations, such as the host name and static IP addresses; and possibly logs and other data.
- - Depending on the operating system of the virtual machine, you may need to perform these steps manually, or there may be tools available to seal the image for you.
- - - Linux virtual machines typically use the virt-sysprep utility externally on a stopped virtual machine.
- - - Windows virtual machines typically run sysprep.exe inside the running guest to initiate a System Out-of-Box Experience (OOBE). The process for sealing a Windows virtual machine concludes when you shut down the virtual machine.
- - - The RHV-M Administration Portal also provides an option where it will seal a Linux virtual machine image prior to creating a template in the Make Template dialog/page. Be aware that this option may not work for all variants of Linux.
- - Items that were stripped out of the virtual machine during the sealing process are recreated when the virtual machine boots up for the first time. As such, once a virtual machine has been sealed, do not start it until after you have made a template. If you accidentally start the virtual machine before you create the template, you will have to go through the process of sealing the image again.

Creating New VMs from Templates with cloud-init:
- Used to automate the initial setup of virtual machines, such as configuring the host name, network interfaces, and authorized keys.
- Used to avoid conflicts on the network when provisioning virtual machines that have been deployed based on a template.
- cloud-init must be installed in the VM/template before it is used to provision new VMs.
- - The cloud-init package must first be installed on the virtual machine. Once installed, the cloud-init service starts during the boot process to search for configuration instructions.
- - - On a Red Hat Enterprise Linux 8 system, the cloud-init package is available in the rhel-8-for-x86_64-appstream-rpms repository.
- - - On a Red Hat Enterprise Linux 7 system, the cloud-init package is available in the rhel-7-server-rpms repository.
- Use the options in the Run Once window to provide instructions for the immediate boot process. You can persistently configure cloud-init to run every time the virtual machine boots by editing the virtual machine, and making changes to the Initial Run tab in the advanced options view. The same changes can be made to a template, so that any virtual machine created from the template will always run cloud-init at boot.
- Use case scenarios:
- - Customizing virtual machines using a standard template: Use the Use Cloud-Init/Sysprep options in the Initial Run tab of the New Template and Edit Template windows to specify options for customizing virtual machines that are based on this template.
- - Customizing virtual machines using "Initial Run": Administrators can use the cloud-init options in the Initial Run section of the Run Once window to initialize a virtual machine. This could be used to override settings set by a template.

Preparing the Template:
- As soon as the cloud-init package is installed on your Linux virtual machine, you can use that machine to create a template with cloud-init enabled.
- Configure the template so that the advanced option Initial Run has the setting Use Cloud-Init/Sysprep selected. This enables additional configuration options for setting host name, time zone, authentication, and network properties, and for running custom cloud-init scripts. This is roughly equivalent to vSphere's VM Customization Specifications.
- If the cloud-init settings are in the template, then when you create a new virtual machine, those Initial Run settings are applied to the virtual machine by default. You also have the option of overriding those settings from the New Virtual Machine window when you create the VM from the template.
- There are two easy ways to apply Initial Run settings to the template:
- - The template inherits any settings from the Initial Run configuration of the original virtual machine, just like it inherits other characteristics of the virtual machine. However, this means you have to change the base virtual machine's settings and then create the template.
- - You can create the template normally, and then use Edit Template to change the Initial Run settings for cloud-init. The original virtual machine will not have these settings applied, but machines created from the template will.
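
A hedged SDK sketch of the second approach (Edit Template) - updating the template's Initial Run / cloud-init defaults after the template exists. The template name and initialization values are placeholders:

    # Hedged sketch: set Initial Run (cloud-init) defaults on an existing template with ovirtsdk4.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhvm.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redhat',
        ca_file='ca.pem',
    )

    templates_service = connection.system_service().templates_service()
    template = templates_service.list(search='name=rhel8-web-template')[0]

    templates_service.template_service(template.id).update(
        types.Template(
            initialization=types.Initialization(
                host_name='webserver1.example.com',   # placeholder defaults
                timezone='America/New_York',
                user_name='root',
                root_password='changeme',
                dns_servers='192.168.1.10',
            )
        )
    )

    connection.close()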

Custom script partial example (from RH notes): add a user and create a .vimrc file for that user during VM provisioning from the template image.
users:
  - name: developer2
    passwd: $6$l.uq5YSZ/aebb.SN$S/KjOZQFn.3bZcmlgBRGF7fIEefBPCHD.k46IW0dKx/XK.I0DmZQBKGgCIxg7mykIIzzmW02JyZwXgORfHWBE.
    lock_passwd: false

write_files:
  - path: /home/developer2/.vimrc
    content: |
      set ai et ts=2 sts=2 sw=2
    owner: developer2:developer2
    mode: '0664'

Full reference example: cloudinit.readthedocs.io/en/latest/reference/examples.html
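
A hedged SDK sketch showing how a custom script like the one above can be passed through a Run Once style start with cloud-init enabled. The VM name, host name, and script body are placeholders:

    # Hedged sketch: start a VM with a cloud-init custom script via ovirtsdk4.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder custom script; a fuller example is the YAML shown above.
    CUSTOM_SCRIPT = """\
    users:
      - name: developer2
        lock_passwd: false
    """

    connection = sdk.Connection(
        url='https://rhvm.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redhat',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=webserver1')[0]

    vms_service.vm_service(vm.id).start(
        use_cloud_init=True,
        vm=types.Vm(
            initialization=types.Initialization(
                host_name='webserver1.example.com',
                custom_script=CUSTOM_SCRIPT,
            )
        ),
    )

    connection.close()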





