
oVirt StorPool Adapter

This package provides StorPool integration for oVirt in the form of a Managed Storage Domain adapter.

Requirements

The StorPool adapter works through a Managed Storage Dispatch mechanism that is proposed by these pull requests:

Additionally, it requires the changes from the following pull request, which adds support for the StorPool storage driver type:

Current production versions of oVirt and Vdsm can be patched post-installation.

oVirt Engine

No StorPool installation is required on the Engine hosts; the adapter works entirely through the StorPool API.

This version of the integration requires a minimal /etc/storpool.conf that contains only the API host and authentication token. In the future this requirement will be dropped in favor of passing this information through driver_sensitive_options when creating the Managed Storage Domain.
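
A minimal sketch of such a file is shown below; SP_API_HTTP_HOST, SP_API_HTTP_PORT and SP_AUTH_TOKEN are standard StorPool configuration settings, and the values here are placeholders:

# Minimal /etc/storpool.conf sketch; replace the placeholder values
SP_API_HTTP_HOST=192.0.2.10
SP_API_HTTP_PORT=81
SP_AUTH_TOKEN=0123456789abcdef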

The adapter requires access to the CinderLib database, where it will create its own set of tables to store the mapping between oVirt's disk UUIDs and StorPool's volume global IDs. The CinderLib database can be created at any point by running the oVirt engine-setup utility with the --reconfigure-optional-components option.
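
For example, on the Engine host:

# (Re)configure optional Engine components, including the CinderLib database
engine-setup --reconfigure-optional-components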

Vdsm

Each Vdsm hypervisor node needs at least the StorPool block and client services installed so that it can attach StorPool volumes. Hypervisor nodes can also be hyperconverged (i.e. run the StorPool server as well).
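
A quick sanity check on a hypervisor node is sketched below; it assumes the standard storpool_beacon and storpool_block service names:

# The client services needed to attach volumes; hyperconverged nodes
# additionally run storpool_server
systemctl status storpool_beacon storpool_block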

In addition to the Managed Storage Dispatch mechanism, the StorPool adapter requires the Vdsm hook mechanism to be available.

The adapter uses the following hooks:

  • before_vm_start: implements a safer VM start. Before the VM boots, all other attachments of its disks are forcefully disconnected, preventing potential data corruption.
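
Conceptually, the safety step is equivalent to forcefully detaching each of the VM's StorPool volumes from every other host before the VM starts. The hook does this automatically; the CLI line below is only an illustration, and its exact syntax is an assumption:

# Hypothetical CLI equivalent of the hook's safety step
storpool detach volume <volume-name> all force yes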

Installation

After installing the package on each Vdsm host, the following steps must be performed (they will become part of the package post-install script in the future):

  • Add the sanlock, vdsm, qemu and ovirtimg users to the disk group. This gives the various non-superuser Vdsm and oVirt components read-write access to StorPool block devices.
for user in sanlock vdsm qemu ovirtimg; do
    usermod -a -G disk $user
done
  • Blacklist StorPool devices in the multipath configuration so that multipathd does not claim them (a verification sketch follows this list).
cat > /etc/multipath/conf.d/storpool_blacklist.conf <<EOF
blacklist {
    devnode "sp-[0-9]+"
}
EOF
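
After both steps, the group memberships can be verified and multipathd told to re-read its configuration. A minimal sketch, assuming multipathd is running:

# Verify that each user is now in the disk group
for user in sanlock vdsm qemu ovirtimg; do
    id -nG "$user" | grep -qw disk || echo "$user is NOT in the disk group"
done

# Reload the multipath configuration so the blacklist takes effect
multipathd reconfigure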

The provided package installs the required symlinks; users do not need to configure them manually.

For review purposes, these are the symlinks:

For oVirt engine:

  • /usr/share/ovirt-engine/cinderlib/storpool-adapter

For Vdsm:

  • /usr/libexec/vdsm/managedvolume-helper-storpool
  • /usr/libexec/vdsm/hooks/before_vm_start/99_storpool_hook

Storage Setup

StorPool storage domains are created using the Managed Storage Domain type, by setting the adapter field to storpool in the driver_options map. This can be done through the oVirt web admin interface or automated with Ansible as described below.

The adapter field is mandatory; it instructs the CinderlibExecutor to redirect all storage operations for that storage domain to the StorPool vendor helper scripts.

Example using the ovirt.ovirt.ovirt_storage_domain Ansible module:

ovirt.ovirt.ovirt_storage_domain:
  auth: "{{ ovirt_auth }}"
  data_center: Default
  host: "{{ vdsm_host }}"
  name: storpool
  managed_block_storage:
    driver_options:
      - name: adapter
        value: storpool
  state: present

The vdsm_host is the name of an already registered Vdsm host in the oVirt cluster, for example one added with the ovirt.ovirt.ovirt_host module or manually through the oVirt web admin interface. The Vdsm host must also have the StorPool integration installed beforehand.

The ovirt_auth fact can be obtained by running the ovirt.ovirt.ovirt_auth module. For more information, see the oVirt Ansible collection documentation at https://galaxy.ansible.com/ui/repo/published/ovirt/ovirt/docs/.
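
For example (a sketch; the Engine URL and credentials are placeholders):

ovirt.ovirt.ovirt_auth:
  url: https://engine.example.com/ovirt-engine/api
  username: admin@internal
  password: "{{ engine_password }}"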

Storage Usage

The StorPool Managed Storage integration supports the following:

  • Creating disks directly in the StorPool Storage domain;

  • Copying disks from other Storage Domains (e.g. NFS) to the StorPool Storage Domain;

  • Attaching StorPool disks to a VM (during VM creation, while the VM is running, or while it is powered down);

  • Live migrating a VM from one Vdsm host to another, where both Vdsm hosts have the StorPool integration;

  • Creating VM snapshots;

  • Previewing VM snapshots;

  • Reverting a VM to a snapshot preview;

  • Deleting VM snapshots.
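
For instance, creating a disk directly in the StorPool Storage Domain (the first item above) can be automated with the ovirt.ovirt.ovirt_disk module. This is a sketch; the disk name and size are placeholders:

ovirt.ovirt.ovirt_disk:
  auth: "{{ ovirt_auth }}"
  name: example_disk
  size: 10GiB
  format: raw
  storage_domain: storpool
  state: present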

StorPool Tags

The StorPool integration will tag each volume (oVirt disk) and snapshot with the corresponding oVirt UUID using the ovirt-uuid tag.
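
As an illustration, the StorPool volume backing a given oVirt disk could be located by filtering on this tag. The sketch below assumes the StorPool CLI's JSON output mode (-j) and the jq utility; the exact output layout may differ between StorPool versions:

# Hypothetical lookup of a volume by its ovirt-uuid tag
storpool -j volume list | jq -r '.data[] | select(.tags["ovirt-uuid"] == "<disk-uuid>") | .name'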
