# Using vDPA devices in Kubernetes
## Introduction to vDPA
vDPA (Virtio DataPath Acceleration) is a technology that enables the acceleration
of virtIO devices while allowing the implementations of such devices
(e.g., NIC vendors) to use their own control plane.

The consumers of the virtIO devices (VMs or containers) interact with the devices
using the standard virtIO datapath and virtio-compatible control paths (virtIO, vhost).
While the data plane is mapped directly to the accelerator device, the control plane
is translated by the vDPA kernel framework.

The vDPA kernel framework is composed of a vdpa bus (/sys/bus/vdpa), vdpa devices
(/sys/bus/vdpa/devices) and vdpa drivers (/sys/bus/vdpa/drivers).
Currently, two vdpa drivers are implemented:
* virtio_vdpa: Exposes the device as a virtio-net netdev
* vhost_vdpa: Exposes the device as a vhost-vdpa device. This device uses an extension
of the vhost-net protocol to allow userspace applications to access the rings directly.
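
As an illustrative sketch, once the vdpa modules are loaded (see the device creation
section below), the framework can be inspected directly through sysfs. The device name
and the bound driver shown here are examples only:

    $ ls /sys/bus/vdpa/drivers
    vhost_vdpa  virtio_vdpa
    $ ls /sys/bus/vdpa/devices
    vdpa2
    $ readlink /sys/bus/vdpa/devices/vdpa2/driver
    ../../../../bus/vdpa/drivers/vhost_vdpa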

For more information about the vDPA framework, read the article on
[LWN.net](https://lwn.net/Articles/816063/) or the blog series written by one of the
main authors ([part 1](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-1-vdpa-bus-abstracting-hardware),
[part 2](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-2-vdpa-bus-drivers-kernel-subsystem-interactions),
[part 3](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-3-usage-vms-and-containers)).

## vDPA Management
Currently, the management of vDPA devices is performed using the sysfs interface exposed
by the vDPA framework. However, in order to decouple the management of vdpa devices from
the SR-IOV Device Plugin functionality, this low-level management is done in an external
library called [go-vdpa](https://github.com/redhat-virtio-net/govdpa).

In the context of the SR-IOV Device Plugin and the SR-IOV CNI, the current plan is to
support only 1:1 mappings between SR-IOV VFs and vDPA devices, even though the vDPA
framework might support 1:N mappings.
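
For example, on a NIC whose VFs expose vdpa management devices, creating two VFs yields
two management devices, each of which backs exactly one vdpa device. The interface name
and PCI addresses below are illustrative:

    $ echo 2 > /sys/class/net/enp101s0f0/device/sriov_numvfs
    $ vdpa mgmtdev show
    pci/0000:65:00.2:
      supported_classes net
    pci/0000:65:00.3:
      supported_classes net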

## Tested NICs
* Mellanox ConnectX®-6 DX

## Prerequisites
* Linux Kernel >= 5.12
* iproute2 >= 5.14

## vDPA device creation
Load the vdpa kernel modules if they are not already present:

    $ modprobe vdpa
    $ modprobe virtio-vdpa
    $ modprobe vhost-vdpa
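
To verify that the modules are in place, check lsmod (module sizes and use counts
below are illustrative):

    $ lsmod | grep vdpa
    vhost_vdpa             24576  0
    virtio_vdpa            16384  0
    vdpa                   28672  2 vhost_vdpa,virtio_vdpa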

Create a vdpa device using the vdpa management tool integrated into iproute2, e.g.:

    $ vdpa mgmtdev show
    pci/0000:65:00.2:
      supported_classes net
    $ vdpa dev add name vdpa2 mgmtdev pci/0000:65:00.2
    $ vdpa dev list
    vdpa2: type network mgmtdev pci/0000:65:00.2 vendor_id 5555 max_vqs 16 max_vq_size 256

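The same tool can remove a device, and recent iproute2 versions can also set device
attributes at creation time if the management device supports them (the MAC address
here is an illustrative assumption):

    $ vdpa dev del vdpa2
    $ vdpa dev add name vdpa2 mgmtdev pci/0000:65:00.2 mac 00:11:22:33:44:55
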
## Bind the desired vdpa driver
The vdpa bus works similarly to the PCI bus. To unbind a driver from a device, run:

    $ echo ${DEV_NAME} > /sys/bus/vdpa/devices/${DEV_NAME}/driver/unbind

To bind a driver to a device, run:

    $ echo ${DEV_NAME} > /sys/bus/vdpa/drivers/${DRIVER_NAME}/bind
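
For example, assuming the vdpa2 device created above is currently bound to
virtio_vdpa, it can be moved to vhost_vdpa as follows (the index of the resulting
character device is illustrative):

    $ echo vdpa2 > /sys/bus/vdpa/devices/vdpa2/driver/unbind
    $ echo vdpa2 > /sys/bus/vdpa/drivers/vhost_vdpa/bind
    $ ls /dev | grep vhost-vdpa
    vhost-vdpa-0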

## Configure the SR-IOV Device Plugin
See the sample [configMap](configMap.yaml) for an example of how to configure a vDPA device.
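
As a minimal sketch of such a configuration (the resource name, vendor ID and
vdpaType selector value are illustrative assumptions; the sample configMap above is
the authoritative reference):

    $ cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: sriovdp-config
      namespace: kube-system
    data:
      config.json: |
        {
          "resourceList": [{
            "resourceName": "vdpa_mlx_vhost",
            "selectors": {
              "vendors": ["15b3"],
              "vdpaType": "vhost"
            }
          }]
        }
    EOF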