
Commit d2889e2 (committed Feb 1, 2021)

docs: document vdpa device type

Signed-off-by: Adrian Moreno <[email protected]>
1 parent ac3b7a1

4 files changed: +82 −1 lines

‎README.md

(+11 −1)

### Config parameters

This plugin creates device plugin endpoints based on the configurations given in the config map associated with the SR-IOV device plugin. In JSON format this file appears as shown below; the excerpt highlights the vDPA resource added to the example resource list:

```json
            "isRdma": true
        }
    },
    {
        "resourceName": "ct6dx_vdpa_vhost",
        "selectors": {
            "vendors": ["15b3"],
            "devices": ["101e"],
            "drivers": ["mlx5_core"],
            "vdpaType": "vhost"
        }
    },
    {
        "resourceName": "intel_fpga",
        "deviceType": "accelerator",
```

A `"vdpaType"` row is also added to the selector table (these selectors are applicable when `"deviceType"` is `"netDevice"`, which is the default):

| Selector | Required | Description | Type / Default | Example |
|----------|----------|-------------|----------------|---------|
| "ddpProfiles" | N | A map of device selectors | `string` list Default: `null` | "ddpProfiles": ["GTPv1-C/U IPv4/IPv6 payload"] |
| "isRdma" | N | Mount RDMA resources | `bool` values `true` or `false` Default: `false` | "isRdma": `true` |
| "needVhostNet" | N | Share /dev/vhost-net | `bool` values `true` or `false` Default: `false` | "needVhostNet": `true` |
| "vdpaType" | N | The type of vDPA device (virtio, vhost or `nil`) | `string` values `vhost` or `virtio` Default: `null` | "vdpaType": "vhost" |

[//]: # (The tables above generated using: https://ozh.github.io/ascii-tables/)

‎docs/README.md

(+1)

This page contains supplementary documentation that users may find useful for various use cases:

* [Running RDMA application in Kubernetes](rdma/)
* [SR-IOV network device plugin with DDP](ddp/)
* [Using node specific config file for running device plugin DaemonSet](config-file)
* [Using vDPA devices in Kubernetes](vdpa/)

‎docs/vdpa/README.md

(+40, new file)

# Using vDPA devices in Kubernetes

## Introduction to vDPA

vDPA (virtio DataPath Acceleration) is a technology that enables the acceleration of virtio devices while allowing the implementers of such devices (e.g. NIC vendors) to use their own control plane. The consumers of the virtio devices (VMs or containers) interact with the devices using the standard virtio datapath and virtio-compatible control paths (virtio, vhost). While the data plane is mapped directly to the accelerator device, the control plane is translated by the vDPA kernel framework.

The vDPA kernel framework is composed of a vdpa bus (/sys/bus/vdpa), vdpa devices (/sys/bus/vdpa/devices) and vdpa drivers (/sys/bus/vdpa/drivers). Currently, two vdpa drivers are implemented:
* virtio_vdpa: exposes the device as a virtio-net netdev
* vhost_vdpa: exposes the device as a vhost-vdpa device. This device uses an extension of the vhost-net protocol to allow userspace applications to access the rings directly

For more information about the vDPA framework, read the article on [LWN.net](https://lwn.net/Articles/816063/) or the blog series written by one of the main authors ([part 1](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-1-vdpa-bus-abstracting-hardware), [part 2](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-2-vdpa-bus-drivers-kernel-subsystem-interactions), [part 3](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-3-usage-vms-and-containers)).
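The bus layout described above can be inspected directly from a shell. The sketch below lists each vdpa device and the driver it is bound to; the bus-root parameter is purely an illustration convenience (the real bus is always /sys/bus/vdpa), and on a host with no vDPA devices the function simply prints nothing:

```sh
# Sketch: list devices on a vdpa-style bus and the driver each one is
# bound to. Each device directory contains a "driver" symlink pointing
# into /sys/bus/vdpa/drivers/<name> while a driver is bound.
list_vdpa() {
    bus="${1:-/sys/bus/vdpa}"
    for d in "$bus"/devices/*; do
        [ -e "$d" ] || continue                  # no devices present
        if [ -L "$d/driver" ]; then
            drv=$(basename "$(readlink "$d/driver")")
        else
            drv="(no driver bound)"
        fi
        echo "$(basename "$d") $drv"
    done
}

list_vdpa   # prints nothing until vdpa devices have been created
```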
## vDPA Management
Currently, the management of vDPA devices is performed using the sysfs interface exposed by the vDPA framework. However, in order to decouple the management of vdpa devices from the SR-IOV Device Plugin functionality, this low-level management is done in an external library called [go-vdpa](https://github.com/redhat-virtio-net/govdpa).

At the time of this writing (Jan 2021), work is being done to provide a [unified management tool for vDPA devices](https://lists.linuxfoundation.org/pipermail/virtualization/2020-November/050623.html). This tool will provide many *additional* features such as support for SubFunctions and 1:N mappings between VFs and vDPA devices.

In the context of the SR-IOV Device Plugin and the SR-IOV CNI, the current plan is to support only 1:1 mappings between SR-IOV VFs and vDPA devices. The adoption of the unified management interface might be considered while keeping this limitation.
## Tested NICs:
* Mellanox ConnectX®-6 DX \*

\* NVIDIA Mellanox official support for vDPA devices [is limited to SwitchDev mode](https://docs.mellanox.com/pages/viewpage.action?pageId=39285091#OVSOffloadUsingASAP%C2%B2Direct-hwvdpaVirtIOAccelerationthroughHardwarevDPA), which is out of the scope of the SR-IOV Network Device Plugin.

## Tested Kernel versions:
* 5.10.0
## vDPA device creation
Currently, each NIC might require different steps to create vDPA devices on top of the VFs. The unified management tool mentioned above will help unify this. The creation of vDPA devices on the vDPA bus is out of the scope of this project.
## Bind the desired vdpa driver
The vdpa bus works similarly to the PCI bus. To unbind a driver from a device, run:

```bash
echo ${DEV_NAME} > /sys/bus/vdpa/devices/${DEV_NAME}/driver/unbind
```

To bind a driver to a device, run:

```bash
echo ${DEV_NAME} > /sys/bus/vdpa/drivers/${DRIVER_NAME}/bind
```
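The two sysfs writes above can be wrapped in a small helper. The sketch below is not part of the SR-IOV Device Plugin, and `vdpa0` is a placeholder device name; by default the function only prints the writes it would perform, and a hypothetical `--apply` flag performs them for real:

```sh
# Sketch: move a vdpa device from its current driver to another one
# (vhost_vdpa or virtio_vdpa) via the sysfs files shown above.
rebind_vdpa() {
    dev="$1"      # vdpa device name, e.g. vdpa0 (placeholder)
    drv="$2"      # target driver: vhost_vdpa or virtio_vdpa
    apply="$3"    # pass --apply to actually perform the writes
    for f in "/sys/bus/vdpa/devices/$dev/driver/unbind" \
             "/sys/bus/vdpa/drivers/$drv/bind"; do
        if [ "$apply" = "--apply" ]; then
            echo "$dev" > "$f"
        else
            echo "would write '$dev' to $f"
        fi
    done
}

# Dry run: show the writes needed to move vdpa0 to vhost_vdpa.
rebind_vdpa vdpa0 vhost_vdpa
```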
## Privileges
The IPC_LOCK capability is required in order to use the "vhost" mode in a Kubernetes Pod.
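A minimal Pod sketch follows. It assumes the `ct6dx_vdpa_vhost` resource from the README example and the device plugin's default `intel.com` resource prefix; the Pod name, container image, and resource name are all placeholders to adjust for your deployment:

```yaml
# Hypothetical Pod spec (not part of this commit): requests one vhost
# vDPA resource and adds the IPC_LOCK capability required for vhost mode.
apiVersion: v1
kind: Pod
metadata:
  name: vdpa-vhost-pod
spec:
  containers:
  - name: app
    image: busybox               # placeholder image
    command: ["sleep", "infinity"]
    securityContext:
      capabilities:
        add: ["IPC_LOCK"]        # needed for "vhost" vdpaType
    resources:
      requests:
        intel.com/ct6dx_vdpa_vhost: '1'
      limits:
        intel.com/ct6dx_vdpa_vhost: '1'
```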

‎docs/vdpa/configMap.yaml

(+30, new file)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
        "resourceList": [
            {
                "resourceName": "vdpa_mlx_virtio",
                "selectors": {
                    "vendors": ["15b3"],
                    "devices": ["101e"],
                    "drivers": ["mlx5_core"],
                    "vdpaType": "virtio"
                }
            },
            {
                "resourceName": "vdpa_mlx_vhost",
                "selectors": {
                    "vendors": ["15b3"],
                    "devices": ["101e"],
                    "drivers": ["mlx5_core"],
                    "vdpaType": "vhost"
                }
            }
        ]
    }
```
