Commit c50be3a

Document deploying DRA to OpenShift
* Document the differences on OpenShift
* Include useful setup scripts

Signed-off-by: Vitaliy Emporopulo <[email protected]>

Parent: ac31d61

File tree

5 files changed: +202 −1 lines changed
README.md

+1 −1
```diff
@@ -12,7 +12,7 @@ A document and demo of the DRA support for GPUs provided by this repo can be fou
 
 ## Demo
 
-This section describes using `kind` to demo the functionality of the NVIDIA GPU DRA Driver.
+This section describes using `kind` to demo the functionality of the NVIDIA GPU DRA Driver. For Red Hat OpenShift, refer to [running the NVIDIA DRA driver on OpenShift](demo/clusters/openshift/README.md).
 
 First since we'll launch kind with GPU support, ensure that the following prerequisites are met:
 1. `kind` is installed. See the official documentation [here](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
```

demo/clusters/openshift/README.md

+144
@@ -0,0 +1,144 @@

# Running the NVIDIA DRA Driver on Red Hat OpenShift

This document explains the differences between deploying the NVIDIA DRA driver on Red Hat OpenShift and on upstream Kubernetes or its flavors.

## Prerequisites

Install a recent build of OpenShift 4.16 (e.g. 4.16.0-ec.4). You can use the Assisted Installer to install on bare metal, or obtain an IPI installer binary (`openshift-install`) from the [Release Status](https://amd64.ocp.releases.ci.openshift.org/) page. Note that a development version of OpenShift requires access to [an internal CI registry](https://docs.ci.openshift.org/docs/how-tos/use-registries-in-build-farm/) in the pull secret. Refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.15/installing/index.html) for the different installation methods.
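
As a sketch, one way to extract the installer from a release image and drive an IPI install (the release image tag and install directory below are placeholders; substitute your own):

```console
$ oc adm release extract --tools quay.io/openshift-release-dev/ocp-release:4.16.0-ec.4-x86_64
$ tar xzf openshift-install-linux-4.16.0-ec.4.tar.gz
$ ./openshift-install create cluster --dir <install-dir>
```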

## Enabling DRA on OpenShift

Enable the `TechPreviewNoUpgrade` feature set as explained in [Enabling features using FeatureGates](https://docs.openshift.com/container-platform/4.15/nodes/clusters/nodes-cluster-enabling-features.html), either during the installation or post-install. The feature set includes the `DynamicResourceAllocation` feature gate.
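
For example, post-install, the feature set can be enabled by patching the cluster `FeatureGate` resource (note that enabling `TechPreviewNoUpgrade` cannot be undone):

```console
$ oc patch featuregate cluster --type merge -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}'
```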

Update the cluster scheduler to enable the DRA scheduling plugin:

```console
$ oc patch --type merge -p '{"spec":{"profile": "HighNodeUtilization", "profileCustomizations": {"dynamicResourceAllocation": "Enabled"}}}' scheduler cluster
```

## NVIDIA GPU Drivers

The easiest way to install NVIDIA GPU drivers on OpenShift nodes is via the NVIDIA GPU Operator.

**Be careful to disable the device plugin so it does not conflict with the DRA plugin**:

```yaml
devicePlugin:
  enabled: false
```

Keep in mind that the NVIDIA GPU Operator is needed here only to install the NVIDIA binaries on the cluster nodes; it should not be used for other purposes such as configuring GPUs.

The operator might not be available through the OperatorHub in a pre-production version of OpenShift. In this case, deploy the operator from a bundle or add a certified catalog index from an earlier version of OpenShift, e.g.:

```yaml
kind: CatalogSource
apiVersion: operators.coreos.com/v1alpha1
metadata:
  name: certified-operators-v415
  namespace: openshift-marketplace
spec:
  displayName: Certified Operators v4.15
  image: registry.redhat.io/redhat/certified-operator-index:v4.15
  priority: -100
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
```

Then follow the installation steps in [NVIDIA GPU Operator on Red Hat OpenShift Container Platform](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html).

## NVIDIA Binaries on RHCOS

The location of some NVIDIA binaries on an OpenShift node differs from the defaults. Make sure to pass the following values when installing the Helm chart:

```yaml
nvidiaDriverRoot: /run/nvidia/driver
nvidiaCtkPath: /var/usrlocal/nvidia/toolkit/nvidia-ctk
```
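
For example, assuming the driver chart is installed from a local checkout of this repository (the chart path and release name below are illustrative):

```console
$ helm upgrade -i --create-namespace --namespace nvidia-dra-driver \
    nvidia-dra-driver deployments/helm/k8s-dra-driver \
    --set nvidiaDriverRoot=/run/nvidia/driver \
    --set nvidiaCtkPath=/var/usrlocal/nvidia/toolkit/nvidia-ctk
```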

## OpenShift Security

OpenShift generally requires more stringent security settings than Kubernetes. If you see a warning about security context constraints when deploying the DRA plugin, pass the following to the Helm chart, either via an in-line variable or a values file:

```yaml
kubeletPlugin:
  containers:
    plugin:
      securityContext:
        privileged: true
        seccompProfile:
          type: Unconfined
```
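
For example, the same settings passed as in-line variables (equivalent to the values file above; chart path and release name are illustrative):

```console
$ helm upgrade -i nvidia-dra-driver deployments/helm/k8s-dra-driver \
    --namespace nvidia-dra-driver --create-namespace \
    --set kubeletPlugin.containers.plugin.securityContext.privileged=true \
    --set kubeletPlugin.containers.plugin.securityContext.seccompProfile.type=Unconfined
```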

If you see security context constraints errors/warnings when deploying a sample workload, make sure to update the workload's security settings according to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.15/operators/operator_sdk/osdk-complying-with-psa.html). Usually, applying the following `securityContext` definition at the pod or container level works for non-privileged workloads:

```yaml
securityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
```
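
For illustration, a minimal pod that applies this `securityContext` and requests a GPU through DRA; the `ResourceClaimTemplate` name and container image are assumptions, and the exact pod-spec fields may differ depending on your Kubernetes version's DRA API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  restartPolicy: Never
  resourceClaims:
  - name: gpu
    source:
      # Assumes a ResourceClaimTemplate named gpu-template already exists
      resourceClaimTemplateName: gpu-template
  containers:
  - name: ctr
    image: nvcr.io/nvidia/cuda:12.3.2-base-ubi9  # illustrative image
    command: ["nvidia-smi", "-L"]
    resources:
      claims:
      - name: gpu
    securityContext:
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
```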

If you see the following error when trying to deploy a workload:

```console
Warning  FailedScheduling  21m  default-scheduler  running Reserve plugin "DynamicResources": podschedulingcontexts.resource.k8s.io "gpu-example" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>
```

apply the following RBAC configuration (this should be fixed in newer OpenShift builds):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-scheduler:podfinalizers
rules:
- apiGroups:
  - ""
  resources:
  - pods/finalizers
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-scheduler:podfinalizers:crbinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-scheduler:podfinalizers
subjects:
- kind: User
  name: system:kube-scheduler
```

## Using Multi-Instance GPU (MIG)

Workloads that use the Multi-Instance GPU (MIG) feature require MIG to be [enabled](https://docs.nvidia.com/datacenter/tesla/mig-user-guide/index.html#enable-mig-mode) on the worker nodes with [MIG-supported GPUs](https://docs.nvidia.com/datacenter/tesla/mig-user-guide/index.html#supported-gpus), e.g. A100.

You can enable MIG via the driver daemon set pod running on a GPU node as follows (here, the GPU ID is 0, i.e. `-i 0`):

```console
$ oc exec -ti nvidia-driver-daemonset-416.94.202402160025-0-g45bd -n nvidia-gpu-operator -- nvidia-smi -i 0 -mig 1
Enabled MIG Mode for GPU 00000000:0A:00.0
All done.
```

Make sure to stop everything that may hold the GPU before enabling MIG. For example, the DCGM and DCGM Exporter of the NVIDIA GPU Operator are likely to prevent the MIG setting from being applied. Disable them in the operator's cluster policy if you are planning on using MIG; otherwise, you may see an error like the following:

```console
Warning: MIG mode is in pending enable state for GPU 00000001:00:00.0:In use by another client
00000001:00:00.0 is currently being used by one or more other processes (e.g. CUDA application or a monitoring application such as another instance of nvidia-smi). Please first kill all processes using the device and retry the command or reboot the system to make MIG mode effective.
```
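
As a hedged example, both components can be disabled with a patch like the following, assuming the default ClusterPolicy name `gpu-cluster-policy` (verify the name in your cluster first):

```console
$ oc patch clusterpolicy gpu-cluster-policy --type merge \
    -p '{"spec": {"dcgm": {"enabled": false}, "dcgmExporter": {"enabled": false}}}'
```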

If the MIG status is marked with an asterisk (i.e. `Enabled*`), it means that the setting could not be fully applied and you may need to reboot the node.
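
To check the current and pending MIG mode (the daemon set pod name is from the example above):

```console
$ oc exec -ti nvidia-driver-daemonset-416.94.202402160025-0-g45bd -n nvidia-gpu-operator -- \
    nvidia-smi -i 0 --query-gpu=mig.mode.current,mig.mode.pending --format=csv
```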

Do not pre-configure any MIG devices on a GPU that is going to be used with DRA: the DRA driver will configure MIG automatically on the fly.

@@ -0,0 +1,21 @@

```bash
#!/usr/bin/env bash

set -ex
set -o pipefail

oc create -f - <<EOF
kind: CatalogSource
apiVersion: operators.coreos.com/v1alpha1
metadata:
  name: certified-operators-v415
  namespace: openshift-marketplace
spec:
  displayName: Certified Operators v4.15
  image: registry.redhat.io/redhat/certified-operator-index:v4.15
  priority: -100
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
EOF
```

@@ -0,0 +1,6 @@

```bash
#!/usr/bin/env bash

set -ex
set -o pipefail

oc patch --type merge -p '{"spec":{"profile": "HighNodeUtilization", "profileCustomizations": {"dynamicResourceAllocation": "Enabled"}}}' scheduler cluster
```

@@ -0,0 +1,30 @@

```bash
#!/usr/bin/env bash

set -ex
set -o pipefail

oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-scheduler:podfinalizers
rules:
- apiGroups:
  - ""
  resources:
  - pods/finalizers
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-scheduler:podfinalizers:crbinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-scheduler:podfinalizers
subjects:
- kind: User
  name: system:kube-scheduler
EOF
```
