
Commit d0280c2: address comments

Signed-off-by: chipzoller <[email protected]>
Parent: 72a0227

File tree: 1 file changed (+32, -33)


README.md

Lines changed: 32 additions & 33 deletions
@@ -50,7 +50,7 @@ The NVIDIA device plugin for Kubernetes is a Daemonset that allows you to automa
 - Run GPU enabled containers in your Kubernetes cluster.
 
 This repository contains NVIDIA's official implementation of the [Kubernetes device plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
-As of v0.15.0 this repository also holds the implementation for GPU Feature Discovery labels,
+As of v0.16.1 this repository also holds the implementation for GPU Feature Discovery labels,
 for further information on GPU Feature Discovery see [here](docs/gpu-feature-discovery/README.md).
 
 Please note that:
@@ -134,7 +134,7 @@ Once you have configured the options above on all the GPU nodes in your
 cluster, you can enable GPU support by deploying the following Daemonset:
 
 ```shell
-$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.16.2/deployments/static/nvidia-device-plugin.yml
+$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.16.1/deployments/static/nvidia-device-plugin.yml
 ```
 
 **Note:** This is a simple static daemonset meant to demonstrate the basic
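
As an aside to the hunk above: once the static daemonset is applied, you can check that the plugin pods came up. A minimal sketch, assuming the static manifest keeps its conventional daemonset name and lands in `kube-system` (both are assumptions, not stated in this diff):

```shell
# Assumes the static manifest creates "nvidia-device-plugin-daemonset" in kube-system.
$ kubectl -n kube-system get daemonset nvidia-device-plugin-daemonset
$ kubectl -n kube-system get pods -o wide | grep nvidia-device-plugin
```
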
@@ -179,7 +179,7 @@ Done
 ```
 
 > [!WARNING]
-> If you don't request GPUs when using the device plugin with NVIDIA images, all the GPUs on the machine will be exposed inside your container.
+> If you do not request GPUs when you use the device plugin, the plugin exposes all the GPUs on the machine inside your container.
 
 ## Configuring the NVIDIA device plugin binary
 
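To make the warning in the hunk above concrete, here is a minimal sketch of a pod that explicitly requests one GPU, so only that GPU is visible in the container. The pod name, image tag, and command are illustrative assumptions, not part of this commit:

```shell
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04   # assumed tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1    # without this limit, all GPUs would be exposed
EOF
```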

@@ -337,7 +337,7 @@ extended options in its configuration file. There are two flavors of sharing
 available: Time-Slicing and MPS.
 
 > [!NOTE]
-> The use of time-slicing and MPS are mutually exclusive.
+> Time-slicing and MPS are mutually exclusive.
 
 In the case of time-slicing, CUDA time-slicing is used to allow workloads sharing a GPU to
 interleave with each other. However, nothing special is done to isolate workloads that are
@@ -350,9 +350,8 @@ In contrast to time-slicing, MPS does space partitioning and allows memory and
 compute resources to be explicitly partitioned and enforces these limits per
 workload.
 
-With both time-slicing and MPS the same sharing method is applied to all GPUs on
-a node. Sharing cannot be configured on a per-GPU basis but applies uniformly at
-the node level.
+With both time-slicing and MPS, the same sharing method is applied to all GPUs on
+a node. You cannot configure sharing on a per-GPU basis.
 
 #### With CUDA Time-Slicing
 
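For context on the sharing options discussed in the two hunks above, a minimal sketch of a time-slicing configuration file of the kind the plugin consumes; the `replicas: 4` value and the file path are illustrative choices:

```shell
$ cat <<EOF > /tmp/dp-example-config0.yaml
version: v1
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/gpu
      replicas: 4   # advertise each physical GPU as 4 shareable devices
EOF
```
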
@@ -583,11 +582,11 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
 $ helm repo update
 ```
 
-Then verify that the latest release (`v0.16.2`) of the plugin is available:
+Then verify that the latest release (`v0.16.1`) of the plugin is available:
 ```
 $ helm search repo nvdp --devel
 NAME                        CHART VERSION  APP VERSION  DESCRIPTION
-nvdp/nvidia-device-plugin   0.16.2         0.16.2       A Helm chart for ...
+nvdp/nvidia-device-plugin   0.16.1         0.16.1       A Helm chart for ...
 ```
 
 Once this repo is updated, you can begin installing packages from it to deploy
@@ -598,7 +597,7 @@ The most basic installation command without any options is then:
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
     --namespace nvidia-device-plugin \
     --create-namespace \
-    --version 0.16.2
+    --version 0.16.1
 ```
 
 **Note:** You only need the to pass the `--devel` flag to `helm search repo`
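
As a quick sanity check after the basic install command in the hunk above, you can confirm the plugin rolled out; an illustrative sketch, using the namespace from the `--namespace` flag shown there:

```shell
# List the plugin pods in the release namespace used above.
$ kubectl -n nvidia-device-plugin get pods
# Confirm nodes now report the nvidia.com/gpu resource.
$ kubectl describe nodes | grep -i "nvidia.com/gpu"
```
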
@@ -607,7 +606,7 @@ version (e.g. `<version>-rc.1`). Full releases will be listed without this.
 
 ### Configuring the device plugin's `helm` chart
 
-The `helm` chart for the latest release of the plugin (`v0.16.2`) includes
+The `helm` chart for the latest release of the plugin (`v0.16.1`) includes
 a number of customizable values.
 
 Prior to `v0.12.0` the most commonly used values were those that had direct
@@ -617,7 +616,7 @@ case of the original values is then to override an option from the `ConfigMap`
 if desired. Both methods are discussed in more detail below.
 
 The full set of values that can be set are found here:
-[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.16.2/deployments/helm/nvidia-device-plugin/values.yaml).
+[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.16.1/deployments/helm/nvidia-device-plugin/values.yaml).
 
 #### Passing configuration to the plugin via a `ConfigMap`.
 
@@ -657,7 +656,7 @@ EOF
 And deploy the device plugin via helm (pointing it at this config file and giving it a name):
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.2 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set-file config.map.config=/tmp/dp-example-config0.yaml
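
Once a sharing config like the one passed via `--set-file` above is active, each node should advertise more GPU replicas than it has physical devices. A hedged sketch of how you might verify this (the backslash-escaped dots in the column spec are needed because the resource name itself contains dots):

```shell
# Show the nvidia.com/gpu count allocatable on each node.
$ kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
```
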
@@ -679,7 +678,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
 ```
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.2 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.name=nvidia-plugin-configs
@@ -708,7 +707,7 @@ EOF
 And redeploy the device plugin via helm (pointing it at both configs with a specified default).
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.2 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.default=config0 \
@@ -727,7 +726,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
 ```
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.2 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.default=config0 \
@@ -811,7 +810,7 @@ runtimeClassName:
 ```
 
 Please take a look in the
-[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.16.2/deployments/helm/nvidia-device-plugin/values.yaml)
+[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.16.1/deployments/helm/nvidia-device-plugin/values.yaml)
 file to see the full set of overridable parameters for the device plugin.
 
 Examples of setting these options include:
@@ -820,7 +819,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
 100ms of CPU time and a limit of 512MB of memory.
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.2 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set compatWithCPUManager=true \
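
The hunk above cuts off before the resource flags that the surrounding prose describes. A sketch of what the complete command plausibly looks like; the two `--set resources...` flags are inferred from the prose ("100ms of CPU time and a limit of 512MB of memory"), not shown in this diff:

```shell
# The two resources flags below are inferred from the prose, not from this hunk.
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.16.1 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set compatWithCPUManager=true \
    --set resources.requests.cpu=100m \
    --set resources.limits.memory=512Mi
```
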
@@ -831,7 +830,7 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
 Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.2 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set compatWithCPUManager=true \
@@ -843,14 +842,14 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
 As of `v0.12.0`, the device plugin's helm chart has integrated support to
 deploy
 [`gpu-feature-discovery`](https://github.com/NVIDIA/gpu-feature-discovery)
-(GFD). One can use GFD to automatically generate labels for the
+(GFD). You can use GFD to automatically generate labels for the
 set of GPUs available on a node. Under the hood, it leverages Node Feature
 Discovery to perform this labeling.
 
 To enable it, simply set `gfd.enabled=true` during helm install.
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.2 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set gfd.enabled=true
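
With GFD enabled as above, nodes pick up `nvidia.com/...` labels like the `gpu.product` example in the next hunk. An illustrative way to list them (assumes `jq` is installed; any label-listing approach works):

```shell
# Filter node labels down to the nvidia.com namespace.
$ kubectl get nodes -o json \
    | jq '.items[].metadata.labels | with_entries(select(.key | startswith("nvidia.com")))'
```
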
@@ -895,7 +894,7 @@ nvidia.com/gpu.product = A100-SXM4-40GB-MIG-1g.5gb-SHARED
 
 #### Deploying gpu-feature-discovery in standalone mode
 
-As of v0.16.2, the device plugin's helm chart has integrated support to deploy
+As of v0.16.1, the device plugin's helm chart has integrated support to deploy
 [`gpu-feature-discovery`](https://gitlab.com/nvidia/kubernetes/gpu-feature-discovery/-/tree/main)
 
 When gpu-feature-discovery in deploying standalone, begin by setting up the
@@ -906,13 +905,13 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
 $ helm repo update
 ```
 
-Then verify that the latest release (`v0.16.2`) of the plugin is available
+Then verify that the latest release (`v0.16.1`) of the plugin is available
 (Note that this includes the GFD chart):
 
 ```shell
 $ helm search repo nvdp --devel
 NAME                        CHART VERSION  APP VERSION  DESCRIPTION
-nvdp/nvidia-device-plugin   0.16.2         0.16.2       A Helm chart for ...
+nvdp/nvidia-device-plugin   0.16.1         0.16.1       A Helm chart for ...
 ```
 
 Once this repo is updated, you can begin installing packages from it to deploy
@@ -922,7 +921,7 @@ The most basic installation command without any options is then:
 
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version 0.16.2 \
+    --version 0.16.1 \
     --namespace gpu-feature-discovery \
     --create-namespace \
     --set devicePlugin.enabled=false
@@ -933,7 +932,7 @@ the default namespace.
 
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.2 \
+    --version=0.16.1 \
     --set allowDefaultNamespace=true \
     --set nfd.enabled=false \
     --set migStrategy=mixed \
@@ -956,39 +955,39 @@ Using the default values for the flags:
 $ helm upgrade -i nvdp \
     --namespace nvidia-device-plugin \
     --create-namespace \
-    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.16.2.tgz
+    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.16.1.tgz
 ```
 
 ## Building and Running Locally
 
 The next sections are focused on building the device plugin locally and running it.
 It is intended purely for development and testing, and not required by most users.
-It assumes you are pinning to the latest release tag (i.e. `v0.16.2`), but can
+It assumes you are pinning to the latest release tag (i.e. `v0.16.1`), but can
 easily be modified to work with any available tag or branch.
 
 ### With Docker
 
 #### Build
 Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
 ```shell
-$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.16.2
-$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.16.2 nvcr.io/nvidia/k8s-device-plugin:devel
+$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.16.1
+$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.16.1 nvcr.io/nvidia/k8s-device-plugin:devel
 ```
 
 Option 2, build without cloning the repository:
 ```shell
 $ docker build \
     -t nvcr.io/nvidia/k8s-device-plugin:devel \
-    -f deployments/container/Dockerfile \
-    https://github.com/NVIDIA/k8s-device-plugin.git#v0.16.2
+    -f deployments/container/Dockerfile.ubuntu \
+    https://github.com/NVIDIA/k8s-device-plugin.git#v0.16.1
 ```
 
 Option 3, if you want to modify the code:
 ```shell
 $ git clone https://github.com/NVIDIA/k8s-device-plugin.git && cd k8s-device-plugin
 $ docker build \
     -t nvcr.io/nvidia/k8s-device-plugin:devel \
-    -f deployments/container/Dockerfile \
+    -f deployments/container/Dockerfile.ubuntu \
     .
 ```
 
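After building the `devel` image with any of the options above, a minimal sketch of running it locally outside Kubernetes; mounting the kubelet device-plugin directory is how the plugin registers itself, while the exact flags here are an assumption rather than part of this commit:

```shell
# Assumes the host has NVIDIA drivers and the kubelet socket directory present.
$ docker run -it \
    --network=none \
    -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \
    nvcr.io/nvidia/k8s-device-plugin:devel
```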
