@@ -147,7 +147,7 @@ Once you have configured the options above on all the GPU nodes in your
 cluster, you can enable GPU support by deploying the following Daemonset:
 
 ```shell
-kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.17.0/deployments/static/nvidia-device-plugin.yml
+kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.17.1/deployments/static/nvidia-device-plugin.yml
 ```
 
 **Note:** This is a simple static daemonset meant to demonstrate the basic
@@ -636,12 +636,12 @@ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
 helm repo update
 ```
 
-Then verify that the latest release (`v0.17.0`) of the plugin is available:
+Then verify that the latest release (`v0.17.1`) of the plugin is available:
 
 ```shell
 $ helm search repo nvdp --devel
 NAME                       CHART VERSION  APP VERSION  DESCRIPTION
-nvdp/nvidia-device-plugin  0.17.0         0.17.0       A Helm chart for ...
+nvdp/nvidia-device-plugin  0.17.1         0.17.1       A Helm chart for ...
 ```
 
 Once this repo is updated, you can begin installing packages from it to deploy
@@ -653,7 +653,7 @@ The most basic installation command without any options is then:
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
   --namespace nvidia-device-plugin \
   --create-namespace \
-  --version 0.17.0
+  --version 0.17.1
 ```
 
 **Note:** You only need to pass the `--devel` flag to `helm search repo`
@@ -662,7 +662,7 @@ version (e.g. `<version>-rc.1`). Full releases will be listed without this.
 
 ### Configuring the device plugin's `helm` chart
 
-The `helm` chart for the latest release of the plugin (`v0.17.0`) includes
+The `helm` chart for the latest release of the plugin (`v0.17.1`) includes
 a number of customizable values.
 
 Prior to `v0.12.0` the most commonly used values were those that had direct
@@ -672,7 +672,7 @@ case of the original values is then to override an option from the `ConfigMap`
 if desired. Both methods are discussed in more detail below.
 
 The full set of values that can be set are found
-[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.17.0/deployments/helm/nvidia-device-plugin/values.yaml).
+[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.17.1/deployments/helm/nvidia-device-plugin/values.yaml).
 
 #### Passing configuration to the plugin via a `ConfigMap`
 
@@ -715,7 +715,7 @@ And deploy the device plugin via helm (pointing it at this config file and givin
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.17.0 \
+  --version=0.17.1 \
   --namespace nvidia-device-plugin \
   --create-namespace \
   --set-file config.map.config=/tmp/dp-example-config0.yaml
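For reference, a config file such as the one passed above follows the plugin's `version: v1` config-file format. A minimal sketch (the field values shown here are illustrative, not a recommendation):

```yaml
# Sketch of a device plugin config file; values are illustrative.
version: v1
flags:
  migStrategy: "none"    # one of "none", "single", "mixed"
  failOnInitError: true
```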
@@ -740,7 +740,7 @@ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.17.0 \
+  --version=0.17.1 \
   --namespace nvidia-device-plugin \
   --create-namespace \
   --set config.name=nvidia-plugin-configs
@@ -770,7 +770,7 @@ And redeploy the device plugin via helm (pointing it at both configs with a spec
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.17.0 \
+  --version=0.17.1 \
   --namespace nvidia-device-plugin \
   --create-namespace \
   --set config.default=config0 \
@@ -792,7 +792,7 @@ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.17.0 \
+  --version=0.17.1 \
   --namespace nvidia-device-plugin \
   --create-namespace \
   --set config.default=config0 \
@@ -878,7 +878,7 @@ runtimeClassName:
 ```
 
 Please take a look at the
-[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.17.0/deployments/helm/nvidia-device-plugin/values.yaml)
+[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.17.1/deployments/helm/nvidia-device-plugin/values.yaml)
 file to see the full set of overridable parameters for the device plugin.
 
 Examples of setting these options include:
@@ -888,7 +888,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.17.0 \
+  --version=0.17.1 \
   --namespace nvidia-device-plugin \
   --create-namespace \
   --set compatWithCPUManager=true \
@@ -900,7 +900,7 @@ Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`.
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.17.0 \
+  --version=0.17.1 \
   --namespace nvidia-device-plugin \
   --create-namespace \
   --set compatWithCPUManager=true \
@@ -919,7 +919,7 @@ To enable it, simply set `gfd.enabled=true` during helm install.
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.17.0 \
+  --version=0.17.1 \
   --namespace nvidia-device-plugin \
   --create-namespace \
   --set gfd.enabled=true
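Once GPU Feature Discovery is running, it attaches labels describing each node's GPUs. A few illustrative examples of the label keys it applies (the values shown here are made up; actual values depend on the node's hardware):

```yaml
# Illustrative node labels applied by GPU Feature Discovery; values vary per node.
nvidia.com/gpu.count: "2"
nvidia.com/gpu.memory: "16384"
nvidia.com/gpu.product: Tesla-V100-SXM2-16GB
```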
@@ -977,13 +977,13 @@ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
 helm repo update
 ```
 
-Then verify that the latest release (`v0.17.0`) of the plugin is available
+Then verify that the latest release (`v0.17.1`) of the plugin is available
 (Note that this includes the GFD chart):
 
 ```shell
 helm search repo nvdp --devel
 NAME                       CHART VERSION  APP VERSION  DESCRIPTION
-nvdp/nvidia-device-plugin  0.17.0         0.17.0       A Helm chart for ...
+nvdp/nvidia-device-plugin  0.17.1         0.17.1       A Helm chart for ...
 ```
 
 Once this repo is updated, you can begin installing packages from it to deploy
@@ -993,7 +993,7 @@ The most basic installation command without any options is then:
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version 0.17.0 \
+  --version 0.17.1 \
   --namespace gpu-feature-discovery \
   --create-namespace \
   --set devicePlugin.enabled=false
@@ -1004,7 +1004,7 @@ the default namespace.
 
 ```shell
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-  --version=0.17.0 \
+  --version=0.17.1 \
   --set allowDefaultNamespace=true \
   --set nfd.enabled=false \
   --set migStrategy=mixed \
@@ -1028,14 +1028,14 @@ Using the default values for the flags:
 helm upgrade -i nvdp \
   --namespace nvidia-device-plugin \
   --create-namespace \
-  https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.17.0.tgz
+  https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.17.1.tgz
 ```
 
 ## Building and Running Locally
 
 The next sections are focused on building the device plugin locally and running it.
 They are intended purely for development and testing, and not required by most users.
-They assume you are pinning to the latest release tag (i.e. `v0.17.0`), but can
+They assume you are pinning to the latest release tag (i.e. `v0.17.1`), but can
 easily be modified to work with any available tag or branch.
 
 ### With Docker
@@ -1045,8 +1045,8 @@ easily be modified to work with any available tag or branch.
 Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
 
 ```shell
-docker pull nvcr.io/nvidia/k8s-device-plugin:v0.17.0
-docker tag nvcr.io/nvidia/k8s-device-plugin:v0.17.0 nvcr.io/nvidia/k8s-device-plugin:devel
+docker pull nvcr.io/nvidia/k8s-device-plugin:v0.17.1
+docker tag nvcr.io/nvidia/k8s-device-plugin:v0.17.1 nvcr.io/nvidia/k8s-device-plugin:devel
 ```
 
 Option 2, build without cloning the repository:
@@ -1055,7 +1055,7 @@ Option 2, build without cloning the repository:
 docker build \
   -t nvcr.io/nvidia/k8s-device-plugin:devel \
   -f deployments/container/Dockerfile.ubuntu \
-  https://github.com/NVIDIA/k8s-device-plugin.git#v0.17.0
+  https://github.com/NVIDIA/k8s-device-plugin.git#v0.17.1
 ```
 
 Option 3, if you want to modify the code: