NPU plugin support #2066

Status: Draft — wants to merge 3 commits into base `main`
1 change: 1 addition & 0 deletions .github/workflows/lib-build.yaml
@@ -27,6 +27,7 @@ jobs:
- intel-iaa-plugin
- intel-idxd-config-initcontainer
- intel-xpumanager-sidecar
- intel-npu-plugin

# # Demo images
- crypto-perf
10 changes: 9 additions & 1 deletion README.md
@@ -22,6 +22,7 @@ Table of Contents
* [DSA device plugin](#dsa-device-plugin)
* [DLB device plugin](#dlb-device-plugin)
* [IAA device plugin](#iaa-device-plugin)
* [NPU device plugin](#npu-device-plugin)
* [Device Plugins Operator](#device-plugins-operator)
* [XeLink XPU Manager sidecar](#xelink-xpu-manager-sidecar)
* [Intel GPU Level-Zero sidecar](#intel-gpu-levelzero)
@@ -182,12 +183,17 @@ Balancer accelerator(DLB).
The [IAA device plugin](cmd/iaa_plugin/README.md) supports acceleration using
the Intel Analytics accelerator(IAA).

### NPU Device Plugin

The [NPU device plugin](cmd/npu_plugin/README.md) supports acceleration using
the Intel Neural Processing Unit (NPU).

## Device Plugins Operator

To simplify the deployment of the device plugins, a unified device plugins
operator is implemented.

Currently the operator has support for the DSA, DLB, FPGA, GPU, IAA, QAT, and
Currently the operator has support for the DSA, DLB, FPGA, GPU, IAA, QAT, NPU, and
Intel SGX device plugins. Each device plugin has its own custom resource
definition (CRD) and the corresponding controller that watches CRUD operations
to those custom resources.
@@ -247,6 +253,8 @@ The summary of resources available via plugins in this repository is given in the
* [crypto-perf-dpdk-pod-requesting-qat-cy.yaml](deployments/qat_dpdk_app/crypto-perf/crypto-perf-dpdk-pod-requesting-qat-cy.yaml)
* `sgx.intel.com` : `epc`
* [intelsgx-job.yaml](deployments/sgx_enclave_apps/base/intelsgx-job.yaml)
* `npu.intel.com` : `npu`
* TODO

## Developers

69 changes: 69 additions & 0 deletions build/docker/intel-npu-plugin.Dockerfile
@@ -0,0 +1,69 @@
## This is a generated file, do not edit directly. Edit build/docker/templates/intel-npu-plugin.Dockerfile.in instead.
##
## Copyright 2022 Intel Corporation. All Rights Reserved.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
###
ARG CMD=npu_plugin
## FINAL_BASE can be used to configure the base image of the final image.
##
## This is used in two ways:
## 1) make <image-name> BUILDER=<docker|buildah>
## 2) docker build ... -f <image-name>.Dockerfile
##
## The project default is 1) which sets FINAL_BASE=gcr.io/distroless/static
## (see build-image.sh).
## 2) and the default FINAL_BASE is primarily used to build Redhat Certified Openshift Operator container images that must be UBI based.
## The RedHat build tool does not allow additional image build parameters.
ARG FINAL_BASE=registry.access.redhat.com/ubi9-micro:latest
###
##
## GOLANG_BASE can be used to make the build reproducible by choosing an
## image by its hash:
## GOLANG_BASE=golang@sha256:9d64369fd3c633df71d7465d67d43f63bb31192193e671742fa1c26ebc3a6210
##
## This is used on release branches before tagging a stable version.
## The main branch defaults to using the latest Golang base image.
ARG GOLANG_BASE=golang:1.24-bookworm
###
FROM ${GOLANG_BASE} AS builder
ARG DIR=/intel-device-plugins-for-kubernetes
ARG GO111MODULE=on
ARG LDFLAGS="all=-w -s"
ARG GOFLAGS="-trimpath"
ARG GCFLAGS="all=-spectre=all -N -l"
ARG ASMFLAGS="all=-spectre=all"
ARG GOLICENSES_VERSION
ARG EP=/usr/local/bin/intel_npu_device_plugin
ARG CMD
WORKDIR ${DIR}
COPY . .
RUN (cd cmd/${CMD}; GO111MODULE=${GO111MODULE} GOFLAGS=${GOFLAGS} CGO_ENABLED=0 go install -gcflags="${GCFLAGS}" -asmflags="${ASMFLAGS}" -ldflags="${LDFLAGS}") && install -D /go/bin/${CMD} /install_root${EP}
RUN install -D ${DIR}/LICENSE /install_root/licenses/intel-device-plugins-for-kubernetes/LICENSE \
&& if [ ! -d "licenses/$CMD" ] ; then \
GO111MODULE=on GOROOT=$(go env GOROOT) go run github.com/google/go-licenses@${GOLICENSES_VERSION} save "./cmd/$CMD" \
--save_path /install_root/licenses/$CMD/go-licenses ; \
else mkdir -p /install_root/licenses/$CMD/go-licenses/ && cd licenses/$CMD && cp -r * /install_root/licenses/$CMD/go-licenses/ ; fi && \
echo "Verifying installed licenses" && test -e /install_root/licenses/$CMD/go-licenses
###
FROM ${FINAL_BASE}
COPY --from=builder /install_root /
ENTRYPOINT ["/usr/local/bin/intel_npu_device_plugin"]
LABEL vendor='Intel®'
LABEL org.opencontainers.image.source='https://github.com/intel/intel-device-plugins-for-kubernetes'
LABEL maintainer="Intel®"
LABEL version='devel'
LABEL release='1'
LABEL name='intel-npu-plugin'
LABEL summary='Intel® NPU device plugin for Kubernetes'
LABEL description='The NPU device plugin provides access to Intel CPU neural processing unit (NPU) device files'
8 changes: 8 additions & 0 deletions build/docker/templates/intel-npu-plugin.Dockerfile.in
@@ -0,0 +1,8 @@
#define _ENTRYPOINT_ /usr/local/bin/intel_npu_device_plugin
ARG CMD=npu_plugin

#include "default_plugin.docker"

LABEL name='intel-npu-plugin'
LABEL summary='Intel® NPU device plugin for Kubernetes'
LABEL description='The NPU device plugin provides access to Intel CPU neural processing unit (NPU) device files'
90 changes: 90 additions & 0 deletions cmd/npu_plugin/README.md
@@ -0,0 +1,90 @@
# Intel NPU device plugin for Kubernetes

Table of Contents

* [Introduction](#introduction)
* [Modes and Configuration Options](#modes-and-configuration-options)
* [Pre-built Images](#pre-built-images)
* [Installation](#installation)
* [Install with NFD](#install-with-nfd)
* [Install with Operator](#install-with-operator)
* [Verify Plugin Registration](#verify-plugin-registration)
* [Testing and Demos](#testing-and-demos)

## Introduction

The Intel NPU plugin facilitates offloading Kubernetes workloads by providing access to the neural processing units (NPUs) integrated into Intel CPUs supported by the host kernel.

The following CPU families are currently detected by the plugin:
* Core Ultra Series 1
* Core Ultra Series 2
* Core Ultra 200V Series
> **Comment on lines +19 to +21** (Contributor): IMHO code names like Meteor Lake etc. are also useful, because that's e.g. how Wikipedia refers to them: https://en.wikipedia.org/wiki/Meteor_Lake => Maybe the code names could be given in parentheses?
>
> **Contributor Author**: I was requested to use the official names. :) But I agree that the code names are familiar to people, maybe even more familiar.

The Intel NPU plugin may register the following resource with the Kubernetes cluster:
| Resource | Description |
|:---- |:-------- |
| npu.intel.com/npu | NPU |
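
A workload consumes this resource by requesting it in its pod spec. The sketch below is hypothetical (the pod name and container image are placeholders, not part of this repository); only the `npu.intel.com/npu` resource name comes from the table above:

```yaml
# Hypothetical pod requesting one NPU via the resource
# advertised by the plugin (npu.intel.com/npu).
apiVersion: v1
kind: Pod
metadata:
  name: npu-test-pod
spec:
  restartPolicy: Never
  containers:
    - name: workload
      image: example.com/npu-workload:latest   # placeholder image
      resources:
        limits:
          npu.intel.com/npu: 1
```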

## Modes and Configuration Options

| Flag | Argument | Default | Meaning |
|:---- |:-------- |:------- |:------- |
| -shared-dev-num | int | 1 | Number of containers that can share the same NPU device |

The plugin also accepts a number of other arguments (common to all plugins) related to logging.
Use the `-h` option to see the complete list of logging-related options.
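
As a hedged sketch of how the `-shared-dev-num` flag from the table above could be overridden, the plugin DaemonSet's container args can be patched; the field paths are illustrative and the actual deployment manifests may lay this out differently:

```yaml
# Hedged sketch: passing -shared-dev-num to the plugin container.
# Container name and manifest layout are assumptions, not verified
# against the repository's deployment files.
spec:
  template:
    spec:
      containers:
        - name: intel-npu-plugin
          args:
            - "-shared-dev-num"
            - "2"   # allow two containers to share one NPU device
```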

## Pre-built Images

[Pre-built images](https://hub.docker.com/r/intel/intel-npu-plugin)
are available on Docker Hub. These images are automatically built and uploaded
to the hub from the latest main branch of this repository.

Release-tagged images of the components are also available on Docker Hub, tagged with their
release version numbers in the format `x.y.z`, corresponding to the branches and releases in this
repository.

See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.

## Installation

There are multiple ways to install the Intel NPU plugin in a cluster. The most common methods are described below.

> **Note**: Replace `<RELEASE_VERSION>` with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.

> **Note**: Add ```--dry-run=client -o yaml``` to the ```kubectl``` commands below to visualize the YAML content being applied.

### Install with NFD

Deploy the NPU plugin with the help of NFD ([Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery)). NFD detects the presence of Intel NPUs and labels the nodes accordingly. The NPU plugin's node selector is used to deploy the plugin to nodes that carry such an NPU label.

```bash
# Start NFD - if your cluster doesn't have NFD installed yet
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref=<RELEASE_VERSION>'

# Create NodeFeatureRules for detecting NPUs on nodes
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref=<RELEASE_VERSION>'

# Create NPU plugin daemonset
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/npu_plugin/overlays/nfd_labeled_nodes?ref=<RELEASE_VERSION>'
```

### Install with Operator

The NPU plugin can be installed with the Intel Device Plugin Operator. The operator allows configuring NPU plugin parameters without kustomizing the deployment files. The general installation is described in the [install documentation](../operator/README.md#installation).
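
With the operator installed, the plugin would be configured through a custom resource. The sketch below is an assumption by analogy with the operator's other plugin CRDs — the kind name, API version, and fields are not confirmed by this PR and may differ:

```yaml
# Hedged sketch of an operator custom resource for the NPU plugin.
# Kind and spec fields are assumed by analogy with the other device
# plugin CRDs in this project.
apiVersion: deviceplugin.intel.com/v1
kind: NpuDevicePlugin
metadata:
  name: npudeviceplugin-sample
spec:
  sharedDevNum: 1   # mirrors the plugin's -shared-dev-num flag
```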

### Verify Plugin Registration

You can verify that the plugin has been installed on the expected nodes by searching for the relevant
resource allocation status on the nodes:

```bash
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\n'}{' npu: '}{.status.allocatable.npu\.intel\.com/npu}{'\n'}"
master
npu: 1
```

## Testing and Demos

TODO
