This document has instructions for running TensorFlow using Intel® Extension for TensorFlow* in a Docker container.

Assumptions:

* Host machine contains an Intel GPU.
* Host machine uses a Linux kernel that is compatible with GPU drivers.
* Host machine has a compatible Intel GPU driver installed.
* Host machine has Docker software installed.
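These assumptions can be spot-checked on the host before building. A minimal sketch (command names and device paths assume a typical Linux setup; adjust for your distribution):

```shell
# Spot-check the host prerequisites listed above (illustrative commands).
uname -r                                        # kernel version in use
ls /dev/dri 2>/dev/null || echo "no /dev/dri"   # GPU device nodes appear here when the driver is loaded
docker --version 2>/dev/null || echo "docker not found"
```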

Refer to [Install for XPU](../docs/install/install_for_xpu.md) and [Install for CPU](../docs/install/install_for_cpu.md) for details.

## Binaries Preparation

Download and copy the Intel® Extension for TensorFlow* wheel into the ./models/binaries directory. You can get the intel-extension-for-tensorflow wheel link from https://pypi.org/project/intel-extension-for-tensorflow/#files, and the intel-extension-for-tensorflow-lib wheel link from https://pypi.org/project/intel-extension-for-tensorflow-lib/#files.

To use Intel® Optimization for Horovod* with the Intel® oneAPI Collective Communications Library (oneCCL), copy the Horovod wheel into ./models/binaries as well. You can get the intel-optimization-for-horovod wheel link from https://pypi.org/project/intel-optimization-for-horovod/#files.

```bash
mkdir ./models/binaries
cd ./models/binaries
wget <download link from https://pypi.org/project/intel-extension-for-tensorflow/#files>
wget <download link from https://pypi.org/project/intel-extension-for-tensorflow-lib/#files>
wget <download link from https://pypi.org/project/intel-optimization-for-horovod/#files>
```
## Usage of Docker Container

### I. Customize Build Script

We provide [build.sh](./build.sh) as the Docker container build script. The OS version and some software versions (such as Python and TensorFlow) are hard coded inside the script. If you prefer a different version, you can edit this script.

For example, to build a Docker container with Python 3.10 and TensorFlow 2.13 on an Ubuntu 22.04 layer, update [build.sh](./build.sh) as shown below.
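As a sketch, such an edit might pin versions through variables near the top of the script; the variable names below are illustrative, not necessarily the ones build.sh actually uses, so match them against the script's own definitions:

```shell
# Hypothetical version pins inside build.sh (variable names are illustrative).
UBUNTU_VERSION=22.04
PYTHON=python3.10
TF_VERSION=2.13
echo "Building on Ubuntu ${UBUNTU_VERSION} with ${PYTHON} and TensorFlow ${TF_VERSION}"
```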

To build the Docker container, enter the [docker](./) folder and run the commands below:

```bash
./build.sh [xpu/cpu]
```

### III. Running the Container

Run the following commands to start the Docker container. You can use the `-v` option to mount your local directory into the container. To make the GPU available in the container, attach it using the `--device /dev/dri` option and run the container:

```bash
IMAGE_NAME=intel-extension-for-tensorflow:xpu
docker run -v <your-local-dir>:/workspace \
    ...
    $IMAGE_NAME bash
```

## Verify That Intel GPU is Accessible From TensorFlow

You are inside the container now. Run this command to verify the Intel GPU is visible to TensorFlow:
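The exact command may vary by release; as a sketch, listing TensorFlow's physical devices from Python should show the Intel GPU (with Intel® Extension for TensorFlow* loaded, it typically registers with device type `XPU`):

```python
# Run inside the container. This requires TensorFlow plus Intel® Extension for
# TensorFlow* and a working GPU driver, so it only succeeds on a prepared host.
import tensorflow as tf

# With the extension loaded, the Intel GPU is expected to appear as an "XPU" device.
print(tf.config.list_physical_devices())
```

If no `XPU` device appears in the output, re-check the driver installation and the `--device /dev/dri` option on the `docker run` command.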