
Commit ff904ce

dbkinder and Dboyqiao authored
doc: review of 1.3 docker README (#2320)
Signed-off-by: David B. Kinder <[email protected]>
Co-authored-by: Qiao, Zhefeng <[email protected]>
1 parent c5aa641 commit ff904ce

File tree

1 file changed (+23 −19 lines)


docker/README.md (+23 −19)
@@ -3,31 +3,35 @@ Intel® Extension for TensorFlow* Docker Container Guide
 
 ## Description
 
-This document has instruction for running TensorFlow using Intel® Extension for TensorFlow* in docker container.
+This document has instructions for running TensorFlow using Intel® Extension for TensorFlow* in a Docker container.
 
 Assumptions:
-* Host machine installs Intel GPU.
-* Host machine installs Linux kernel that is compatible with GPU drivers.
-* Host machine has Intel GPU driver.
-* Host machine installs Docker software.
+* Host machine contains an Intel GPU.
+* Host machine uses a Linux kernel that is compatible with GPU drivers.
+* Host machine has a compatible Intel GPU driver installed.
+* Host machine has Docker software installed.
 
 Refer to [Install for XPU](../docs/install/install_for_xpu.md) and [Install for CPU](../docs/install/install_for_cpu.md) for details.
 
 ## Binaries Preparation
 
-Download and copy Intel® Extension for TensorFlow* wheel into ./models/binaries directory.
+Download and copy the Intel® Extension for TensorFlow* wheels into the ./models/binaries directory. You can get the intel-extension-for-tensorflow wheel link from https://pypi.org/project/intel-extension-for-tensorflow/#files, and the intel-extension-for-tensorflow-lib wheel link from https://pypi.org/project/intel-extension-for-tensorflow-lib/#files.
+
+To use Intel® Optimization for Horovod* with the Intel® oneAPI Collective Communications Library (oneCCL), copy the Horovod wheel into ./models/binaries as well. You can get the intel-optimization-for-horovod wheel link from https://pypi.org/project/intel-optimization-for-horovod/#files.
 
 ```
 mkdir ./models/binaries
+cd ./models/binaries
+wget <download link from https://pypi.org/project/intel-extension-for-tensorflow/#files>
+wget <download link from https://pypi.org/project/intel-extension-for-tensorflow-lib/#files>
+wget <download link from https://pypi.org/project/intel-optimization-for-horovod/#files>
 ```
 
-To use Intel® Optimization for Horovod* with the Intel® oneAPI Collective Communications Library (oneCCL), copy Horovod wheel into ./models/binaries as well.
-
 ## Usage of Docker Container
-### I. Customize build script
-[build.sh](./build.sh) is provided as docker container build script. While OS version and some software version (such as Python and TensorFlow) is hard coded inside the script. If you prefer to use newer or later version, you can edit this script.
+### I. Customize Build Script
+We provide [build.sh](./build.sh) as the Docker container build script. The OS version and some software versions (such as Python and TensorFlow) are hard-coded inside the script. If you're using a different version, you can edit this script.
 
-For example, to build docker container with Python 3.10 and TensorFlow 2.13 on Ubuntu 22.04 layer, update [build.sh](./build.sh) as below.
+For example, to build a Docker container with Python 3.10 and TensorFlow 2.13 on an Ubuntu 22.04 layer, update [build.sh](./build.sh) as shown below.
 
 ```bash
 IMAGE_NAME=intel-extension-for-tensorflow:cpu-ubuntu
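The wget commands in the hunk above take URLs pasted by hand from PyPI. As a hypothetical alternative sketch (not part of this commit), `pip download` can resolve and fetch the same wheels itself, assuming pip is available on the host:

```bash
# Hypothetical alternative to the wget steps above: let pip resolve the wheel
# URLs from PyPI. --no-deps keeps the dependency tree out of ./models/binaries.
mkdir -p ./models/binaries
pip download intel-extension-for-tensorflow --no-deps -d ./models/binaries
pip download intel-extension-for-tensorflow-lib --no-deps -d ./models/binaries
pip download intel-optimization-for-horovod --no-deps -d ./models/binaries
```

Note that pip selects a wheel matching the host's Python version and platform, so this shortcut only works when the host and container environments line up; otherwise stick with the explicit wget links.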
@@ -39,16 +43,16 @@ IMAGE_NAME=intel-extension-for-tensorflow:cpu-ubuntu
3943
-f itex-cpu.Dockerfile .
4044
```
4145

42-
### II. Build the container
46+
### II. Build the Container
4347

44-
To build the docker container, enter into [docker](./) folder and run below commands:
48+
To build the Docker container, enter into [docker](./) folder and run below commands:
4549

4650
```bash
4751
./build.sh [xpu/cpu]
4852
```
49-
### III. Running container
53+
### III. Running the Container
5054

51-
Run following commands to start docker container. You can use -v option to mount your local directory into container. To make GPU available in the container, attach the GPU to the container using --device /dev/dri option and run the container:
55+
Run the following commands to start the Docker container. You can use the `-v` option to mount your local directory into the container. To make the GPU available in the container, attach the GPU to the container using `--device /dev/dri` option and run the container:
5256

5357
```bash
5458
IMAGE_NAME=intel-extension-for-tensorflow:xpu
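The middle of the `docker run` invocation is not shown here (the hunks skip unchanged lines, and the next hunk header picks up at `docker run -v <your-local-dir>:/workspace \`). As a hypothetical assembly of only the options the surrounding text mentions, the shape of the command is roughly:

```bash
# Hypothetical assembly from the options described above; the actual README
# may pass additional flags. <your-local-dir> is a placeholder you supply.
IMAGE_NAME=intel-extension-for-tensorflow:xpu
docker run -it \
  -v <your-local-dir>:/workspace \
  --device /dev/dri \
  $IMAGE_NAME bash
```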
@@ -64,13 +68,13 @@ docker run -v <your-local-dir>:/workspace \
6468
$IMAGE_NAME bash
6569
```
6670

67-
## Verify if Intel GPU is accessible from TensorFlow
68-
You are inside container now. Run following command to verify Intel GPU is visible to TensorFlow:
71+
## Verify That Intel GPU is Accessible From TensorFlow
72+
You are inside the container now. Run this command to verify the Intel GPU is visible to TensorFlow:
6973

7074
```
7175
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
7276
```
73-
You should be able to see GPU device in list of devices. Sample output looks like below:
77+
You should see your GPU device in the list of devices. Sample output looks like this:
7478

7579
```
7680
[name: "/device:CPU:0"
@@ -97,4 +101,4 @@ incarnation: 17448926295332318308
97101
physical_device_desc: "device: 1, name: INTEL_XPU, pci bus id: <undefined>"
98102
xla_global_id: -1
99103
]
100-
```
104+
```
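Beyond the full `device_lib` listing the README shows, two shorter sanity checks are possible; a sketch, assuming the plugin registers its devices under TensorFlow's `XPU` device type (as the `INTEL_XPU` entries in the sample output suggest) and that the standard `horovodrun` CLI ships with the Horovod wheel:

```bash
# Hypothetical quick checks, not taken from the README itself.
# List only the pluggable XPU devices (assumes ITEX registers them as "XPU").
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('XPU'))"
# If the Horovod wheel was installed, confirm which frameworks it was built for.
horovodrun --check-build
```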
