
Commit f5bb54c

doc: review of tensorflow serving README (#2321)
Signed-off-by: David B. Kinder <[email protected]>
1 parent feee7bb commit f5bb54c


docker/tensorflow-serving/README.md

Lines changed: 12 additions & 12 deletions
@@ -1,16 +1,16 @@
-# Intel® Extension for TensorFlow* Serving Docker Container Guide
+# Intel® Extension for TensorFlow* Serving - Docker Container Guide

 ## Description

-This document has instruction for running TensorFlow Serving using Intel® Extension for TensorFlow* in docker container.
+This document has instructions for running TensorFlow Serving using Intel® Extension for TensorFlow* in a Docker container.

 ## Build the Docker Image

-To build the docker container, enter into [docker/tensorflow-serving](./) folder and follow the below steps.
+To build the Docker container, enter the [docker/tensorflow-serving](./) folder and follow these steps.

 ### I. Binaries Preparation

-Refer to [Install for Tensorflow Serving](../../docs/guide/tensorflow_serving.md) to build Tensorflow Serving binary, and refer to [Install for CPP](../../docs/install/install_for_cpp.md) to build Intel® Extension for TensorFlow* CC library form source. And then package and copy them into ./models/binaries directory.
+Refer to [Install for TensorFlow Serving](../../docs/guide/tensorflow_serving.md) to build the TensorFlow Serving binary, and refer to [Install for CPP](../../docs/install/install_for_cpp.md) to build the Intel® Extension for TensorFlow* CC library from source. Then package and copy these binaries into the `./models/binaries` directory, as shown below.

 ```bash
 mkdir -p ./models/binaries
@@ -21,29 +21,29 @@ cp -r <path_to_itex>/bazel-out/k8-opt-ST-*/bin/ itex-bazel-bin/
 tar cvfh itex-bazel-bin.tar itex-bazel-bin/
 cp itex-bazel-bin.tar ./models/binaries/

-# Copy Tensorflow Serving binary
-cp path_to_tensorflow_serving/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server ./models/binaries/
+# Copy TensorFlow Serving binary
+cp <path_to_tensorflow_serving>/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server ./models/binaries/

 ```

 ### II. Build the Container

-If you build the container using Intel GPU, make sure you meet below assumptions:
+If you build the container for use with an Intel GPU, make sure you meet these assumptions:

-* Host machine installs Intel GPU.
-* Host machine installs Linux kernel that is compatible with GPU drivers.
-* Host machine has Intel GPU driver.
+* Host machine has an Intel GPU.
+* Host machine uses a Linux kernel that is compatible with GPU drivers.
+* Host machine has a compatible Intel GPU driver installed.

 Refer to [Install for GPU](../docs/install/install_for_xpu.md) for detail.

-Run the [build.sh](./build.sh) to build target docker image.
+Run the [build.sh](./build.sh) script, specifying either `gpu` or `cpu` as appropriate, to build the target Docker image.
 ```bash
 ./build.sh [gpu/cpu]
 ```

 ## Running the Container

-Run following commands to start docker container. You can use `-v` option to mount your local directory into container. To make GPU available in the container, attach the GPU to the container using `--device /dev/dri` option and run the container:
+Run these commands to start the Docker container. You can use the `-v` option to mount your local directory into the container. To make a GPU available in the container, attach the GPU to the container using the `--device /dev/dri` option and run the container:

 ```
 IMAGE_NAME=intel-extension-for-tensorflow:serving-gpu
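
Note: the diff hunk ends at the `IMAGE_NAME` line, so the rest of the run command is not shown in this commit. As context only, a minimal sketch consistent with the paragraph above might look like the following; the mounted path and published ports are assumptions for illustration, not content from the commit — only the image name and the `--device /dev/dri` option come from the diff.

```bash
# A minimal sketch, assuming the model directory is mounted at /models and
# the standard TensorFlow Serving ports (8500 gRPC, 8501 REST) are published.
IMAGE_NAME=intel-extension-for-tensorflow:serving-gpu

docker run -it --rm \
  --device /dev/dri \
  -v <path_to_models>:/models \
  -p 8500:8500 -p 8501:8501 \
  $IMAGE_NAME
```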
