
Commit 55c3bab

refactor: Last trtorch references

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>

1 parent c2bee87 commit 55c3bab

File tree: 5 files changed, +26 -27 lines changed

examples/custom_converters/README.md (+9 -9)

@@ -16,13 +16,13 @@ Note that ELU converter is now supported in our library. If you want to get abov
 error and run the example in this document, you can either:
 1. get the source code, go to root directory, then run: <br />
 `git apply ./examples/custom_converters/elu_converter/disable_core_elu.patch`
-2. If you are using a pre-downloaded release of TRTorch, you need to make sure that
-it doesn't support elu operator in default. (TRTorch <= v0.1.0)
+2. If you are using a pre-downloaded release of Torch-TensorRT, you need to make sure that
+it doesn't support elu operator in default. (Torch-TensorRT <= v0.1.0)
 
 ## Writing Converter in C++
 We can register a converter for this operator in our application. You can find more
 information on all the details of writing converters in the contributors documentation
-([Writing Converters](https://nvidia.github.io/TRTorch/contributors/writing_converters.html)).
+([Writing Converters](https://nvidia.github.io/Torch-TensorRT/contributors/writing_converters.html)).
 Once we are clear about these rules and writing patterns, we can create a seperate new C++ source file as:
 
 ```c++
@@ -66,7 +66,7 @@ from torch.utils import cpp_extension
 
 
 # library_dirs should point to the libtorch_tensorrt.so, include_dirs should point to the dir that include the headers
-# 1) download the latest package from https://github.com/NVIDIA/TRTorch/releases/
+# 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
 # 2) Extract the file from downloaded package, we will get the "torch_tensorrt" directory
 # 3) Set torch_tensorrt_path to that directory
 torch_tensorrt_path = <PATH TO TRTORCH>
@@ -87,7 +87,7 @@ setup(
 ```
 Make sure to include the path for header files in `include_dirs` and the path
 for dependent libraries in `library_dirs`. Generally speaking, you should download
-the latest package from [here](https://github.com/NVIDIA/TRTorch/releases), extract
+the latest package from [here](https://github.com/NVIDIA/Torch-TensorRT/releases), extract
 the files, and the set the `torch_tensorrt_path` to it. You could also add other compilation
 flags in cpp_extension if you need. Then, run above python scripts as:
 ```shell
@@ -99,7 +99,7 @@ by the command above. In build folder, you can find the generated `.so` library,
 which could be loaded in our Python application.
 
 ## Load `.so` in Python Application
-With the new generated library, TRTorch now support the new developed converter.
+With the new generated library, Torch-TensorRT now support the new developed converter.
 We use `torch.ops.load_library` to load `.so`. For example, we could load the ELU
 converter and use it in our application:
 ```python
@@ -124,7 +124,7 @@ def cal_max_diff(pytorch_out, torch_tensorrt_out):
     diff = torch.sub(pytorch_out, torch_tensorrt_out)
     abs_diff = torch.abs(diff)
     max_diff = torch.max(abs_diff)
-    print("Maximum differnce between TRTorch and PyTorch: \n", max_diff)
+    print("Maximum differnce between Torch-TensorRT and PyTorch: \n", max_diff)
 
 
 def main():
@@ -146,12 +146,12 @@ def main():
 
     torch_tensorrt_out = trt_ts_module(input_data)
     print('PyTorch output: \n', pytorch_out[0, :, :, 0])
-    print('TRTorch output: \n', torch_tensorrt_out[0, :, :, 0])
+    print('Torch-TensorRT output: \n', torch_tensorrt_out[0, :, :, 0])
     cal_max_diff(pytorch_out, torch_tensorrt_out)
 
 
 if __name__ == "__main__":
     main()
 
 ```
-Run this script, we can get the different outputs from PyTorch and TRTorch.
+Run this script, we can get the different outputs from PyTorch and Torch-TensorRT.
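The README change above leaves the workflow itself intact: build the converter as a cpp_extension, load the resulting `.so`, then compile as usual. A minimal sketch of that load-and-compile flow; the library path, toy module, and input shape are assumptions for illustration, not values from the example:

```python
import torch
import torch_tensorrt

# Register the custom ELU converter by loading the built library.
# The path is hypothetical; point it at the .so produced in your build folder.
torch.ops.load_library("./build/lib/elu_converter.so")

# A toy module that routes through the ELU converter.
class EluModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.elu(x)

model = torch.jit.script(EluModel().eval().cuda())
trt_ts_module = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input([1, 3, 5, 5])],
    enabled_precisions={torch.float},
)

input_data = torch.randn(1, 3, 5, 5).cuda()
print('Torch-TensorRT output: \n', trt_ts_module(input_data)[0, :, :, 0])
```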

examples/int8/ptq/README.md (+1 -1)

@@ -161,7 +161,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/deps/torch_tensorrt/lib:$(pwd)/de
 
 2) Build and run `ptq`
 
-We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where Torch-TensorRT is located `<path_to_TRTORCH>`.
+We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where Torch-TensorRT is located `<path_to_torch_tensorrt>`.
 
 By default it is set to `../../../`. If your Torch-TensorRT directory structure is different, please set `ROOT_DIR` accordingly.

examples/int8/training/vgg16/test_qat.py (+3 -4)

@@ -79,13 +79,12 @@ def test(model, dataloader, crit):
 print("[JIT] Test Loss: {:.5f} Test Acc: {:.2f}%".format(test_loss, 100 * test_acc))
 
 import torch_tensorrt as torchtrt
-# trtorch.logging.set_reportable_log_level(trtorch.logging.Level.Debug)
 compile_settings = {
-    "inputs": [torchtrt.Input([1, 3, 32, 32])],
-    "enabled_precisions": {torch.float, torch.half, torch.int8} # Run with FP16
+    "inputs": [torchtrt.Input([1, 3, 32, 32])],
+    "enabled_precisions": {torch.float, torch.half, torch.int8} # Run with FP16
 }
 new_mod = torch.jit.load('trained_vgg16_qat.jit.pt')
-trt_ts_module = torchtrt.compile(new_mod, compile_settings)
+trt_ts_module = torchtrt.compile(new_mod, **compile_settings)
 testing_dataloader = torch.utils.data.DataLoader(testing_dataset, batch_size=1, shuffle=False, num_workers=2)
 test_loss, test_acc = test(trt_ts_module, testing_dataloader, crit)
 print("[TRTorch] Test Loss: {:.5f} Test Acc: {:.2f}%".format(test_loss, 100 * test_acc))

examples/torchtrt_runtime_example/network.py (+2 -2)

@@ -3,7 +3,7 @@
 import torch_tensorrt as torchtrt
 
 # create a simple norm layer.
-# This norm layer uses NormalizePlugin from TRTorch
+# This norm layer uses NormalizePlugin from Torch-TensorRT
 class Norm(torch.nn.Module):
     def __init__(self):
         super(Norm, self).__init__()
@@ -12,7 +12,7 @@ def forward(self, x):
         return torch.norm(x, 2, None, False)
 
 # Create a sample network with a conv and gelu node.
-# Gelu layer in TRTorch is converted to CustomGeluPluginDynamic from TensorRT plugin registry.
+# Gelu layer in Torch-TensorRT is converted to CustomGeluPluginDynamic from TensorRT plugin registry.
 class ConvGelu(torch.nn.Module):
     def __init__(self):
         super(ConvGelu, self).__init__()
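For context on where these renamed comments sit, here is a sketch of how a network like `ConvGelu` might be compiled and serialized for the runtime example; the layer parameters, input shape, and output file name are assumptions, not the example's actual values:

```python
import torch
import torch_tensorrt as torchtrt

class ConvGelu(torch.nn.Module):
    def __init__(self):
        super(ConvGelu, self).__init__()
        # Illustrative layer parameters only.
        self.conv = torch.nn.Conv2d(3, 32, 3, 1)
        self.gelu = torch.nn.GELU()

    def forward(self, x):
        return self.gelu(self.conv(x))

model = torch.jit.script(ConvGelu().eval().cuda())
trt_module = torchtrt.compile(
    model,
    inputs=[torchtrt.Input([1, 3, 224, 224])],
    enabled_precisions={torch.float},
)
# The saved TorchScript program can later be executed against just the
# Torch-TensorRT runtime library, which is the point of this example.
torch.jit.save(trt_module, "conv_gelu.jit")
```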

py/BUILD (+11 -11)

@@ -1,23 +1,23 @@
 package(default_visibility = ["//visibility:public"])
 
-load("@trtorch_py_deps//:requirements.bzl", "requirement")
+load("@torch_tensorrt_py_deps//:requirements.bzl", "requirement")
 
 # Exposes the library for testing
 py_library(
-    name = "trtorch",
+    name = "torch_tensorrt",
     srcs = [
-        "trtorch/__init__.py",
-        "trtorch/_compile_spec.py",
-        "trtorch/_compiler.py",
-        "trtorch/_types.py",
-        "trtorch/_version.py",
-        "trtorch/logging.py",
-        "trtorch/ptq.py",
+        "torch_tensorrt/__init__.py",
+        "torch_tensorrt/_compile_spec.py",
+        "torch_tensorrt/_compiler.py",
+        "torch_tensorrt/_types.py",
+        "torch_tensorrt/_version.py",
+        "torch_tensorrt/logging.py",
+        "torch_tensorrt/ptq.py",
     ],
     data = [
-        "trtorch/lib/libtrtorch.so",
+        "torch_tensorrt/lib/libtrtorch.so",
     ] + glob([
-        "trtorch/_C.cpython*.so",
+        "torch_tensorrt/_C.cpython*.so",
    ]),
     deps = [
         requirement("torch"),
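After this rename, downstream code imports `torch_tensorrt` rather than `trtorch`. A minimal smoke test of the renamed package; the debug-logging call mirrors the commented-out `trtorch.logging` line removed from test_qat.py above:

```python
import torch_tensorrt

# The renamed package exposes the modules the BUILD file lists
# (_version.py, logging.py, ...).
print(torch_tensorrt.__version__)
torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Debug)
```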
