@@ -16,13 +16,13 @@ Note that ELU converter is now supported in our library. If you want to get abov
error and run the example in this document, you can either:
1. get the source code, go to the root directory, then run: <br />
`git apply ./examples/custom_converters/elu_converter/disable_core_elu.patch`
- 2. If you are using a pre-downloaded release of TRTorch, you need to make sure that
- it doesn't support elu operator in default. (TRTorch <= v0.1.0)
+ 2. If you are using a pre-downloaded release of Torch-TensorRT, you need to make sure that
+ it doesn't support the elu operator by default (Torch-TensorRT <= v0.1.0).

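For context, the activation this converter implements maps x to x for x > 0 and to alpha * (exp(x) - 1) otherwise, with alpha defaulting to 1.0 as in `torch.nn.ELU`. A minimal pure-Python sketch of that reference behavior (the helper name `elu` is chosen here for illustration):

```python
import math

# Scalar reference for ELU: identity on positive inputs,
# alpha * (exp(x) - 1) on non-positive inputs (alpha = 1.0 by default,
# matching torch.nn.ELU).
def elu(x, alpha=1.0):
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

print(elu(2.0))   # positive inputs pass through unchanged -> 2.0
print(elu(-1.0))  # negative inputs are squashed toward -alpha
```

This is only the elementwise math; the converter's job is to map the TorchScript node for this operator onto the corresponding TensorRT layer.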
## Writing Converter in C++
We can register a converter for this operator in our application. You can find more
information on all the details of writing converters in the contributors documentation
- ([Writing Converters](https://nvidia.github.io/TRTorch/contributors/writing_converters.html)).
+ ([Writing Converters](https://nvidia.github.io/Torch-TensorRT/contributors/writing_converters.html)).
Once we are clear about these rules and writing patterns, we can create a separate new C++ source file as:

```c++
@@ -66,7 +66,7 @@ from torch.utils import cpp_extension

# library_dirs should point to the libtorch_tensorrt.so, include_dirs should point to the dir that includes the headers
- # 1) download the latest package from https://github.com/NVIDIA/TRTorch/releases/
+ # 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
# 2) Extract the files from the downloaded package; we will get the "torch_tensorrt" directory
# 3) Set torch_tensorrt_path to that directory
torch_tensorrt_path = <PATH TO TRTORCH>
```
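For readers filling in the elided lines of the snippet above, a complete `cpp_extension` setup script might look like the following sketch. This is a build-configuration fragment, not runnable without a Torch-TensorRT installation, and the extension name `elu_converter`, the source file path, and the `torch_tensorrt` library name are assumptions, not taken from this diff:

```python
import os
from setuptools import setup
from torch.utils import cpp_extension

# Assumed path to the extracted Torch-TensorRT release package
torch_tensorrt_path = "<PATH TO TRTORCH>"

ext_modules = [
    cpp_extension.CppExtension(
        "elu_converter",                   # assumed extension name
        ["./csrc/elu_converter.cpp"],      # assumed converter source file
        # library_dirs points at the dir containing libtorch_tensorrt.so
        library_dirs=[os.path.join(torch_tensorrt_path, "lib")],
        libraries=["torch_tensorrt"],
        # include_dirs points at the dir containing the headers
        include_dirs=[os.path.join(torch_tensorrt_path, "include")],
    )
]

setup(
    name="elu_converter",
    ext_modules=ext_modules,
    cmdclass={"build_ext": cpp_extension.BuildExtension},
)
```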
Make sure to include the path for header files in `include_dirs` and the path
for dependent libraries in `library_dirs`. Generally speaking, you should download
- the latest package from [here](https://github.com/NVIDIA/TRTorch/releases), extract
+ the latest package from [here](https://github.com/NVIDIA/Torch-TensorRT/releases), extract
the files, and then set the `torch_tensorrt_path` to it. You can also add other compilation
flags in cpp_extension if you need them. Then, run the above Python script as:
```shell
@@ -99,7 +99,7 @@ by the command above. In build folder, you can find the generated `.so` library,
which could be loaded in our Python application.

## Load `.so` in Python Application
- With the new generated library, TRTorch now support the new developed converter.
+ With the newly generated library, Torch-TensorRT now supports the newly developed converter.
We use `torch.ops.load_library` to load the `.so`. For example, we could load the ELU
converter and use it in our application:
```python
@@ -124,7 +124,7 @@ def cal_max_diff(pytorch_out, torch_tensorrt_out):
diff = torch.sub(pytorch_out, torch_tensorrt_out)
abs_diff = torch.abs(diff)
max_diff = torch.max(abs_diff)
- print("Maximum differnce between TRTorch and PyTorch:\n", max_diff)
+ print("Maximum difference between Torch-TensorRT and PyTorch:\n", max_diff)


def main():
@@ -146,12 +146,12 @@ def main():

torch_tensorrt_out = trt_ts_module(input_data)
print('PyTorch output:\n', pytorch_out[0, :, :, 0])
- print('TRTorch output:\n', torch_tensorrt_out[0, :, :, 0])
+ print('Torch-TensorRT output:\n', torch_tensorrt_out[0, :, :, 0])
cal_max_diff(pytorch_out, torch_tensorrt_out)


if __name__ == "__main__":
    main()

```
- Run this script, we can get the different outputs from PyTorch and TRTorch.
+ Running this script, we can compare the outputs from PyTorch and Torch-TensorRT.
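The comparison performed by `cal_max_diff` boils down to an elementwise maximum absolute difference. A torch-free sketch of the same computation over flat data (the helper name `max_abs_diff` is illustrative):

```python
# Torch-free equivalent of torch.max(torch.abs(torch.sub(a, b)))
# over flat sequences of numbers.
def max_abs_diff(pytorch_out, torch_tensorrt_out):
    return max(abs(p - t) for p, t in zip(pytorch_out, torch_tensorrt_out))

print("Maximum difference:", max_abs_diff([1.0, 2.0, 3.0], [1.0, 2.5, 2.9]))  # -> 0.5
```

A small reported maximum difference indicates the custom converter produces results numerically close to PyTorch's own ELU.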