Commit c2bee87

refactor: rename the last bits of the old name
Signed-off-by: Naren Dasan <[email protected]>
1 parent 3a98a8b commit c2bee87

File tree

18 files changed (+34, -180 lines)

examples/custom_converters/README.md

+3-3
@@ -1,16 +1,16 @@
 # Create a new op in C++, compile it to .so library and load it in Python

-There are some operators in PyTorch library which are not supported in TRTorch.
+There are some operators in PyTorch library which are not supported in Torch-TensorRT.
 To support these ops, users can register converters for missing ops. For example,
-if we try to compile a graph with a build of TRTorch that doesn't support the
+if we try to compile a graph with a build of Torch-TensorRT that doesn't support the
 [ELU](https://pytorch.org/docs/stable/generated/torch.nn.ELU.html) operation,
 we will get following error:

 > Unable to convert node: %result.2 : Tensor = aten::elu(%x.1, %2, %3, %3) # /home/bowa/.local/lib/python3.6/site-packages/torch/nn/functional.py:1227:17 (conversion.AddLayer)
 Schema: aten::elu(Tensor self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor)
 Converter for aten::elu requested, but no such converter was found.
 If you need a converter for this operator, you can try implementing one yourself
-or request a converter: https://www.github.com/NVIDIA/TRTorch/issues
+or request a converter: https://www.github.com/NVIDIA/Torch-TensorRT/issues

 Note that ELU converter is now supported in our library. If you want to get above
 error and run the example in this document, you can either:
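To make the failure mode concrete, here is a minimal, hypothetical repro sketch in Python (the module, input shape, and precision are illustrative and not part of this commit; only a Torch-TensorRT build without the aten::elu converter raises the quoted error):

```python
import torch
import torch_tensorrt as torchtrt

class EluModule(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.elu(x)

scripted = torch.jit.script(EluModule().eval().cuda())

# On a build lacking the aten::elu converter, this call fails with
# "Converter for aten::elu requested, but no such converter was found";
# on current builds it compiles successfully.
trt_mod = torchtrt.compile(scripted,
                           inputs=[torchtrt.Input([1, 3, 32, 32])],
                           enabled_precisions={torch.float})
```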

examples/custom_converters/elu_converter/setup.py

+5-5
@@ -4,16 +4,16 @@


 # library_dirs should point to the libtrtorch.so, include_dirs should point to the dir that include the headers
-# 1) download the latest package from https://github.com/NVIDIA/TRTorch/releases/
+# 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
 # 2) Extract the file from downloaded package, we will get the "trtorch" directory
 # 3) Set trtorch_path to that directory
-trtorch_path = <PATH TO TRTORCH>
+torchtrt_path = <PATH TO TORCHTRT>

 ext_modules = [
     cpp_extension.CUDAExtension('elu_converter', ['./csrc/elu_converter.cpp'],
-                                library_dirs=[(trtorch_path + "/lib/")],
-                                libraries=["trtorch"],
-                                include_dirs=[trtorch_path + "/include/trtorch/"])
+                                library_dirs=[(torchtrt_path + "/lib/")],
+                                libraries=["torchtrt"],
+                                include_dirs=[torchtrt_path + "/include/torch_tensorrt/"])
 ]

 setup(
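Once the extension is built (for example with `python3 setup.py install`), loading it is what registers the converter. A hedged usage sketch, where the import name comes from the `CUDAExtension` above and `scripted_elu_model` is a placeholder for a TorchScript module that uses ELU:

```python
import torch
import torch_tensorrt as torchtrt

# Importing the built extension loads the .so and registers the custom
# aten::elu converter before any compile call.
import elu_converter  # noqa: F401

# `scripted_elu_model` stands in for a torch.jit.ScriptModule using ELU.
trt_mod = torchtrt.compile(scripted_elu_model,
                           inputs=[torchtrt.Input([1, 3, 32, 32])])
```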

examples/int8/ptq/Makefile

+1-1
@@ -1,7 +1,7 @@
 CXX=g++
 DEP_DIR=$(PWD)/deps
 CUDA_VERSION?=11.1
-ROOT_DIR?="../../../" # path to TRTorch directory (including TRTorch)
+ROOT_DIR?="../../../" # path to Torch-TensorRT directory (including Torch-TensorRT)
 INCLUDE_DIRS=-I$(DEP_DIR)/libtorch/include -I$(ROOT_DIR) -I$(DEP_DIR)/torch_tensorrt/include -I$(DEP_DIR)/libtorch/include/torch/csrc/api/include/ -I/usr/local/cuda-$(CUDA_VERSION)/include -I$(DEP_DIR)/tensorrt/include
 LIB_DIRS=-L$(DEP_DIR)/torch_tensorrt/lib -L$(DEP_DIR)/libtorch/lib -L/usr/local/cuda-$(CUDA_VERSION)/lib64
 LIBS=-ltorchtrt -ltorch -ltorch_cuda -ltorch_cpu -ltorch_global_deps -lbackend_with_compiler -lc10 -lc10_cuda -lpthread -lcudart

examples/int8/ptq/README.md

+7-7
@@ -4,9 +4,9 @@

 Post Training Quantization (PTQ) is a technique to reduce the required computational resources for inference while still preserving the accuracy of your model by mapping the traditional FP32 activation space to a reduced INT8 space. TensorRT uses a calibration step which executes your model with sample data from the target domain and track the activations in FP32 to calibrate a mapping to INT8 that minimizes the information loss between FP32 inference and INT8 inference.

-Users writing TensorRT applications are required to setup a calibrator class which will provide sample data to the TensorRT calibrator. With TRTorch we look to leverage existing infrastructure in PyTorch to make implementing calibrators easier.
+Users writing TensorRT applications are required to set up a calibrator class which will provide sample data to the TensorRT calibrator. With Torch-TensorRT we look to leverage existing infrastructure in PyTorch to make implementing calibrators easier.

-LibTorch provides a `Dataloader` and `Dataset` API which steamlines preprocessing and batching input data. TRTorch uses Dataloaders as the base of a generic calibrator implementation. So you will be able to reuse or quickly implement a `torch::Dataset` for your target domain, place it in a Dataloader and create a INT8 Calibrator from it which you can provide to TRTorch to run INT8 Calibration during compliation of your module.
+LibTorch provides a `Dataloader` and `Dataset` API which streamlines preprocessing and batching input data. Torch-TensorRT uses Dataloaders as the base of a generic calibrator implementation, so you will be able to reuse or quickly implement a `torch::Dataset` for your target domain, place it in a Dataloader and create an INT8 Calibrator from it which you can provide to Torch-TensorRT to run INT8 Calibration during compilation of your module.

 ### Code

@@ -115,7 +115,7 @@ From here not much changes in terms of how to execution works. You are still abl

 ## Running the Example Application

-This is a short example application that shows how to use TRTorch to perform post-training quantization for a module.
+This is a short example application that shows how to use Torch-TensorRT to perform post-training quantization for a module.

 ## Prerequisites

@@ -139,11 +139,11 @@ This will build a binary named `ptq` in `bazel-out/k8-<opt|dbg>/bin/cpp/int8/ptq

 ## Compilation using Makefile

-1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/NVIDIA/TRTorch/releases">TRTorch </a>and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory.
+1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/NVIDIA/Torch-TensorRT/releases">Torch-TensorRT</a> and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory.

 ```sh
 cd examples/torch_tensorrtrt_example/deps
-# Download latest TRTorch release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/TRTorch/releases
+# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
 tar -xvzf libtorch_tensorrt.tar.gz
 # unzip libtorch downloaded from pytorch.org
 unzip libtorch.zip
@@ -161,9 +161,9 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/deps/torch_tensorrt/lib:$(pwd)/de

 2) Build and run `ptq`

-We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where TRTorch is located `<path_to_TRTORCH>`.
+We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where Torch-TensorRT is located `<path_to_TRTORCH>`.

-By default it is set to `../../../`. If your TRTorch directory structure is different, please set `ROOT_DIR` accordingly.
+By default it is set to `../../../`. If your Torch-TensorRT directory structure is different, please set `ROOT_DIR` accordingly.

 ```sh
 cd examples/int8/ptq
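The C++ walkthrough above has a Python counterpart. Below is a minimal sketch, assuming the `torch_tensorrt.ptq.DataLoaderCalibrator` API matches its documented form; `my_dataset` and `scripted_model` are placeholders for your own dataset and TorchScript module:

```python
import torch
import torch_tensorrt as torchtrt

# Any torch.utils.data.DataLoader over target-domain samples works here.
calibration_dataloader = torch.utils.data.DataLoader(my_dataset, batch_size=32)

calibrator = torchtrt.ptq.DataLoaderCalibrator(
    calibration_dataloader,
    cache_file="./calibration.cache",
    use_cache=False,
    algo_type=torchtrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
    device=torch.device("cuda:0"))

trt_mod = torchtrt.compile(
    scripted_model,
    inputs=[torchtrt.Input([1, 3, 32, 32])],
    enabled_precisions={torch.int8},
    calibrator=calibrator)
```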

examples/int8/qat/Makefile

+1-1
@@ -1,7 +1,7 @@
 CXX=g++
 DEP_DIR=$(PWD)/deps
 CUDA_VERSION?=11.1
-ROOT_DIR?="../../../" # path to TRTorch directory (including TRTorch)
+ROOT_DIR?="../../../" # path to Torch-TensorRT directory (including Torch-TensorRT)
 INCLUDE_DIRS=-I$(DEP_DIR)/libtorch/include -I$(ROOT_DIR) -I$(DEP_DIR)/torch_tensorrt/include -I$(DEP_DIR)/libtorch/include/torch/csrc/api/include/ -I/usr/local/cuda-$(CUDA_VERSION)/include -I$(DEP_DIR)/tensorrt/include
 LIB_DIRS=-L$(DEP_DIR)/torch_tensorrt/lib -L$(DEP_DIR)/libtorch/lib -L/usr/local/cuda-$(CUDA_VERSION)/lib64
 LIBS=-ltorchtrt -ltorch -ltorch_cuda -ltorch_cpu -ltorch_global_deps -lbackend_with_compiler -lc10 -lc10_cuda -lpthread -lcudart

examples/int8/qat/README.md

+1-1
@@ -55,7 +55,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/deps/torch_tensorrt/lib:$(pwd)/de

 2) Build and run `qat`

-We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where Torch-TensorRT is located `<path_to_TRTORCH>`.
+We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where Torch-TensorRT is located `<path_to_torch_tensorrt>`.

 By default it is set to `../../../`. If your Torch-TensorRT directory structure is different, please set `ROOT_DIR` accordingly.

examples/int8/training/vgg16/finetune_qat.py

+1-1
@@ -21,7 +21,7 @@

 from vgg16 import vgg16

-PARSER = argparse.ArgumentParser(description="VGG16 example to use with TRTorch PTQ")
+PARSER = argparse.ArgumentParser(description="VGG16 example to use with Torch-TensorRT PTQ")
 PARSER.add_argument('--epochs', default=100, type=int, help="Number of total epochs to train")
 PARSER.add_argument('--enable_qat', action="store_true", help="Enable quantization aware training. This is recommended to perform on a pre-trained model.")
 PARSER.add_argument('--batch-size', default=128, type=int, help="Batch size to use when training")

examples/int8/training/vgg16/main.py

+1-1
@@ -15,7 +15,7 @@

 from vgg16 import vgg16

-PARSER = argparse.ArgumentParser(description="VGG16 example to use with TRTorch PTQ")
+PARSER = argparse.ArgumentParser(description="VGG16 example to use with Torch-TensorRT PTQ")
 PARSER.add_argument('--epochs', default=100, type=int, help="Number of total epochs to train")
 PARSER.add_argument('--batch-size', default=128, type=int, help="Batch size to use when training")
 PARSER.add_argument('--lr', default=0.1, type=float, help="Initial learning rate")

examples/int8/training/vgg16/test_qat.py

+4-4
@@ -78,14 +78,14 @@ def test(model, dataloader, crit):
 test_loss, test_acc = test(jit_model, testing_dataloader, crit)
 print("[JIT] Test Loss: {:.5f} Test Acc: {:.2f}%".format(test_loss, 100 * test_acc))

-import trtorch
+import torch_tensorrt as torchtrt
 # trtorch.logging.set_reportable_log_level(trtorch.logging.Level.Debug)
 compile_settings = {
-    "inputs": [trtorch.Input([1, 3, 32, 32])],
-    "op_precision": torch.int8 # Run with FP16
+    "inputs": [torchtrt.Input([1, 3, 32, 32])],
+    "enabled_precisions": {torch.float, torch.half, torch.int8} # Run with FP32/FP16/INT8
 }
 new_mod = torch.jit.load('trained_vgg16_qat.jit.pt')
-trt_ts_module = trtorch.compile(new_mod, compile_settings)
+trt_ts_module = torchtrt.compile(new_mod, compile_settings)
 testing_dataloader = torch.utils.data.DataLoader(testing_dataset, batch_size=1, shuffle=False, num_workers=2)
 test_loss, test_acc = test(trt_ts_module, testing_dataloader, crit)
 print("[TRTorch] Test Loss: {:.5f} Test Acc: {:.2f}%".format(test_loss, 100 * test_acc))
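A side note for readers on newer releases: the dict-style `compile_settings` above carries over the trtorch calling convention. Assuming the post-rename keyword-argument signature of `torch_tensorrt.compile`, the equivalent call would look like this sketch:

```python
import torch
import torch_tensorrt as torchtrt

new_mod = torch.jit.load('trained_vgg16_qat.jit.pt')
# Keyword-argument form of the compile call in the diff above.
trt_ts_module = torchtrt.compile(
    new_mod,
    inputs=[torchtrt.Input([1, 3, 32, 32])],
    enabled_precisions={torch.float, torch.half, torch.int8})
```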

examples/torchtrt_runtime_example/network.py

+4-4
@@ -1,6 +1,6 @@
 import torch
 import torch.nn as nn
-import trtorch
+import torch_tensorrt as torchtrt

 # create a simple norm layer.
 # This norm layer uses NormalizePlugin from TRTorch
@@ -30,16 +30,16 @@ def main():
     scripted_model = torch.jit.script(model)

     compile_settings = {
-        "inputs": [trtorch.Input([1, 3, 5, 5])],
+        "inputs": [torchtrt.Input([1, 3, 5, 5])],
         "enabled_precisions": {torch.float32}
     }

-    trt_ts_module = trtorch.compile(scripted_model, compile_settings)
+    trt_ts_module = torchtrt.compile(scripted_model, compile_settings)
     torch.jit.save(trt_ts_module, 'conv_gelu.jit')

     norm_model = Norm().eval().cuda()
     norm_ts_module = torch.jit.script(norm_model)
-    norm_trt_ts = trtorch.compile(norm_ts_module, compile_settings)
+    norm_trt_ts = torchtrt.compile(norm_ts_module, compile_settings)
     torch.jit.save(norm_trt_ts, 'norm.jit')
     print("Generated Torchscript-TRT models.")
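The point of this runtime example is deployment: the saved TorchScript modules can execute with only the slim Torch-TensorRT runtime, without the full compiler. A hedged loading sketch follows; the runtime library name is taken from the project's runtime docs and may differ per install:

```python
import torch

# Load only the runtime library so the serialized TRT engines can execute;
# torch_tensorrt itself is not imported here.
torch.ops.load_library("libtorchtrt_runtime.so")

trt_ts_module = torch.jit.load("conv_gelu.jit")
out = trt_ts_module(torch.randn(1, 3, 5, 5).cuda())
```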

py/torch_tensorrt/__init__.py

+1-1
@@ -2,7 +2,7 @@
 import sys

 if sys.version_info < (3,):
-    raise Exception("Python 2 has reached end-of-life and is not supported by TRTorch")
+    raise Exception("Python 2 has reached end-of-life and is not supported by Torch-TensorRT")

 import ctypes
 import torch

py/torch_tensorrt/csrc/tensorrt_backend.cpp

+1-1
@@ -66,7 +66,7 @@ c10::IValue preprocess(
   for (auto it = method_compile_spec.begin(), end = method_compile_spec.end(); it != end; ++it) {
     TORCHTRT_CHECK(
         core::CheckMethodOperatorSupport(mod, it->key().toStringRef()),
-        "Method " << it->key().toStringRef() << "cannot be compiled by TRTorch");
+        "Method " << it->key().toStringRef() << " cannot be compiled by Torch-TensorRT");
   }
   return mod._ivalue();
 };

py/torch_tensorrt/logging.py

+3-3
@@ -53,7 +53,7 @@ def get_reportable_log_level() -> Level:
     """Get the level required for a message to be printed in the log

     Returns:
-        trtorch.logging.Level: The enum representing the level required to print
+        torch_tensorrt.logging.Level: The enum representing the level required to print
     """
     return Level(_get_reportable_log_level())

@@ -62,7 +62,7 @@ def set_reportable_log_level(level: Level):
     """Set the level required for a message to be printed to the log

     Args:
-        level (trtorch.logging.Level): The enum representing the level required to print
+        level (torch_tensorrt.logging.Level): The enum representing the level required to print
     """
     _set_reportable_log_level(Level._to_internal_level(level))

@@ -92,7 +92,7 @@ def log(level: Level, msg: str):
     will only get printed out if Level > reportable_log_level

     Args:
-        level (trtorch.logging.Level): Severity of the message
+        level (torch_tensorrt.logging.Level): Severity of the message
         msg (str): Actual message text
     """
     _log(Level._to_internal_level(level), msg)
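A short usage sketch of the functions touched here, using the same `torch_tensorrt` import alias as the examples above:

```python
import torch_tensorrt as torchtrt

# Raise verbosity before compiling, then emit a message through the
# same logger; both functions appear in the diff above.
torchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Debug)
torchtrt.logging.log(torchtrt.logging.Level.Info, "Starting compilation")
```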

tests/modules/hub_rnn_transformer.py

-48
This file was deleted.

tests/modules/lstm_scripted.ts

-6.92 KB
Binary file not shown.

tests/modules/lstm_test.py

-53
This file was deleted.

tests/modules/test_multithreaded.cpp

-45
This file was deleted.

tests/util/evaluate_graph.cpp

+1-1
@@ -13,7 +13,7 @@ namespace tests {
 namespace util {

 std::vector<torch::jit::IValue> EvaluateGraph(const torch::jit::Block* b, std::vector<torch::jit::IValue> inputs) {
-  LOG_DEBUG("Running TRTorch Version");
+  LOG_DEBUG("Running Torch-TensorRT Version");

   core::conversion::ConversionCtx* ctx = new core::conversion::ConversionCtx({});
