
Commit 7ab76bc

chore: Docs for v1.0.0
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
1 parent 8694038 commit 7ab76bc


343 files changed (+104547, -59 lines)


CHANGELOG.md

Lines changed: 216 additions & 0 deletions
@@ -576,4 +576,220 @@ Signed-off-by: Naren Dasan <[email protected]>
* Move some lowering passes to graph level logging ([0266f41](https://github.com/NVIDIA/TRTorch/commit/0266f41))
* **//py:** Fix trtorch.Device alternate constructor options ([ac26841](https://github.com/NVIDIA/TRTorch/commit/ac26841))


# 1.0.0 (2021-11-09)


### Bug Fixes

* aten::gelu call was wrong in test ([40bc4e3](https://github.com/NVIDIA/TRTorch/commit/40bc4e3))
* Fix a core partitioning algo bug where non-tensor input segments are not updated correctly ([cc10876](https://github.com/NVIDIA/TRTorch/commit/cc10876))
* Fix modules_as_engines test case to use trt_mod instead of pyt_mod ([282e98a](https://github.com/NVIDIA/TRTorch/commit/282e98a))
* Fix plugin registration macro ([8afab22](https://github.com/NVIDIA/TRTorch/commit/8afab22))
* Fix python API tests for mobilenet v2 ([e5a38ff](https://github.com/NVIDIA/TRTorch/commit/e5a38ff))
* Partial compilation translation to internal settings was incorrect ([648bad3](https://github.com/NVIDIA/TRTorch/commit/648bad3))
* **//py:** Don't crash harshly on import when CUDA is not available ([07e16fd](https://github.com/NVIDIA/TRTorch/commit/07e16fd))
* Re-enable backtrace and make it less repetitive ([1435845](https://github.com/NVIDIA/TRTorch/commit/1435845))
* **//core/lowering:** Fixes module level fallback recursion ([f94ae8f](https://github.com/NVIDIA/TRTorch/commit/f94ae8f))
* **//core/partitioning:** Fixing support for partially compiling ([748ecf3](https://github.com/NVIDIA/TRTorch/commit/748ecf3))
* **//docker:** Update docker container build script to use release path ([9982855](https://github.com/NVIDIA/TRTorch/commit/9982855))
* **//py:** Add new dirs to remove during clean ([d2cc1e9](https://github.com/NVIDIA/TRTorch/commit/d2cc1e9))
* **//py:** Fix some api import issues ([840ca89](https://github.com/NVIDIA/TRTorch/commit/840ca89))
* **//py:** Fix trtorch.Device alternate constructor options ([fa08311](https://github.com/NVIDIA/TRTorch/commit/fa08311))
* **//py:** Fix trtorch.Device alternate constructor options ([ac26841](https://github.com/NVIDIA/TRTorch/commit/ac26841))
* Update notebooks with new library name Torch-TensorRT ([8274fd9](https://github.com/NVIDIA/TRTorch/commit/8274fd9))
* **aten::conv1d:** Update namespace, fix typo in dest IR for conv1d ([d53f136](https://github.com/NVIDIA/TRTorch/commit/d53f136))
* **eval:** Rollback 1.11a0 change + namespace issues ([ba743f5](https://github.com/NVIDIA/TRTorch/commit/ba743f5))
* Use scripting instead of tracing for module fallback tests ([32e8b53](https://github.com/NVIDIA/TRTorch/commit/32e8b53))
* Workspace defaults for other apis and centralize cuda api use ([930321e](https://github.com/NVIDIA/TRTorch/commit/930321e))

### Features

* Add functionality for tests to use precompiled libraries ([b5c324a](https://github.com/NVIDIA/TRTorch/commit/b5c324a))
* Add QAT patch which modifies scale factor dtype to INT32 ([4a10673](https://github.com/NVIDIA/TRTorch/commit/4a10673))
* Add TF32 override flag in bazelrc for CI-Testing ([7a0c9a5](https://github.com/NVIDIA/TRTorch/commit/7a0c9a5))
* Add VGG QAT sample notebook which demonstrates end-end workflow for QAT models ([8bf6dd6](https://github.com/NVIDIA/TRTorch/commit/8bf6dd6))
* Augment python package to include bin, lib, include directories ([ddc0685](https://github.com/NVIDIA/TRTorch/commit/ddc0685))
* handle scalar type of size [] in shape_analysis ([fca53ce](https://github.com/NVIDIA/TRTorch/commit/fca53ce))
* support aten::__and__.bool evaluator ([6d73e43](https://github.com/NVIDIA/TRTorch/commit/6d73e43))
* support aten::conv1d and aten::conv_transpose1d ([c8dc6e9](https://github.com/NVIDIA/TRTorch/commit/c8dc6e9))
* support aten::eq.str evaluator ([5643972](https://github.com/NVIDIA/TRTorch/commit/5643972))
* support setting input types of subgraph in fallback, handle Tensor type in evaluated_value_map branch in MarkOutputs ([4778b2b](https://github.com/NVIDIA/TRTorch/commit/4778b2b))
* support truncate_long_and_double in fallback subgraph input type ([0bc3c05](https://github.com/NVIDIA/TRTorch/commit/0bc3c05))
* Update documentation with new library name Torch-TensorRT ([e5f96d9](https://github.com/NVIDIA/TRTorch/commit/e5f96d9))
* Updating the pre_built to prebuilt ([51412c7](https://github.com/NVIDIA/TRTorch/commit/51412c7))
* **//:libtrtorch:** Ship a WORKSPACE file and BUILD file with the ([7ac6f1c](https://github.com/NVIDIA/TRTorch/commit/7ac6f1c))
* **//core/partitioning:** Improved logging and code org for the ([8927e77](https://github.com/NVIDIA/TRTorch/commit/8927e77))
* **//cpp:** Adding example tensors as a way to set input spec ([70a7bb3](https://github.com/NVIDIA/TRTorch/commit/70a7bb3))
* **//py:** Add the git revision to non release builds ([4a0a918](https://github.com/NVIDIA/TRTorch/commit/4a0a918))
* **//py:** Allow example tensors from torch to set shape ([01d525d](https://github.com/NVIDIA/TRTorch/commit/01d525d))

* feat!: Changing the default behavior for selecting the input type ([a234335](https://github.com/NVIDIA/TRTorch/commit/a234335))
* refactor!: Removing deprecated InputRange, op_precision and input_shapes ([621bc67](https://github.com/NVIDIA/TRTorch/commit/621bc67))
* feat(//py)!: Porting forward the API to use kwargs ([17e0e8a](https://github.com/NVIDIA/TRTorch/commit/17e0e8a))
* refactor(//py)!: Kwargs updates and support for shifting internal apis ([2a0d1c8](https://github.com/NVIDIA/TRTorch/commit/2a0d1c8))
* refactor!(//cpp): Inlining partial compilation settings since the ([19ecc64](https://github.com/NVIDIA/TRTorch/commit/19ecc64))
* refactor!: Update default workspace size based on platforms. ([391a4c0](https://github.com/NVIDIA/TRTorch/commit/391a4c0))
* feat!: Turning on partial compilation by default ([52e2f05](https://github.com/NVIDIA/TRTorch/commit/52e2f05))
* refactor!: API level rename ([483ef59](https://github.com/NVIDIA/TRTorch/commit/483ef59))
* refactor!: Changing the C++ api to be snake case ([f34e230](https://github.com/NVIDIA/TRTorch/commit/f34e230))
* refactor!: Update Pytorch version to 1.10 ([cc7d0b7](https://github.com/NVIDIA/TRTorch/commit/cc7d0b7))
* refactor!: Updating bazel version for py build container ([06533fe](https://github.com/NVIDIA/TRTorch/commit/06533fe))

### BREAKING CHANGES

* This removes the InputRange class and the op_precision and input shape fields, which were deprecated in TRTorch v0.4.0.
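A minimal migration sketch (not part of this commit; it assumes the released v1.0.0 Python API names `torch_tensorrt.Input` and `enabled_precisions`, and uses a placeholder module):

```py
import torch
import torch_tensorrt

model = torch.jit.script(torch.nn.ReLU().eval().cuda())  # placeholder TorchScript module

# The removed InputRange / op_precision / input_shapes settings are expressed
# with Input objects and a precision set in the new API.
trt_mod = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input(
        min_shape=(1, 3, 224, 224),
        opt_shape=(8, 3, 224, 224),
        max_shape=(16, 3, 224, 224),
    )],
    enabled_precisions={torch.half},  # replaces op_precision
)
```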

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>

* This change updates the bazel version used to build Torch-TensorRT to 4.2.1.

This was done since the only version of bazel available in our build container for the Python APIs is 4.2.1.

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* This changes the API for compile settings from a dictionary of settings to a set of kwargs for the various compilation functions. This will break existing code. However, there is simple guidance for porting your code forward:

Given a dict of valid TRTorch CompileSpec settings

```py
spec = {
    "inputs": ...
    ...
}
```

You can use this same dict with the new APIs by changing your code from:

```py
trtorch.compile(mod, spec)
```

to:

```py
trtorch.compile(mod, **spec)
```

which will unpack the dictionary as keyword arguments to the function.
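For instance, a minimal sketch of the same pattern (illustrative only; it assumes the released v1.0.0 package name torch_tensorrt, while the snippets above predate the rename and use trtorch):

```py
import torch
import torch_tensorrt

model = torch.jit.script(torch.nn.ReLU().eval().cuda())  # placeholder TorchScript module

spec = {
    "inputs": [torch_tensorrt.Input((1, 3, 224, 224))],
    "enabled_precisions": {torch.half},
}

trt_mod = torch_tensorrt.compile(model, **spec)  # each key becomes a keyword argument
```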

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* This commit changes the APIs from a dictionary of arguments to a set of kwargs. You can port forward using

```py
trtorch.compile(mod, **spec)
```

Also, in preparation for partial compilation to be enabled by default, settings related to torch fallback have been moved to the top level.

Instead of

```py
"torch_fallback": {
    "enabled": True,
    "min_block_size": 3,
    "forced_fallback_ops": ["aten::add"],
    "forced_fallback_mods": ["MySubModule"]
}
```

there are now new settings:

```py
require_full_compilation=False,
min_block_size=3,
torch_executed_ops=["aten::add"],
torch_executed_modules=["MySubModule"]
```
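A hedged sketch of a complete call using these top-level settings (not from this commit; assumes the released v1.0.0 torch_tensorrt package and a placeholder module):

```py
import torch
import torch_tensorrt

model = torch.jit.script(torch.nn.ReLU().eval().cuda())  # placeholder TorchScript module

trt_mod = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    require_full_compilation=False,
    min_block_size=3,
    torch_executed_ops=["aten::add"],
    torch_executed_modules=["MySubModule"],  # "MySubModule" is illustrative
)
```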

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* This commit changes the API for automatic fallback to inline settings regarding partial compilation, in preparation for it to be turned on by default.

Now, instead of a `torch_fallback` field with its associated struct, there are four new fields in the compile spec:

```c++
bool require_full_compilation = true;
uint64_t min_block_size = 3;
std::vector<std::string> torch_executed_ops = {};
std::vector<std::string> torch_executed_modules = {};
```

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* This commit sets the default workspace size to 1GB for GPU platforms and 256MB for Jetson Nano/TX1 platforms whose compute capability is < 6.
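A minimal sketch of overriding the new default explicitly (assumption: the v1.0.0 Python API exposes a workspace_size setting; values and module are placeholders):

```py
import torch
import torch_tensorrt

model = torch.jit.script(torch.nn.ReLU().eval().cuda())  # placeholder TorchScript module

trt_mod = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    workspace_size=1 << 30,  # 1 GB, instead of the platform-dependent default
)
```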

Signed-off-by: Dheeraj Peri <[email protected]>

Signed-off-by: Dheeraj Peri <[email protected]>

Signed-off-by: Dheeraj Peri <[email protected]>

Signed-off-by: Dheeraj Peri <[email protected]>

Signed-off-by: Dheeraj Peri <[email protected]>
* This commit turns on partial compilation by default. Modules with unsupported operations will now be run partially in PyTorch and partially in TensorRT.

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* This commit renames the namespaces of all TRTorch/Torch-TensorRT APIs. TorchScript-specific functions are now segregated into their own torch_tensorrt::torchscript / torch_tensorrt.ts namespaces. Generic utils will remain in the torch_tensorrt namespace. Guidance on how to port forward will follow in the next commits.
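As a rough sketch of what the Python side of this looks like (assuming the released v1.0.0 package layout; not taken from this commit):

```py
import torch
import torch_tensorrt

model = torch.jit.script(torch.nn.ReLU().eval().cuda())  # placeholder TorchScript module

# TorchScript-specific entry point lives under the .ts namespace ...
trt_mod = torch_tensorrt.ts.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
)
# ... while generic utilities such as torch_tensorrt.Input and torch_tensorrt.Device
# remain in the top-level torch_tensorrt namespace.
```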
* This changes the C++ ::ts APIs to snake case, and CompileModules becomes simply compile.

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* This commit updates the pytorch version to 1.10. To use the python API of torch_tensorrt, please upgrade your local pytorch to 1.10 to avoid ABI incompatibility errors. WORKSPACE and requirements files are updated accordingly.

Signed-off-by: Dheeraj Peri <[email protected]>

Signed-off-by: Dheeraj Peri <[email protected]>
* This commit changes the default behavior of the compiler: if the user does not explicitly specify an input data type, then instead of using the enabled precision, the compiler will inspect the provided model and infer an input data type that would not cause an error if the model were run in Torch. In practice this means:

- If the weights are in FP32 for the first tensor calculation, then the default input type is FP32
- If the weights are in FP16 for the first tensor calculation, then the default input type is FP16
- etc.

If the data type cannot be determined, the compiler will default to FP32.

This calculation is done per input tensor, so if one input is inferred to use FP32 and another INT32, then the expected input types will be (FP32, INT32).

As before, if the user defines the data type explicitly or provides an example tensor, the data type specified there will be respected.
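For example, a minimal sketch of pinning an input type explicitly so that the new inference logic is not relied on (assumes the v1.0.0 Input API; shapes and module are placeholders):

```py
import torch
import torch_tensorrt

model = torch.jit.script(torch.nn.ReLU().eval().half().cuda())  # placeholder FP16 TorchScript module

trt_mod = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.half)],  # explicit dtype is respected
    enabled_precisions={torch.half},
)
```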

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>

README.md

Lines changed: 2 additions & 2 deletions
@@ -97,7 +97,7 @@ torch.jit.save(trt_ts_module, "trt_torchscript_module.ts") # save the TRT embedd
 | Linux aarch64 / DLA | **Native Compilation Supported on JetPack-4.4+** |
 | Windows / GPU | **Unofficial Support** |
 | Linux ppc64le / GPU | - |
-| NGC Containers | **Including in PyTorch NGC Containers 21.11+** |
+| NGC Containers | **Included in PyTorch NGC Containers 21.11+** |

 > Torch-TensorRT will be included in NVIDIA NGC containers (https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) starting in 21.11.

@@ -216,7 +216,7 @@ bazel build //:libtorchtrt --compilation_mode=dbg
 ```

 ### Native compilation on NVIDIA Jetson AGX
-We performed end to end testing on Jetson platform using Jetpack SDK 4.6.
+We performed end to end testing on Jetson platform using Jetpack SDK 4.6.

 ``` shell
 bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6

docs/_cpp_api/class_view_hierarchy.html

Lines changed: 3 additions & 0 deletions
@@ -157,6 +157,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/classtorch__tensorrt_1_1DataType.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/classtorch__tensorrt_1_1Device_1_1DeviceType.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/classtorch__tensorrt_1_1TensorFormat.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/classtorch__tensorrt_1_1ptq_1_1Int8CacheCalibrator.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/classtorch__tensorrt_1_1ptq_1_1Int8Calibrator.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/define_macros_8h_1a18d295a837ac71add5578860b55e5502.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/define_macros_8h_1a282fd3c0b1c3a215148ae372070e1268.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/define_macros_8h_1a31398a6d4d27e28817afb0f0139e909e.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/define_macros_8h_1a35703561b26b1a9d2738ad7d58b27827.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/define_macros_8h_1abd1465eb38256d3f22cc1426b23d516b.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/define_macros_8h_1abe87b341f562fd1cf40b7672e4d759da.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/define_macros_8h_1ad19939408f7be171a74a89928b36eb59.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/define_macros_8h_1adad592a7b1b7eed529cdf6acd584c883.html

Lines changed: 3 additions & 0 deletions
@@ -159,6 +159,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/dir_cpp.html

Lines changed: 3 additions & 0 deletions
@@ -157,6 +157,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>

docs/_cpp_api/dir_cpp_include.html

Lines changed: 3 additions & 0 deletions
@@ -157,6 +157,9 @@
 <a href="https://nvidia.github.io/Torch-TensorRT/" title="master">
 master
 </a>
+<a href="https://nvidia.github.io/Torch-TensorRT/v1.0.0/" title="v1.0.0">
+v1.0.0
+</a>
 <a href="https://nvidia.github.io/Torch-TensorRT/v0.4.1/" title="v0.4.1">
 v0.4.1
 </a>
