
Commit 3f57189

Merge pull request #61 from NVIDIA/python_api
docs: Update versions and binary sources
2 parents: 5f84977 + 3687745

13 files changed: +148 -16 lines

CHANGELOG.md

+76
@@ -0,0 +1,76 @@
# Changelog


## 0.0.2 (2020-05-17)


### Bug Fixes

* **//core/conversion:** Check for calibrator before setting int8 mode ([3afd209](https://github.com/NVIDIA/TRTorch/commit/3afd209))
* **//core/conversion/conversionctx:** Check both tensor and eval maps ([2d65ece](https://github.com/NVIDIA/TRTorch/commit/2d65ece))
* **//core/conversion/converters/impl/element_wise:** Fix broadcast ([a9f33e4](https://github.com/NVIDIA/TRTorch/commit/a9f33e4))
* **//cpp:** Remove deprecated script namespace ([d70760f](https://github.com/NVIDIA/TRTorch/commit/d70760f))
* **//cpp/api:** Better initial condition for the dataloader iterator to ([8d22bdd](https://github.com/NVIDIA/TRTorch/commit/8d22bdd))
* **//cpp/api:** Remove unnecessary destructor in ptq class ([fc70267](https://github.com/NVIDIA/TRTorch/commit/fc70267))
* **//cpp/api:** set a default for calibrator ([825be69](https://github.com/NVIDIA/TRTorch/commit/825be69))
* **//cpp/ptq:** remove some logging from ptq app ([b989c7f](https://github.com/NVIDIA/TRTorch/commit/b989c7f))
* Address issues in PR ([cd24f26](https://github.com/NVIDIA/TRTorch/commit/cd24f26))
* **//cpp/ptq:** Tracing model in eval mode wrecks accuracy in Libtorch ([54a24b3](https://github.com/NVIDIA/TRTorch/commit/54a24b3))
* **//docs:** add nojekyll file ([2a02cd5](https://github.com/NVIDIA/TRTorch/commit/2a02cd5))
* **//docs:** fix version links ([11555f7](https://github.com/NVIDIA/TRTorch/commit/11555f7))
* **//py:** Build system issues ([c1de126](https://github.com/NVIDIA/TRTorch/commit/c1de126))
* **//py:** Ignore generated version file ([9e37dc1](https://github.com/NVIDIA/TRTorch/commit/9e37dc1))
* bypass jekyll, also add PR template ([a41c400](https://github.com/NVIDIA/TRTorch/commit/a41c400))


### Features

* **//core/conversion/conversionctx:** Make op precision available at ([78a1c61](https://github.com/NVIDIA/TRTorch/commit/78a1c61))
* **//core/conversion/converters/impl/shuffle:** Implement aten::resize ([353f2d2](https://github.com/NVIDIA/TRTorch/commit/353f2d2))
* **//core/execution:** Type checking for the executor, now is the ([2dd1ba3](https://github.com/NVIDIA/TRTorch/commit/2dd1ba3))
* **//core/lowering:** New freeze model pass and new exception ([4acc3fd](https://github.com/NVIDIA/TRTorch/commit/4acc3fd))
* **//core/quantization:** skeleton of INT8 PTQ calibrator ([dd443a6](https://github.com/NVIDIA/TRTorch/commit/dd443a6))
* **//core/util:** New logging level for Graph Dumping ([90c44b9](https://github.com/NVIDIA/TRTorch/commit/90c44b9))
* **//cpp/api:** Adding max batch size setting ([1b25542](https://github.com/NVIDIA/TRTorch/commit/1b25542))
* **//cpp/api:** Functional Dataloader based PTQ ([f022dfe](https://github.com/NVIDIA/TRTorch/commit/f022dfe))
* **//cpp/api:** Remove the extra includes in the API header ([2f86f84](https://github.com/NVIDIA/TRTorch/commit/2f86f84))
* **//cpp/ptq:** Add a feature to the dataset to use less than the full ([5f36f47](https://github.com/NVIDIA/TRTorch/commit/5f36f47))
* **//cpp/ptq/training:** Training recipe for VGG16 Classifier on ([676bf56](https://github.com/NVIDIA/TRTorch/commit/676bf56))
* **//lowering:** centralize lowering and try to use PyTorch Conv2DBN folding ([fad4a10](https://github.com/NVIDIA/TRTorch/commit/fad4a10))
* **//py:** API now produces valid engines that are consumable by ([72bc1f7](https://github.com/NVIDIA/TRTorch/commit/72bc1f7))
* **//py:** Initial introduction of the Python API ([7088245](https://github.com/NVIDIA/TRTorch/commit/7088245))
* **//py:** Manylinux container and build system for multiple python ([639c2a3](https://github.com/NVIDIA/TRTorch/commit/639c2a3))
* **//py:** Working portable package ([482ef2c](https://github.com/NVIDIA/TRTorch/commit/482ef2c))
* **//tests:** New optional accuracy tests to check INT8 and FP16 ([df74136](https://github.com/NVIDIA/TRTorch/commit/df74136))
* **//cpp/api:** Working INT8 Calibrator, also resolves [#41](https://github.com/NVIDIA/TRTorch/issues/41) ([5c0d737](https://github.com/NVIDIA/TRTorch/commit/5c0d737))
* **aten::flatten:** Adds a converter for aten flatten since MM is the ([d945eb9](https://github.com/NVIDIA/TRTorch/commit/d945eb9))
* **aten::matmul|aten::addmm:** Adds support for aten::matmul and ([c5b6202](https://github.com/NVIDIA/TRTorch/commit/c5b6202))
* Support non cxx11-abi builds for use in python api ([83e0ed6](https://github.com/NVIDIA/TRTorch/commit/83e0ed6))
* **aten::size [static]:** Implement an aten::size converter for static input size ([0548540](https://github.com/NVIDIA/TRTorch/commit/0548540))
* **conv2d_to_convolution:** A pass to map aten::conv2d to _convolution ([2c5c0d5](https://github.com/NVIDIA/TRTorch/commit/2c5c0d5))


## 0.0.1 (2020-03-31)


### Bug Fixes

* **//core/conversion/converters/impl/linear:** In inserting flatten for ([377ad67](https://github.com/NVIDIA/TRTorch/commit/377ad67))
* **//core/conversion/converters/impl/reduce:** Adds support for multiple ([7622a97](https://github.com/NVIDIA/TRTorch/commit/7622a97))
* **//cpp/api:** The actual api was getting stripped, using alwayslink to ([cf4a8aa](https://github.com/NVIDIA/TRTorch/commit/cf4a8aa))
* **//tests:** Forgot to change this path to modules ([89bff0f](https://github.com/NVIDIA/TRTorch/commit/89bff0f))
* **//tests/modules:** Remove an old script ([8be79e1](https://github.com/NVIDIA/TRTorch/commit/8be79e1))
* **//tests/modules:** Remove lenet test and rename generated ([4b58d3b](https://github.com/NVIDIA/TRTorch/commit/4b58d3b))


### Features

* **//core/conversion/conversionctx:** Move inline function to associate ([6ab9814](https://github.com/NVIDIA/TRTorch/commit/6ab9814))
* **//core/conversion/converter/Arg:** Add typechecking to the unwrap ([73bfd4c](https://github.com/NVIDIA/TRTorch/commit/73bfd4c))
* **//core/conversion/converters/impl:** Non dimensional reduce ([ccab7b9](https://github.com/NVIDIA/TRTorch/commit/ccab7b9))
* **//core/conversion/converters/impl/reduce:** adds the rest of TRT's ([956b0c5](https://github.com/NVIDIA/TRTorch/commit/956b0c5))
* **//core/conversion/converters/impl/reduce:** Mean reduce converter ([259aa4c](https://github.com/NVIDIA/TRTorch/commit/259aa4c))
* **CheckMethodOperatorSupport:** A new API which will check the graph ([28ee445](https://github.com/NVIDIA/TRTorch/commit/28ee445)), closes [#26](https://github.com/NVIDIA/TRTorch/issues/26)
* **hardtanh:** Adds support for the hard tanh operator ([391af52](https://github.com/NVIDIA/TRTorch/commit/391af52))

README.md

+22-1
@@ -25,6 +25,27 @@ auto results = trt_mod.forward({in_tensor});
...
```

```py
import trtorch
import torch  # needed below for torch.half

...

compile_settings = {
    "input_shapes": [
        {
            "min": [1, 3, 224, 224],
            "opt": [1, 3, 512, 512],
            "max": [1, 3, 1024, 1024]
        }, # For static size [1, 3, 224, 224]
    ],
    "op_precision": torch.half # Run with FP16
}

trt_ts_module = trtorch.compile(torch_script_module, compile_settings)

input_data = input_data.half()
result = trt_ts_module(input_data)
```

> Notes on running in lower precisions:
> - Set precision with extra_info.op_precision
> - The module should be left in FP32 before compilation (FP16 can support half tensor models)
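The notes above point at the C++ side of the same setting (`extra_info.op_precision`). Below is a minimal C++ sketch of the FP16 flow they describe, assuming an `ExtraInfo` built from the network's input shapes; the constructor call and the exact type accepted by `op_precision` are assumptions, not taken from this commit.

```c++
#include <torch/script.h>
#include "trtorch/trtorch.h"

// Assumed helper: compile an already-loaded TorchScript module for FP16 execution.
torch::jit::Module compile_fp16(torch::jit::Module& mod) {
  // Assumption: ExtraInfo can be built from a list of fixed input shapes.
  auto info = trtorch::ExtraInfo({{1, 3, 224, 224}});

  // Select FP16 execution; the module itself stays in FP32 before compilation,
  // matching the note above.
  info.op_precision = torch::kHalf;

  return trtorch::CompileGraph(mod, info);
}

// Inputs then need to be converted to half precision before inference:
//   auto in  = torch::randn({1, 3, 224, 224}, torch::kCUDA).to(torch::kHalf);
//   auto out = trt_mod.forward({in});
```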
@@ -47,7 +68,7 @@ auto results = trt_mod.forward({in_tensor});
- cuDNN 7.6.5
- TensorRT 7.0.0

-## Prebuilt Binaries
+## Prebuilt Binaries and Wheel files

Releases: https://github.com/NVIDIA/TRTorch/releases

docs/_cpp_api/program_listing_file_cpp_api_include_trtorch_trtorch.h.html

+3-3
@@ -635,11 +635,11 @@ <h1 id="cpp-api-program-listing-file-cpp-api-include-trtorch-trtorch-h--page-roo
 <span class="n">TRTORCH_API</span> <span class="kt">void</span> <span class="nf">dump_build_info</span><span class="p">();</span>

-<span class="n">TRTORCH_API</span> <span class="kt">bool</span> <span class="nf">CheckMethodOperatorSupport</span><span class="p">(</span><span class="k">const</span> <span class="n">torch</span><span class="o">::</span><span class="n">jit</span><span class="o">::</span><span class="n">Module</span><span class="o">&amp;</span> <span class="n">module</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="n">method_name</span><span class="p">);</span>
+<span class="n">TRTORCH_API</span> <span class="kt">bool</span> <span class="nf">CheckMethodOperatorSupport</span><span class="p">(</span><span class="k">const</span> <span class="n">torch</span><span class="o">::</span><span class="n">jit</span><span class="o">::</span><span class="n">Module</span><span class="o">&amp;</span> <span class="k">module</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="n">method_name</span><span class="p">);</span>

-<span class="n">TRTORCH_API</span> <span class="n">torch</span><span class="o">::</span><span class="n">jit</span><span class="o">::</span><span class="n">Module</span> <span class="n">CompileGraph</span><span class="p">(</span><span class="k">const</span> <span class="n">torch</span><span class="o">::</span><span class="n">jit</span><span class="o">::</span><span class="n">Module</span><span class="o">&amp;</span> <span class="n">module</span><span class="p">,</span> <span class="n">ExtraInfo</span> <span class="n">info</span><span class="p">);</span>
+<span class="n">TRTORCH_API</span> <span class="n">torch</span><span class="o">::</span><span class="n">jit</span><span class="o">::</span><span class="n">Module</span> <span class="n">CompileGraph</span><span class="p">(</span><span class="k">const</span> <span class="n">torch</span><span class="o">::</span><span class="n">jit</span><span class="o">::</span><span class="n">Module</span><span class="o">&amp;</span> <span class="k">module</span><span class="p">,</span> <span class="n">ExtraInfo</span> <span class="n">info</span><span class="p">);</span>

-<span class="n">TRTORCH_API</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="n">ConvertGraphToTRTEngine</span><span class="p">(</span><span class="k">const</span> <span class="n">torch</span><span class="o">::</span><span class="n">jit</span><span class="o">::</span><span class="n">Module</span><span class="o">&amp;</span> <span class="n">module</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="n">method_name</span><span class="p">,</span> <span class="n">ExtraInfo</span> <span class="n">info</span><span class="p">);</span>
+<span class="n">TRTORCH_API</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="n">ConvertGraphToTRTEngine</span><span class="p">(</span><span class="k">const</span> <span class="n">torch</span><span class="o">::</span><span class="n">jit</span><span class="o">::</span><span class="n">Module</span><span class="o">&amp;</span> <span class="k">module</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="n">method_name</span><span class="p">,</span> <span class="n">ExtraInfo</span> <span class="n">info</span><span class="p">);</span>

 <span class="k">namespace</span> <span class="n">ptq</span> <span class="p">{</span>
 <span class="k">template</span><span class="o">&lt;</span><span class="k">typename</span> <span class="n">Algorithm</span> <span class="o">=</span> <span class="n">nvinfer1</span><span class="o">::</span><span class="n">IInt8EntropyCalibrator2</span><span class="p">,</span> <span class="k">typename</span> <span class="n">DataLoader</span><span class="o">&gt;</span>
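The only change in this generated listing is the highlighting class on the `module` parameter token (`class="n"` to `class="k"`); the declared C++ signatures are unchanged. For readability, here is a short sketch of how the three declared entry points fit together, assuming a TorchScript module is already loaded; the `ExtraInfo` construction, path, and shapes are illustrative assumptions.

```c++
#include <torch/script.h>
#include "trtorch/trtorch.h"

int main() {
  // Illustrative path; any traced or scripted module works.
  torch::jit::Module mod = torch::jit::load("model.ts");

  // 1. Check that every operator in the "forward" graph has a converter.
  if (!trtorch::CheckMethodOperatorSupport(mod, "forward")) {
    return 1;
  }

  // Assumption: ExtraInfo built from fixed input dimensions.
  trtorch::ExtraInfo info({{1, 3, 224, 224}});

  // 2. Compile the whole module to a TensorRT-backed TorchScript module ...
  auto trt_mod = trtorch::CompileGraph(mod, info);

  // 3. ... or serialize just the "forward" method as a raw TensorRT engine.
  std::string engine = trtorch::ConvertGraphToTRTEngine(mod, "forward", info);

  return 0;
}
```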

docs/_sources/tutorials/installation.rst.txt

+8-1
@@ -22,7 +22,14 @@ You can install the python package using
.. code-block:: sh

-   pip3 install trtorch
+   # Python 3.5
+   pip3 install https://github.com/NVIDIA/TRTorch/releases/download/v0.0.2/trtorch-0.0.2-cp35-cp35m-linux_x86_64.whl
+   # Python 3.6
+   pip3 install https://github.com/NVIDIA/TRTorch/releases/download/v0.0.2/trtorch-0.0.2-cp36-cp36m-linux_x86_64.whl
+   # Python 3.7
+   pip3 install https://github.com/NVIDIA/TRTorch/releases/download/v0.0.2/trtorch-0.0.2-cp37-cp37m-linux_x86_64.whl
+   # Python 3.8
+   pip3 install https://github.com/NVIDIA/TRTorch/releases/download/v0.0.2/trtorch-0.0.2-cp38-cp38-linux_x86_64.whl

.. _bin-dist:

docs/searchindex.js

+1-1
Some generated files are not rendered by default.
