Add support for JetPack 6.2 build #3453
base: main
@@ -1,18 +1,18 @@
-.. _Torch_TensorRT_in_JetPack_6.1
+.. _Torch_TensorRT_in_JetPack_6.2

 Overview
 ##################

-JetPack 6.1
+JetPack 6.2
 ---------------------
-Nvida JetPack 6.1 is the latest production release ofJetPack 6.
+NVIDIA JetPack 6.2 is the latest production release of JetPack 6.
 With this release it incorporates:
 CUDA 12.6
 TensorRT 10.3
 cuDNN 9.3
 DLFW 24.09

-You can find more details for the JetPack 6.1:
+You can find more details for JetPack 6.2:

 * https://docs.nvidia.com/jetson/jetpack/release-notes/index.html
 * https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html
@@ -22,7 +22,7 @@ Prerequisites
 ~~~~~~~~~~~~~~

-Ensure your jetson developer kit has been flashed with the latest JetPack 6.1. You can find more details on how to flash Jetson board via sdk-manager:
+Ensure your Jetson developer kit has been flashed with the latest JetPack 6.2. You can find more details on how to flash the Jetson board via sdk-manager:

 * https://developer.nvidia.com/sdk-manager
@@ -57,10 +57,10 @@ Ensure libcusparseLt.so exists at /usr/local/cuda/lib64/:
 .. code-block:: sh

     # if not exist, download and copy to the directory
-    wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
-    tar xf libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
-    sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/include/* /usr/local/cuda/include/
-    sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/lib/* /usr/local/cuda/lib64/
+    wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-aarch64/libcusparse_lt-linux-aarch64-0.7.1.0-archive.tar.xz
+    tar xf libcusparse_lt-linux-aarch64-0.7.1.0-archive.tar.xz
+    sudo cp -a libcusparse_lt-linux-aarch64-0.7.1.0-archive/include/* /usr/local/cuda/include/
+    sudo cp -a libcusparse_lt-linux-aarch64-0.7.1.0-archive/lib/* /usr/local/cuda/lib64/

 Build torch_tensorrt
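The comment in the diff ("if not exist") leaves the existence check implicit. A minimal sketch of that check, assuming the standard CUDA library path; `cusparselt_present` is a hypothetical helper for illustration, not part of the docs or the toolchain:

```python
from pathlib import Path

def cusparselt_present(libdir):
    """Return True if libcusparseLt.so is already present in libdir.

    Hypothetical helper: on a Jetson you would pass /usr/local/cuda/lib64,
    and run the download/copy commands above only when this returns False.
    """
    return (Path(libdir) / "libcusparseLt.so").exists()
```
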
@@ -71,7 +71,7 @@ Install bazel

 .. code-block:: sh

-    wget -v https://github.com/bazelbuild/bazelisk/releases/download/v1.20.0/bazelisk-linux-arm64
+    wget -v https://github.com/bazelbuild/bazelisk/releases/download/v1.25.0/bazelisk-linux-arm64
     sudo mv bazelisk-linux-arm64 /usr/bin/bazel
     chmod +x /usr/bin/bazel
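Before moving the downloaded binary into /usr/bin, a quick sanity check can catch a truncated or failed download. A sketch under that assumption; `looks_executable` is a hypothetical helper, not something the docs define:

```python
import os

def looks_executable(path):
    """Return True if path is a non-empty regular file with the exec bit set.

    Hypothetical helper: run on the downloaded bazelisk-linux-arm64 before
    `sudo mv` so a truncated download is caught early.
    """
    return (
        os.path.isfile(path)
        and os.path.getsize(path) > 0
        and os.access(path, os.X_OK)
    )
```
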
@@ -86,8 +86,8 @@ Install pip and required python packages:

 .. code-block:: sh

-    # install pytorch from nvidia jetson distribution: https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch
-    python -m pip install torch https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
+    # install pytorch from nvidia jetson distribution: https://pypi.jetson-ai-lab.dev/jp6/cu126/
+    pip3 install torch torchvision torchaudio --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126/

 .. code-block:: sh

Review comment: Looking at the index, there are jp62 builds for PyTorch 2.7.0 and CUDA 12.8? What are the rules behind these builds / the Jetson compute stack, if you know? My understanding was that jp62 was CUDA 12.6 / TensorRT 10.3.

Reply: If you look inside the index you can see the PyTorch 2.6.0 stack. PyTorch index for Ubuntu 24.04: https://pypi.jetson-ai-lab.dev/jp6/cu128/+simple/torch/
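After the install step above, a quick check can confirm the packages pulled from the Jetson index are importable. A minimal sketch; the helper name is hypothetical, and on an actual device you would additionally verify `torch.cuda.is_available()`:

```python
import importlib.util

def jetson_packages_present(names=("torch", "torchvision", "torchaudio")):
    """Map each expected package name to whether it can be imported.

    Hypothetical helper: on a Jetson flashed with JetPack 6.2 and the pip
    index above, all three default packages should map to True.
    """
    return {name: importlib.util.find_spec(name) is not None for name in names}
```
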
@@ -101,9 +101,9 @@ Install pip and required python packages:
 Build and Install torch_tensorrt wheel file

-Since torch_tensorrt version has dependencies on torch version. torch version supported by JetPack6.1 is from DLFW 24.08/24.09(torch 2.5.0).
+Since the torch_tensorrt version depends on the torch version, the torch version supported by JetPack 6.2 is from DLFW 24.08/24.09 (torch 2.6.0).

-Please make sure to build torch_tensorrt wheel file from source release/2.5 branch
+Please make sure to build the torch_tensorrt wheel file from the source release/2.6 branch
 (TODO: lanl to update the branch name once release/ngc branch is available)

 .. code-block:: sh

Review comment: @apbose can you update this with appropriate numbers? also replace

Review comment: We might need to split the toolchain updates from the docs updates, since the toolchain needs to land in the release/2.6 branch.
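The torch-to-torch_tensorrt pairing described above amounts to a lookup from the installed torch minor version to a release branch. A sketch of that rule; the mapping below is an assumption for illustration, covering only the two branches named in this diff, not an authoritative compatibility table:

```python
# Assumed pairing from the text: torch 2.5.x <-> release/2.5, torch 2.6.x <-> release/2.6
TORCH_TO_BRANCH = {"2.5": "release/2.5", "2.6": "release/2.6"}

def torch_tensorrt_branch(torch_version: str) -> str:
    """Return the torch_tensorrt release branch matching a torch version string."""
    minor = ".".join(torch_version.split(".")[:2])  # "2.6.0" -> "2.6"
    try:
        return TORCH_TO_BRANCH[minor]
    except KeyError:
        raise ValueError(f"no known torch_tensorrt branch for torch {torch_version}")
```

Local version suffixes such as the NVIDIA wheel tags (e.g. `2.5.0a0+872d972e41`) reduce to the same minor version, so they resolve to the same branch.
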
@@ -1,4 +1,5 @@
-setuptools==70.2.0
-numpy<2.0.0
+--index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
+setuptools>=70.2.0
+numpy
 packaging
 pyyaml

Review comment: I'd like to add this index to the pyproject.toml to support uv as a build tool, which I think has the best UX, but there are likely some important details we need to think about. cc: @lanluo-nvidia for later
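pip treats a requirements-file line that begins with `-`/`--` as a command-line option rather than a requirement, which is why the added `--index-url` line redirects every package below it to the Jetson index. A small sketch of that split; `split_requirements` is an illustrative helper, not pip's actual parser:

```python
def split_requirements(lines):
    """Separate pip options (lines starting with '-') from requirement specs."""
    options, requirements = [], []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        (options if line.startswith("-") else requirements).append(line)
    return options, requirements
```
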
@@ -1,9 +1,12 @@
-expecttest==0.1.6
-networkx==2.8.8
-numpy<2.0.0
+--index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
+expecttest>=0.1.6
+networkx>=2.8.8
+numpy
 parameterized>=0.2.0
 pytest>=8.2.1
 pytest-xdist>=3.6.1
 pyyaml
 transformers
-# TODO: currently timm torchvision nvidia-modelopt does not have distributions for jetson
+timm
+torchvision
+# TODO: currently nvidia-modelopt does not have distributions for jetson