
English | 简体中文

Video Analysis Example

This example uses the YOLO11n model to demonstrate how to integrate the TensorRT-YOLO Deploy module into VideoPipe for video analysis.

Downloads: yolo11n.pt | demo0.mp4 | demo1.mp4

Please download the required yolo11n.pt model file and the test videos from the links provided, and save them all to the workspace folder.
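Assuming a fresh checkout of the example (folder name taken from the text above), the workspace folder can be prepared first:

```shell
# Create the workspace folder that will hold the model weights,
# test videos, exported ONNX/engine files, and the built demo
mkdir -p workspace
ls workspace  # yolo11n.pt, demo0.mp4 and demo1.mp4 are placed here
```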

Model Export

Use the following command to export the model to ONNX format with the EfficientNMS plugin. For detailed trtyolo CLI export options, please read Model Export:

trtyolo export -w workspace/yolo11n.pt -v yolo11 -o workspace -b 2 -s

After running the above command, a yolo11n.onnx file with a batch_size of 2 will be generated in the workspace folder. Next, use the trtexec tool to convert the ONNX file to a TensorRT engine (fp16):

trtexec --onnx=workspace/yolo11n.onnx --saveEngine=workspace/yolo11n.engine --fp16
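Both commands above assume the CLI tools are already installed and on PATH; a quick sanity check (a sketch, not part of the original instructions) is:

```shell
# Verify the export and engine-build tools are available before starting
# (trtyolo is provided by the TensorRT-YOLO Python package; trtexec ships with TensorRT)
for tool in trtyolo trtexec; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool (install it or add it to PATH before continuing)"
  fi
done
```

Note that a TensorRT engine is tied to the GPU and TensorRT version it was built with, so the trtexec step should be run on the machine that will perform inference.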

Project Execution

  1. Ensure that TensorRT-YOLO has been compiled according to the TensorRT-YOLO Compilation guide.

  2. Ensure that VideoPipe has been built following the instructions for VideoPipe compilation and debugging.

  3. Compile the project into an executable:

    # Compile using xmake
    xmake f -P . --tensorrt="/path/to/your/TensorRT" --deploy="/path/to/your/TensorRT-YOLO" --videopipe="/path/to/your/VideoPipe"
    xmake -P . -r
    
    # Compile using cmake
    mkdir -p build && cd build
    cmake -DTENSORRT_PATH="/path/to/your/TensorRT" -DDEPLOY_PATH="/path/to/your/TensorRT-YOLO" -DVIDEOPIPE_PATH="/path/to/your/VideoPipe" .. 
    cmake --build . -j8 --config Release

    After compilation, the executable file will be generated in the workspace folder of the project root directory.

  4. Run the following commands to start inference:

    cd workspace
    ./PipeDemo