diff --git a/HOWTO.md b/HOWTO.md
index e8916bc..f50f886 100644
--- a/HOWTO.md
+++ b/HOWTO.md
@@ -15,10 +15,10 @@ This guide provides resources for DeepStream application development in Python.
## Prerequisites
-* Ubuntu 18.04
-* [DeepStream SDK 6.0.1](https://developer.nvidia.com/deepstream-download) or later
-* Python 3.6+
-* [Gst Python](https://gstreamer.freedesktop.org/modules/gst-python.html) v1.14.5
+* Ubuntu 20.04
+* [DeepStream SDK 6.1](https://developer.nvidia.com/deepstream-download) or later
+* Python 3.8
+* [Gst Python](https://gstreamer.freedesktop.org/modules/gst-python.html) v1.16.2
Gst python should be already installed on Jetson.
If missing, install with the following steps:
@@ -28,7 +28,7 @@ If missing, install with the following steps:
$ export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include"
$ git clone https://github.com/GStreamer/gst-python.git
$ cd gst-python
- $ git checkout 1a8f48a
+ $ git checkout 5343aeb
$ ./autogen.sh PYTHON=python3
$ ./configure PYTHON=python3
$ make
@@ -45,11 +45,11 @@ Note: Compiling bindings now also generates a pip installable python wheel for t
## Running Sample Applications
-Clone the deepstream_python_apps repo under /sources:
+Clone the deepstream_python_apps repo under /sources:
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
This will create the following directory:
-```/sources/deepstream_python_apps```
+```/sources/deepstream_python_apps```
The Python apps are under the "apps" directory.
Go into each app directory and follow instructions in the README.
diff --git a/README.md b/README.md
index 8982bf4..ef8a28e 100644
--- a/README.md
+++ b/README.md
@@ -2,9 +2,11 @@
This repository contains Python bindings and sample applications for the [DeepStream SDK](https://developer.nvidia.com/deepstream-sdk).
-SDK version supported: 6.0.1
+SDK version supported: 6.1
-NEW: The bindings sources along with build instructions are now available under [bindings](bindings)!
+The bindings sources along with build instructions are now available under [bindings](bindings)!
+
+This release comes with an operating system upgrade (from Ubuntu 18.04 to Ubuntu 20.04) for DeepStream SDK 6.1 support. This brings an upgrade of the Python version to 3.8, and the [gst-python](3rdparty/gst-python/) version has also been upgraded to 1.16.2.
Download the latest release package complete with bindings and sample applications from the [release section](../../releases).
@@ -41,7 +43,7 @@ To run the sample applications or write your own, please consult the [HOW-TO Gui
We currently provide the following sample applications:
* [deepstream-test1](apps/deepstream-test1) -- 4-class object detection pipeline
* [deepstream-test2](apps/deepstream-test2) -- 4-class object detection, tracking and attribute classification pipeline
-* [deepstream-test3](apps/deepstream-test3) -- multi-stream pipeline performing 4-class object detection
+* UPDATE [deepstream-test3](apps/deepstream-test3) -- multi-stream pipeline performing 4-class object detection - now also supports Triton Inference Server, no-display mode, file-loop and silent mode
* [deepstream-test4](apps/deepstream-test4) -- msgbroker for sending analytics results to the cloud
* [deepstream-imagedata-multistream](apps/deepstream-imagedata-multistream) -- multi-stream pipeline with access to image buffers
* [deepstream-ssd-parser](apps/deepstream-ssd-parser) -- SSD model inference via Triton server with output parsing in Python
@@ -53,6 +55,7 @@ We currently provide the following sample applications:
* [runtime_source_add_delete](apps/runtime_source_add_delete) -- add/delete source streams at runtime
* [deepstream-imagedata-multistream-redaction](apps/deepstream-imagedata-multistream-redaction) -- multi-stream pipeline with face detection and redaction
* [deepstream-rtsp-in-rtsp-out](apps/deepstream-rtsp-in-rtsp-out) -- multi-stream pipeline with RTSP input/output
+* NEW [deepstream-preprocess-test](apps/deepstream-preprocess-test) -- multi-stream pipeline using nvdspreprocess plugin with custom ROIs
Detailed application information is provided in each application's subdirectory under [apps](apps).
diff --git a/apps/README b/apps/README
index 310cf84..27c17fe 100644
--- a/apps/README
+++ b/apps/README
@@ -19,27 +19,17 @@
DeepStream SDK Python Bindings
================================================================================
Setup pre-requisites:
-- Ubuntu 18.04
-- NVIDIA DeepStream SDK 6.0.1
-- Python 3.6
+- Ubuntu 20.04
+- NVIDIA DeepStream SDK 6.1
+- Python 3.8
- Gst-python
--------------------------------------------------------------------------------
Package Contents
--------------------------------------------------------------------------------
-The DeepStream Python package includes:
-1. Python bindings for DeepStream Metadata libraries
- These bindings are installed as part of the SDK at:
- /opt/nvidia/deepstream/deepstream/lib/pyds.so
-
-    Sample applications that import is_aarch_64 automatically
-    have this path added.
+1. DeepStream Python bindings located in the bindings dir,
+   with installation instructions in bindings/README.md
- A setup.py is also provided to install this extension into standard path.
- Currently this needs to be run manually:
- $ cd /opt/nvidia/deepstream/deepstream/lib
- $ python3 setup.py install
-
2. DeepStream test apps in Python
The following test apps are available:
deepstream-test1
@@ -47,24 +37,28 @@ The DeepStream Python package includes:
deepstream-test3
deepstream-test4
deepstream-imagedata-multistream
+ deepstream-imagedata-multistream-redaction
deepstream-ssd-parser
deepstream-test1-rtsp-out
+ deepstream-rtsp-in-rtsp-out
deepstream-test1-usbcam
deepstream-opticalflow
deepstream-segmentation
deepstream-nvdsanalytics
+ deepstream-preprocess-test
+ runtime_source_add_delete
--------------------------------------------------------------------------------
Installing Pre-requisites:
--------------------------------------------------------------------------------
-DeepStream SDK 6.0.1
+DeepStream SDK 6.1
--------------------
Download and install from https://developer.nvidia.com/deepstream-download
-Python 3.6
+Python 3.8
----------
-Should be already installed with Ubuntu 18.04
+Should be already installed with Ubuntu 20.04
Gst-python
----------
@@ -76,7 +70,7 @@ $ sudo apt install python3-gi python3-dev python3-gst-1.0 -y
--------------------------------------------------------------------------------
Running the samples
--------------------------------------------------------------------------------
-The apps are configured to work from inside the DeepStream SDK 6.0.1 installation.
+The apps are configured to work from inside the DeepStream SDK 6.1 installation.
Clone the deepstream_python_apps repo under /sources:
$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
diff --git a/apps/common/FPS.py b/apps/common/FPS.py
index 8a3e76a..3a31823 100644
--- a/apps/common/FPS.py
+++ b/apps/common/FPS.py
@@ -16,30 +16,52 @@
################################################################################
import time
+from threading import Lock
start_time=time.time()
-frame_count=0
+
+fps_mutex = Lock()
class GETFPS:
def __init__(self,stream_id):
global start_time
self.start_time=start_time
self.is_first=True
- global frame_count
- self.frame_count=frame_count
+ self.frame_count=0
self.stream_id=stream_id
- def get_fps(self):
- end_time=time.time()
- if(self.is_first):
- self.start_time=end_time
- self.is_first=False
- if(end_time-self.start_time>5):
- print("**********************FPS*****************************************")
- print("Fps of stream",self.stream_id,"is ", float(self.frame_count)/5.0)
- self.frame_count=0
- self.start_time=end_time
+
+ def update_fps(self):
+ end_time = time.time()
+ if self.is_first:
+ self.start_time = end_time
+ self.is_first = False
else:
- self.frame_count=self.frame_count+1
+ global fps_mutex
+ with fps_mutex:
+ self.frame_count = self.frame_count + 1
+
+ def get_fps(self):
+ end_time = time.time()
+ with fps_mutex:
+ stream_fps = float(self.frame_count/(end_time - self.start_time))
+ self.frame_count = 0
+ self.start_time = end_time
+ return round(stream_fps, 2)
+
def print_data(self):
print('frame_count=',self.frame_count)
print('start_time=',self.start_time)
+class PERF_DATA:
+ def __init__(self, num_streams=1):
+ self.perf_dict = {}
+ self.all_stream_fps = {}
+ for i in range(num_streams):
+ self.all_stream_fps["stream{0}".format(i)]=GETFPS(i)
+
+ def perf_print_callback(self):
+ self.perf_dict = {stream_index:stream.get_fps() for (stream_index, stream) in self.all_stream_fps.items()}
+ print ("\n**PERF: ", self.perf_dict, "\n")
+ return True
+
+ def update_fps(self, stream_index):
+ self.all_stream_fps[stream_index].update_fps()
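For context on the FPS.py change above: frame counts are now incremented under a shared lock by the probe thread, while a separate reader resets them. The pattern can be exercised in isolation; a minimal, self-contained sketch (the class and names below are illustrative, not the module's API):

```python
import time
from threading import Lock, Thread

class StreamFPS:
    """Minimal stand-in for the locking pattern in GETFPS (illustrative only)."""

    def __init__(self):
        self.lock = Lock()
        self.frame_count = 0
        self.start_time = time.time()

    def update_fps(self):
        # Called once per frame from a buffer-probe thread.
        with self.lock:
            self.frame_count += 1

    def get_fps(self):
        # Called periodically; reads and resets the counter under the lock.
        end_time = time.time()
        with self.lock:
            fps = self.frame_count / max(end_time - self.start_time, 1e-6)
            self.frame_count = 0
            self.start_time = end_time
        return round(fps, 2)

counter = StreamFPS()
workers = [Thread(target=lambda: [counter.update_fps() for _ in range(1000)])
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter.frame_count)  # 4000: no updates lost across threads
```

In the real module the reader runs in the GLib main loop every 5 seconds rather than inline.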
diff --git a/apps/common/bus_call.py b/apps/common/bus_call.py
index 3412b5e..37073c8 100644
--- a/apps/common/bus_call.py
+++ b/apps/common/bus_call.py
@@ -18,7 +18,7 @@
import gi
import sys
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
+from gi.repository import Gst
def bus_call(bus, message, loop):
t = message.type
if t == Gst.MessageType.EOS:
diff --git a/apps/deepstream-imagedata-multistream-redaction/README b/apps/deepstream-imagedata-multistream-redaction/README
index daad6c8..08335a2 100755
--- a/apps/deepstream-imagedata-multistream-redaction/README
+++ b/apps/deepstream-imagedata-multistream-redaction/README
@@ -16,8 +16,8 @@
################################################################################
Prerequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
- NumPy package
- OpenCV package
@@ -39,10 +39,8 @@ Yet, we need to install the introspection typelib package:
$ sudo apt-get install gobject-introspection gir1.2-gst-rtsp-server-1.0
Download Peoplenet model:
- $ cd /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models
- $ mkdir -p ../../models/tao_pretrained_models/peoplenet && wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.1/files/resnet34_peoplenet_pruned.etlt \
- -O ../../models/tao_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
- $ wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.1/files/labels.txt -O ../../../configs/tao_pretrained_models/labels_peoplenet.txt
+    Please follow the instructions in the README.md located at /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/README.md
+    to download the latest supported PeopleNet model (V2.5 for this release)
To run:
  $ python3 deepstream_imagedata-multistream_redaction.py -i <uri1> [uri2] ... [uriN] -c {H264,H265} -b BITRATE
diff --git a/apps/deepstream-imagedata-multistream-redaction/config_infer_primary_peoplenet.txt b/apps/deepstream-imagedata-multistream-redaction/config_infer_primary_peoplenet.txt
index 89963cd..8f9fbf3 100644
--- a/apps/deepstream-imagedata-multistream-redaction/config_infer_primary_peoplenet.txt
+++ b/apps/deepstream-imagedata-multistream-redaction/config_infer_primary_peoplenet.txt
@@ -19,9 +19,10 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
-tlt-encoded-model=../../../../samples/models/tao_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
-labelfile-path=../../../../samples/configs/tao_pretrained_models/labels_peoplenet.txt
-model-engine-file=../../../../samples/models/tao_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
+tlt-encoded-model=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.etlt
+labelfile-path=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/labels.txt
+model-engine-file=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
+int8-calib-file=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.txt
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=1
diff --git a/apps/deepstream-imagedata-multistream-redaction/deepstream_imagedata-multistream_redaction.py b/apps/deepstream-imagedata-multistream-redaction/deepstream_imagedata-multistream_redaction.py
index e6f52bd..1814f25 100644
--- a/apps/deepstream-imagedata-multistream-redaction/deepstream_imagedata-multistream_redaction.py
+++ b/apps/deepstream-imagedata-multistream-redaction/deepstream_imagedata-multistream_redaction.py
@@ -26,8 +26,7 @@
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
-from gi.repository import GObject, Gst, GstRtspServer
-from gi.repository import GLib
+from gi.repository import GLib, Gst, GstRtspServer
from ctypes import *
import time
import sys
@@ -36,7 +35,7 @@
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
-from common.FPS import GETFPS
+from common.FPS import PERF_DATA
import numpy as np
import pyds
import cv2
@@ -44,7 +43,7 @@
import os.path
from os import path
-fps_streams = {}
+perf_data = None
frame_count = {}
saved_count = {}
global PGIE_CLASS_ID_PERSON
@@ -152,8 +151,10 @@ def tiler_sink_pad_buffer_probe(pad, info, u_data):
print("Frame Number=", frame_number, "Number of Objects=", num_rects, "Face_count=",
obj_counter[PGIE_CLASS_ID_FACE], "Person_count=", obj_counter[PGIE_CLASS_ID_PERSON])
- # Get frame rate through this probe
- fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()
+ # Update frame rate through this probe
+ stream_index = "stream{0}".format(frame_meta.pad_index)
+ global perf_data
+ perf_data.update_fps(stream_index)
if save_image:
img_path = "{}/stream_{}/frame_{}.jpg".format(folder_name, frame_meta.pad_index, frame_number)
cv2.imwrite(img_path, frame_copy)
@@ -247,12 +248,11 @@ def create_source_bin(index, uri):
return None
return nbin
-
def main(uri_inputs,codec,bitrate ):
# Check input arguments
number_sources = len(uri_inputs)
- for i in range(0, number_sources ):
- fps_streams["stream{0}".format(i)] = GETFPS(i)
+ global perf_data
+ perf_data = PERF_DATA(number_sources)
global folder_name
folder_name = "out_crops"
@@ -264,7 +264,6 @@ def main(uri_inputs,codec,bitrate ):
os.mkdir(folder_name)
print("Frames will be saved in ", folder_name)
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements */
@@ -352,7 +351,7 @@ def main(uri_inputs,codec,bitrate ):
if is_aarch64():
encoder.set_property('preset-level', 1)
encoder.set_property('insert-sps-pps', 1)
- encoder.set_property('bufapi-version', 1)
+ #encoder.set_property('bufapi-version', 1)
# Make the payload-encode video into RTP packets
if codec == "H264":
@@ -430,7 +429,7 @@ def main(uri_inputs,codec,bitrate ):
rtppay.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
@@ -454,6 +453,8 @@ def main(uri_inputs,codec,bitrate ):
sys.stderr.write(" Unable to get sink pad \n")
else:
tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)
+ # perf callback function to print fps every 5 sec
+ GLib.timeout_add(5000, perf_data.perf_print_callback)
print("Starting pipeline \n")
diff --git a/apps/deepstream-imagedata-multistream/README b/apps/deepstream-imagedata-multistream/README
index 8df8b04..8c07500 100755
--- a/apps/deepstream-imagedata-multistream/README
+++ b/apps/deepstream-imagedata-multistream/README
@@ -16,8 +16,8 @@
################################################################################
Prerequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
- NumPy package
- OpenCV package
diff --git a/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py b/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py
index 24f90f5..b8ef50d 100755
--- a/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py
+++ b/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py
@@ -24,8 +24,7 @@
import configparser
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
-from gi.repository import GLib
+from gi.repository import GLib, Gst
from ctypes import *
import time
import sys
@@ -33,7 +32,7 @@
import platform
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
-from common.FPS import GETFPS
+from common.FPS import PERF_DATA
import numpy as np
import pyds
import cv2
@@ -41,7 +40,7 @@
import os.path
from os import path
-fps_streams = {}
+perf_data = None
frame_count = {}
saved_count = {}
global PGIE_CLASS_ID_VEHICLE
@@ -137,8 +136,10 @@ def tiler_sink_pad_buffer_probe(pad, info, u_data):
print("Frame Number=", frame_number, "Number of Objects=", num_rects, "Vehicle_count=",
obj_counter[PGIE_CLASS_ID_VEHICLE], "Person_count=", obj_counter[PGIE_CLASS_ID_PERSON])
- # Get frame rate through this probe
- fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()
+ # update frame rate through this probe
+ stream_index = "stream{0}".format(frame_meta.pad_index)
+ global perf_data
+ perf_data.update_fps(stream_index)
if save_image:
img_path = "{}/stream_{}/frame_{}.jpg".format(folder_name, frame_meta.pad_index, frame_number)
cv2.imwrite(img_path, frame_copy)
@@ -210,7 +211,9 @@ def decodebin_child_added(child_proxy, Object, name, user_data):
Object.connect("child-added", decodebin_child_added, user_data)
if "source" in name:
- Object.set_property("drop-on-latency", True)
+ source_element = child_proxy.get_by_name("source")
+ if source_element.find_property('drop-on-latency') != None:
+ Object.set_property("drop-on-latency", True)
def create_source_bin(index, uri):
print("Creating source bin")
@@ -248,15 +251,14 @@ def create_source_bin(index, uri):
return None
return nbin
-
def main(args):
# Check input arguments
if len(args) < 2:
        sys.stderr.write("usage: %s <uri1> [uri2] ... [uriN] \n" % args[0])
sys.exit(1)
- for i in range(0, len(args) - 2):
- fps_streams["stream{0}".format(i)] = GETFPS(i)
+ global perf_data
+ perf_data = PERF_DATA(len(args) - 2)
number_sources = len(args) - 2
global folder_name
@@ -268,7 +270,6 @@ def main(args):
os.mkdir(folder_name)
print("Frames will be saved in ", folder_name)
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements */
@@ -404,7 +405,7 @@ def main(args):
nvosd.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
@@ -414,6 +415,8 @@ def main(args):
sys.stderr.write(" Unable to get src pad \n")
else:
tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)
+ # perf callback function to print fps every 5 sec
+ GLib.timeout_add(5000, perf_data.perf_print_callback)
# List the sources
print("Now playing...")
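The wiring above, per-frame `perf_data.update_fps("streamN")` from the buffer probe plus a 5-second `GLib.timeout_add` printer, boils down to a per-stream counter dictionary. A self-contained sketch with an illustrative stand-in for `PERF_DATA` (the class below and its zero-interval guard are this sketch's own, not the module's):

```python
import time

class PerfData:
    """Illustrative stand-in for common.FPS.PERF_DATA (not the real class)."""

    def __init__(self, num_streams=1):
        self.frame_counts = {"stream{0}".format(i): 0 for i in range(num_streams)}
        self.start_time = time.time()

    def update_fps(self, stream_index):
        # Called from the buffer probe with "stream0", "stream1", ...
        self.frame_counts[stream_index] += 1

    def perf_print_callback(self):
        # Returning True keeps a GLib.timeout_add callback scheduled.
        elapsed = max(time.time() - self.start_time, 1e-6)  # guard a zero interval
        fps = {k: round(v / elapsed, 2) for k, v in self.frame_counts.items()}
        print("**PERF:", fps)
        return True

perf = PerfData(num_streams=2)
for _ in range(10):
    perf.update_fps("stream0")
perf.update_fps("stream1")
assert perf.perf_print_callback() is True  # GLib would reschedule it
```

The real `PERF_DATA` additionally resets each counter on every read so the printed value is a rate over the last interval, not a running average.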
diff --git a/apps/deepstream-nvdsanalytics/README b/apps/deepstream-nvdsanalytics/README
index 99649d5..fc83057 100755
--- a/apps/deepstream-nvdsanalytics/README
+++ b/apps/deepstream-nvdsanalytics/README
@@ -16,8 +16,8 @@
################################################################################
Prerequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
To run:
diff --git a/apps/deepstream-nvdsanalytics/deepstream_nvdsanalytics.py b/apps/deepstream-nvdsanalytics/deepstream_nvdsanalytics.py
index 8270de3..8dc7bb5 100755
--- a/apps/deepstream-nvdsanalytics/deepstream_nvdsanalytics.py
+++ b/apps/deepstream-nvdsanalytics/deepstream_nvdsanalytics.py
@@ -22,8 +22,7 @@
import gi
import configparser
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
-from gi.repository import GLib
+from gi.repository import GLib, Gst
from ctypes import *
import time
import sys
@@ -31,11 +30,11 @@
import platform
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
-from common.FPS import GETFPS
+from common.FPS import PERF_DATA
import pyds
-fps_streams={}
+perf_data = None
MAX_DISPLAY_LEN=64
PGIE_CLASS_ID_VEHICLE = 0
@@ -139,8 +138,10 @@ def nvanalytics_src_pad_buffer_probe(pad,info,u_data):
break
print("Frame Number=", frame_number, "stream id=", frame_meta.pad_index, "Number of Objects=",num_rects,"Vehicle_count=",obj_counter[PGIE_CLASS_ID_VEHICLE],"Person_count=",obj_counter[PGIE_CLASS_ID_PERSON])
- # Get frame rate through this probe
- fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()
+ # Update frame rate through this probe
+ stream_index = "stream{0}".format(frame_meta.pad_index)
+ global perf_data
+ perf_data.update_fps(stream_index)
try:
l_frame=l_frame.next
except StopIteration:
@@ -222,12 +223,11 @@ def main(args):
        sys.stderr.write("usage: %s <uri1> [uri2] ... [uriN]\n" % args[0])
sys.exit(1)
- for i in range(0,len(args)-1):
- fps_streams["stream{0}".format(i)]=GETFPS(i)
+ global perf_data
+ perf_data = PERF_DATA(len(args) - 1)
number_sources=len(args)-1
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements */
@@ -408,7 +408,7 @@ def main(args):
queue7.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect ("message", bus_call, loop)
@@ -417,6 +417,8 @@ def main(args):
sys.stderr.write(" Unable to get src pad \n")
else:
nvanalytics_src_pad.add_probe(Gst.PadProbeType.BUFFER, nvanalytics_src_pad_buffer_probe, 0)
+ # perf callback function to print fps every 5 sec
+ GLib.timeout_add(5000, perf_data.perf_print_callback)
# List the sources
print("Now playing...")
diff --git a/apps/deepstream-opticalflow/README b/apps/deepstream-opticalflow/README
index 2d418fb..32dbb8c 100755
--- a/apps/deepstream-opticalflow/README
+++ b/apps/deepstream-opticalflow/README
@@ -16,8 +16,8 @@
################################################################################
Prerequisites:
-- DeepStreamSDK 6.0.1
-- Python 3
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
- NumPy package
- OpenCV package
diff --git a/apps/deepstream-opticalflow/deepstream-opticalflow.py b/apps/deepstream-opticalflow/deepstream-opticalflow.py
index be7233f..750a592 100755
--- a/apps/deepstream-opticalflow/deepstream-opticalflow.py
+++ b/apps/deepstream-opticalflow/deepstream-opticalflow.py
@@ -29,7 +29,7 @@
import gi
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
+from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
@@ -208,7 +208,6 @@ def main(args):
os.mkdir(folder_name)
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements */
@@ -344,7 +343,7 @@ def main(args):
container.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
diff --git a/apps/deepstream-preprocess-test/README b/apps/deepstream-preprocess-test/README
new file mode 100644
index 0000000..9c22fa6
--- /dev/null
+++ b/apps/deepstream-preprocess-test/README
@@ -0,0 +1,70 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+Prerequisites:
+- DeepStreamSDK 6.1
+- Python 3.8
+- Gst-python
+
+To run:
+  $ python3 deepstream_preprocess_test.py -i <uri1> [uri2] ... [uriN]
+e.g.
+  $ python3 deepstream_preprocess_test.py -i file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4
+  $ python3 deepstream_preprocess_test.py -i rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2
+
+This document describes the sample deepstream-preprocess-test application.
+
+* Use multiple sources in the pipeline.
+* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
+ supported container format, and any codec can be used as input.
+* Configure the stream-muxer to generate a batch of frames and infer on the
+ batch for better resource utilization.
+* Extract the stream metadata, which contains useful information about the
+ frames in the batched buffer.
+* Perform per-group custom preprocessing on the provided ROIs.
+* Prepare a raw tensor for inferencing.
+* Let nvinfer skip preprocessing and infer from the input tensor meta.
+
+Note: The current config file supports at most 12 ROIs. To increase the ROI count, increase the first dimension of `network-input-shape=12;3;368;640` to the required number. In the current config file `config_preprocess.txt` there are 3 ROIs per source, hence a total of 12 ROIs across all four sources. The total number of ROIs from all sources must not exceed the first dimension specified in the `network-input-shape` param.
+
+Refer to the deepstream-test3 sample documentation for an example of simple
+multi-stream inference, bounding-box overlay, and rendering.
+
+This sample accepts one or more H.264/H.265 video streams as input. It creates
+a source bin for each input and connects the bins to an instance of the
+"nvstreammux" element, which forms the batch of frames.
+
+Then, the "nvdspreprocess" plugin preprocesses the batched frames and prepares a raw
+tensor for inferencing, which is attached as user meta at the batch level. Users can
+provide a custom preprocessing library with custom per-group transformation
+functions and a custom tensor-preparation function.
+
+Then, "nvinfer" uses the preprocessed raw tensor from the metadata for batched
+inferencing. The batched buffer is composited into a 2D tile array using
+"nvmultistreamtiler".
+
+The rest of the pipeline is similar to the deepstream-test3 sample.
+
+NOTE: To reuse engine files generated in previous runs, update the
+model-engine-file parameter in the nvinfer config file to an existing engine file
+
+
+NOTE:
+1. For optimal performance, set the nvinfer batch-size in the nvinfer config file to the
+   same value as the preprocess batch-size (network-input-shape[0]) in the nvdspreprocess config file.
+2. Currently, preprocessing is supported only for the primary GIE.
+3. Modify config_preprocess.txt as per the use case.
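The ROI-count constraint described in the note above can be checked mechanically. A sketch assuming the key layout shown in config_preprocess.txt, where each ROI is four numbers (left;top;width;height); this is an illustration, not the nvdspreprocess parser:

```python
# Validate that the total ROIs across all sources fit the preprocess batch.
network_input_shape = "12;3;368;640"  # first dimension = max total ROIs
roi_params = {
    "roi-params-src-0": "0;540;900;500;960;0;900;500;0;0;540;900",
    "roi-params-src-1": "0;540;900;500;960;0;900;500;0;0;540;900",
    "roi-params-src-2": "0;540;900;500;960;0;900;500;0;0;540;900",
    "roi-params-src-3": "0;540;900;500;960;0;900;500;0;0;540;900",
}

max_rois = int(network_input_shape.split(";")[0])
# Each ROI contributes 4 numbers, so 12 numbers per source = 3 ROIs.
total_rois = sum(len(v.split(";")) // 4 for v in roi_params.values())
print(total_rois, max_rois)  # 12 12
assert total_rois <= max_rois, "total ROIs exceed network-input-shape[0]"
```

If a fifth source (or a fourth ROI per source) were added, the first dimension of `network-input-shape` would have to grow accordingly, along with the nvinfer batch-size per note 1.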
diff --git a/apps/deepstream-preprocess-test/config_preprocess.txt b/apps/deepstream-preprocess-test/config_preprocess.txt
new file mode 100644
index 0000000..8ebe5aa
--- /dev/null
+++ b/apps/deepstream-preprocess-test/config_preprocess.txt
@@ -0,0 +1,65 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+# The values in the config file are overridden by values set through GObject
+# properties.
+
+[property]
+enable=1
+target-unique-ids=1
+ # 0=NCHW, 1=NHWC, 2=CUSTOM
+network-input-order=0
+
+network-input-order=0
+processing-width=640
+processing-height=368
+scaling-buf-pool-size=6
+tensor-buf-pool-size=6
+ # tensor shape based on network-input-order
+network-input-shape=12;3;368;640
+ # 0=RGB, 1=BGR, 2=GRAY
+
+network-color-format=0
+ # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
+tensor-data-type=0
+tensor-name=input_1
+ # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
+scaling-pool-memory-type=0
+ # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
+scaling-pool-compute-hw=0
+ # Scaling Interpolation method
+ # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
+ # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
+ # 6=NvBufSurfTransformInter_Default
+scaling-filter=0
+custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
+custom-tensor-preparation-function=CustomTensorPreparation
+
+[user-configs]
+pixel-normalization-factor=0.003921568
+#mean-file=
+#offsets=
+
+
+[group-0]
+src-ids=0;1;2;3
+custom-input-transformation-function=CustomAsyncTransformation
+process-on-roi=1
+roi-params-src-0=0;540;900;500;960;0;900;500;0;0;540;900;
+roi-params-src-1=0;540;900;500;960;0;900;500;0;0;540;900;
+roi-params-src-2=0;540;900;500;960;0;900;500;0;0;540;900;
+roi-params-src-3=0;540;900;500;960;0;900;500;0;0;540;900;
diff --git a/apps/deepstream-preprocess-test/deepstream_preprocess_test.py b/apps/deepstream-preprocess-test/deepstream_preprocess_test.py
new file mode 100644
index 0000000..e06e4cd
--- /dev/null
+++ b/apps/deepstream-preprocess-test/deepstream_preprocess_test.py
@@ -0,0 +1,467 @@
+#!/usr/bin/env python3
+
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+import sys
+
+sys.path.append("../")
+from common.bus_call import bus_call
+from common.is_aarch_64 import is_aarch64
+import pyds
+import platform
+import math
+import time
+from ctypes import *
+import gi
+
+gi.require_version("Gst", "1.0")
+gi.require_version("GstRtspServer", "1.0")
+from gi.repository import Gst, GstRtspServer, GLib
+import configparser
+
+import argparse
+
+from common.FPS import PERF_DATA
+
+perf_data = None
+
+MAX_DISPLAY_LEN = 64
+PGIE_CLASS_ID_VEHICLE = 0
+PGIE_CLASS_ID_BICYCLE = 1
+PGIE_CLASS_ID_PERSON = 2
+PGIE_CLASS_ID_ROADSIGN = 3
+MUXER_OUTPUT_WIDTH = 1920
+MUXER_OUTPUT_HEIGHT = 1080
+MUXER_BATCH_TIMEOUT_USEC = 4000000
+TILED_OUTPUT_WIDTH = 1280
+TILED_OUTPUT_HEIGHT = 720
+GST_CAPS_FEATURES_NVMM = "memory:NVMM"
+OSD_PROCESS_MODE = 0
+OSD_DISPLAY_TEXT = 0
+pgie_classes_str = ["Vehicle", "TwoWheeler", "Person", "RoadSign"]
+
+# pgie_src_pad_buffer_probe will extract metadata received on tiler sink pad
+# and update params for drawing rectangle, object information etc.
+
+
+def pgie_src_pad_buffer_probe(pad, info, u_data):
+ frame_number = 0
+ num_rects = 0
+ gst_buffer = info.get_buffer()
+ if not gst_buffer:
+        print("Unable to get GstBuffer")
+        return Gst.PadProbeReturn.OK
+
+ # Retrieve batch metadata from the gst_buffer
+ # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
+ # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
+ batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
+ l_frame = batch_meta.frame_meta_list
+ while l_frame is not None:
+ try:
+ # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
+ # The casting is done by pyds.NvDsFrameMeta.cast()
+ # The casting also keeps ownership of the underlying memory
+ # in the C code, so the Python garbage collector will leave
+ # it alone.
+
+ frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
+
+ except StopIteration:
+ break
+
+ frame_number = frame_meta.frame_num
+ l_obj = frame_meta.obj_meta_list
+ num_rects = frame_meta.num_obj_meta
+ obj_counter = {
+ PGIE_CLASS_ID_VEHICLE: 0,
+ PGIE_CLASS_ID_PERSON: 0,
+ PGIE_CLASS_ID_BICYCLE: 0,
+ PGIE_CLASS_ID_ROADSIGN: 0,
+ }
+ while l_obj is not None:
+ try:
+ # Casting l_obj.data to pyds.NvDsObjectMeta
+ obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
+ except StopIteration:
+ break
+ obj_counter[obj_meta.class_id] += 1
+ try:
+ l_obj = l_obj.next
+ except StopIteration:
+ break
+
+ print(
+ "Frame Number=",
+ frame_number,
+ "Number of Objects=",
+ num_rects,
+ "Vehicle_count=",
+ obj_counter[PGIE_CLASS_ID_VEHICLE],
+ "Person_count=",
+ obj_counter[PGIE_CLASS_ID_PERSON],
+ )
+
+ # update frame rate through this probe
+ stream_index = "stream{0}".format(frame_meta.pad_index)
+ global perf_data
+ perf_data.update_fps(stream_index)
+
+ try:
+ l_frame = l_frame.next
+ except StopIteration:
+ break
+
+ return Gst.PadProbeReturn.OK
+
+
+def cb_newpad(decodebin, decoder_src_pad, data):
+ print("In cb_newpad\n")
+ caps = decoder_src_pad.get_current_caps()
+ gststruct = caps.get_structure(0)
+ gstname = gststruct.get_name()
+ source_bin = data
+ features = caps.get_features(0)
+
+ # Need to check if the pad created by the decodebin is for video and not
+ # audio.
+ print("gstname=", gstname)
+ if "video" in gstname:
+ # Link the decodebin pad only if decodebin has picked nvidia
+ # decoder plugin nvdec_*. We do this by checking if the pad caps contain
+ # NVMM memory features.
+ print("features=", features)
+ if features.contains("memory:NVMM"):
+ # Get the source bin ghost pad
+ bin_ghost_pad = source_bin.get_static_pad("src")
+ if not bin_ghost_pad.set_target(decoder_src_pad):
+ sys.stderr.write(
+ "Failed to link decoder src pad to source bin ghost pad\n"
+ )
+ else:
+ sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")
+
+
+def decodebin_child_added(child_proxy, Object, name, user_data):
+ print("Decodebin child added:", name, "\n")
+ if name.find("decodebin") != -1:
+ Object.connect("child-added", decodebin_child_added, user_data)
+
+
+def create_source_bin(index, uri):
+ print("Creating source bin")
+
+ # Create a source GstBin to abstract this bin's content from the rest of the
+ # pipeline
+ bin_name = f"source-bin-{index:02}"
+ print(bin_name)
+ nbin = Gst.Bin.new(bin_name)
+ if not nbin:
+ sys.stderr.write(" Unable to create source bin \n")
+
+ # Source element for reading from the uri.
+ # We will use decodebin and let it figure out the container format of the
+ # stream and the codec and plug the appropriate demux and decode plugins.
+ uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
+ if not uri_decode_bin:
+ sys.stderr.write(" Unable to create uri decode bin \n")
+ # We set the input uri to the source element
+ uri_decode_bin.set_property("uri", uri)
+    # Connect to the "pad-added" signal of the decodebin which generates a
+    # callback once a new pad for raw data has been created by the decodebin
+ uri_decode_bin.connect("pad-added", cb_newpad, nbin)
+ uri_decode_bin.connect("child-added", decodebin_child_added, nbin)
+
+ # We need to create a ghost pad for the source bin which will act as a proxy
+ # for the video decoder src pad. The ghost pad will not have a target right
+ # now. Once the decode bin creates the video decoder and generates the
+ # cb_newpad callback, we will set the ghost pad target to the video decoder
+ # src pad.
+ Gst.Bin.add(nbin, uri_decode_bin)
+ bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
+ if not bin_pad:
+ sys.stderr.write(" Failed to add ghost pad in source bin \n")
+ return None
+ return nbin
+
+def main(args):
+ # Check input arguments
+ global perf_data
+ perf_data = PERF_DATA(len(args))
+ number_sources = len(args)
+ # Standard GStreamer initialization
+ Gst.init(None)
+
+    # Create gstreamer elements
+ # Create Pipeline element that will form a connection of other elements
+ print("Creating Pipeline \n ")
+ pipeline = Gst.Pipeline()
+ is_live = False
+
+ if not pipeline:
+ sys.stderr.write(" Unable to create Pipeline \n")
+    print("Creating streammux \n ")
+
+ # Create nvstreammux instance to form batches from one or more sources.
+ streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
+ if not streammux:
+ sys.stderr.write(" Unable to create NvStreamMux \n")
+
+ pipeline.add(streammux)
+ for i in range(number_sources):
+ print("Creating source_bin ", i, " \n ")
+ uri_name = args[i]
+        if uri_name.startswith("rtsp://"):
+ is_live = True
+ source_bin = create_source_bin(i, uri_name)
+ if not source_bin:
+ sys.stderr.write("Unable to create source bin \n")
+ pipeline.add(source_bin)
+ padname = f"sink_{i}"
+ sinkpad = streammux.get_request_pad(padname)
+ if not sinkpad:
+ sys.stderr.write("Unable to create sink pad bin \n")
+ srcpad = source_bin.get_static_pad("src")
+ if not srcpad:
+ sys.stderr.write("Unable to create src pad bin \n")
+ srcpad.link(sinkpad)
+ preprocess = Gst.ElementFactory.make("nvdspreprocess", "preprocess-plugin")
+ if not preprocess:
+ sys.stderr.write(" Unable to create preprocess \n")
+ print("Creating Pgie \n ")
+ pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
+ if not pgie:
+ sys.stderr.write(" Unable to create pgie \n")
+ print("Creating tiler \n ")
+ tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
+ if not tiler:
+ sys.stderr.write(" Unable to create tiler \n")
+ print("Creating nvvidconv \n ")
+ nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
+ if not nvvidconv:
+ sys.stderr.write(" Unable to create nvvidconv \n")
+ print("Creating nvosd \n ")
+ nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
+ if not nvosd:
+ sys.stderr.write(" Unable to create nvosd \n")
+ nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
+ if not nvvidconv_postosd:
+ sys.stderr.write(" Unable to create nvvidconv_postosd \n")
+
+ # Create a caps filter
+ caps = Gst.ElementFactory.make("capsfilter", "filter")
+ caps.set_property(
+ "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420")
+ )
+
+ # Make the encoder
+ if codec == "H264":
+ encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
+ print("Creating H264 Encoder")
+ elif codec == "H265":
+ encoder = Gst.ElementFactory.make("nvv4l2h265enc", "encoder")
+ print("Creating H265 Encoder")
+ if not encoder:
+ sys.stderr.write(" Unable to create encoder")
+ encoder.set_property("bitrate", bitrate)
+ if is_aarch64():
+ encoder.set_property("preset-level", 1)
+ encoder.set_property("insert-sps-pps", 1)
+ #encoder.set_property("bufapi-version", 1)
+
+ # Make the payload-encode video into RTP packets
+ if codec == "H264":
+ rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
+ print("Creating H264 rtppay")
+ elif codec == "H265":
+ rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
+ print("Creating H265 rtppay")
+ if not rtppay:
+ sys.stderr.write(" Unable to create rtppay")
+
+ # Make the UDP sink
+ updsink_port_num = 5400
+ sink = Gst.ElementFactory.make("udpsink", "udpsink")
+ if not sink:
+ sys.stderr.write(" Unable to create udpsink")
+
+    queue1 = Gst.ElementFactory.make("queue", "queue1")
+    queue2 = Gst.ElementFactory.make("queue", "queue2")
+    queue3 = Gst.ElementFactory.make("queue", "queue3")
+    queue4 = Gst.ElementFactory.make("queue", "queue4")
+    queue5 = Gst.ElementFactory.make("queue", "queue5")
+    queue6 = Gst.ElementFactory.make("queue", "queue6")
+    queue7 = Gst.ElementFactory.make("queue", "queue7")
+    queue8 = Gst.ElementFactory.make("queue", "queue8")
+    queue9 = Gst.ElementFactory.make("queue", "queue9")
+    queue10 = Gst.ElementFactory.make("queue", "queue10")
+ pipeline.add(queue1)
+ pipeline.add(queue2)
+ pipeline.add(queue3)
+ pipeline.add(queue4)
+ pipeline.add(queue5)
+ pipeline.add(queue6)
+ pipeline.add(queue7)
+ pipeline.add(queue8)
+ pipeline.add(queue9)
+ pipeline.add(queue10)
+
+ sink.set_property("host", "224.224.255.255")
+ sink.set_property("port", updsink_port_num)
+ sink.set_property("async", False)
+ sink.set_property("sync", 1)
+
+    streammux.set_property("width", MUXER_OUTPUT_WIDTH)
+    streammux.set_property("height", MUXER_OUTPUT_HEIGHT)
+    streammux.set_property("batch-size", 1)
+    streammux.set_property("batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC)
+ preprocess.set_property("config-file", "config_preprocess.txt")
+ pgie.set_property("config-file-path", "dstest1_pgie_config.txt")
+
+ pgie_batch_size = pgie.get_property("batch-size")
+ pgie.set_property("input-tensor-meta", True)
+ if pgie_batch_size != number_sources:
+ print(
+ "WARNING: Overriding infer-config batch-size",
+ pgie_batch_size,
+ " with number of sources ",
+ number_sources,
+ " \n",
+ )
+ pgie.set_property("batch-size", number_sources)
+
+ print("Adding elements to Pipeline \n")
+ tiler_rows = int(math.sqrt(number_sources))
+ tiler_columns = int(math.ceil((1.0 * number_sources) / tiler_rows))
+ tiler.set_property("rows", tiler_rows)
+ tiler.set_property("columns", tiler_columns)
+ tiler.set_property("width", TILED_OUTPUT_WIDTH)
+ tiler.set_property("height", TILED_OUTPUT_HEIGHT)
+ sink.set_property("qos", 0)
+ pipeline.add(preprocess)
+ pipeline.add(pgie)
+ pipeline.add(tiler)
+ pipeline.add(nvvidconv)
+ pipeline.add(nvosd)
+ pipeline.add(nvvidconv_postosd)
+ pipeline.add(caps)
+ pipeline.add(encoder)
+ pipeline.add(rtppay)
+ pipeline.add(sink)
+
+ streammux.link(queue1)
+ queue1.link(preprocess)
+ preprocess.link(queue2)
+ queue2.link(pgie)
+ pgie.link(queue3)
+ queue3.link(tiler)
+ tiler.link(queue4)
+ queue4.link(nvvidconv)
+ nvvidconv.link(queue5)
+ queue5.link(nvosd)
+ nvosd.link(queue6)
+ queue6.link(nvvidconv_postosd)
+ nvvidconv_postosd.link(queue7)
+ queue7.link(caps)
+ caps.link(queue8)
+ queue8.link(encoder)
+ encoder.link(queue9)
+ queue9.link(rtppay)
+ rtppay.link(queue10)
+ queue10.link(sink)
+
+    # create an event loop and feed gstreamer bus messages to it
+ loop = GLib.MainLoop()
+ bus = pipeline.get_bus()
+ bus.add_signal_watch()
+ bus.connect("message", bus_call, loop)
+
+ # Start streaming
+ rtsp_port_num = 8554
+
+ server = GstRtspServer.RTSPServer.new()
+ server.props.service = str(rtsp_port_num)
+ server.attach(None)
+
+ factory = GstRtspServer.RTSPMediaFactory.new()
+ factory.set_launch(
+ f'( udpsrc name=pay0 port={updsink_port_num} buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=(string){codec}, payload=96 " )'
+ )
+ factory.set_shared(True)
+ server.get_mount_points().add_factory("/ds-test", factory)
+
+ pgie_src_pad = pgie.get_static_pad("src")
+ if not pgie_src_pad:
+ sys.stderr.write(" Unable to get src pad \n")
+ else:
+ pgie_src_pad.add_probe(
+ Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0
+ )
+ # perf callback function to print fps every 5 sec
+ GLib.timeout_add(5000, perf_data.perf_print_callback)
+
+ print(f"\n *** DeepStream: Launched RTSP Streaming at rtsp://localhost:{rtsp_port_num}/ds-test ***\n\n")
+
+ # start play back and listen to events
+ print("Starting pipeline \n")
+ pipeline.set_state(Gst.State.PLAYING)
+ try:
+ loop.run()
+ except BaseException:
+ pass
+ # cleanup
+ pipeline.set_state(Gst.State.NULL)
+
+
+def parse_args():
+ parser = argparse.ArgumentParser(description="RTSP Output Sample Application Help ")
+ parser.add_argument(
+ "-i",
+ "--input",
+        help="Path to input H264 elementary stream",
+ nargs="+",
+ required=True,
+ )
+ parser.add_argument(
+ "-c",
+ "--codec",
+ default="H264",
+ help="RTSP Streaming Codec H264/H265 , default=H264",
+ choices=["H264", "H265"],
+ )
+ parser.add_argument(
+ "-b", "--bitrate", default=4000000, help="Set the encoding bitrate ", type=int
+ )
+ # Check input arguments
+ if len(sys.argv) == 1:
+ parser.print_help(sys.stderr)
+ sys.exit(1)
+ args = parser.parse_args()
+ global codec
+ global bitrate
+ global stream_path
+ codec = args.codec
+ bitrate = args.bitrate
+ stream_path = args.input
+ return stream_path
+
+
+if __name__ == "__main__":
+ stream_path = parse_args()
+ sys.exit(main(stream_path))
diff --git a/apps/deepstream-preprocess-test/dstest1_pgie_config.txt b/apps/deepstream-preprocess-test/dstest1_pgie_config.txt
new file mode 100644
index 0000000..930dbfd
--- /dev/null
+++ b/apps/deepstream-preprocess-test/dstest1_pgie_config.txt
@@ -0,0 +1,76 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+# Following properties are mandatory when engine files are not specified:
+# int8-calib-file(Only in INT8)
+# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
+# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
+# ONNX: onnx-file
+#
+# Mandatory properties for detectors:
+# num-detected-classes
+#
+# Optional properties for detectors:
+# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
+# custom-lib-path,
+# parse-bbox-func-name
+#
+# Mandatory properties for classifiers:
+# classifier-threshold, is-classifier
+#
+# Optional properties for classifiers:
+# classifier-async-mode(Secondary mode only, Default=false)
+#
+# Optional properties in secondary mode:
+# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
+# input-object-min-width, input-object-min-height, input-object-max-width,
+# input-object-max-height
+#
+# Following properties are always recommended:
+# batch-size(Default=1)
+#
+# Other optional properties:
+# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
+# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
+# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
+# custom-lib-path, network-mode(Default=0 i.e FP32)
+#
+# The values in the config file are overridden by values set through GObject
+# properties.
+
+[property]
+gpu-id=0
+net-scale-factor=0.0039215697906911373
+model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
+proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
+model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
+labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
+int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
+force-implicit-batch-dim=1
+batch-size=1
+network-mode=1
+num-detected-classes=4
+interval=0
+gie-unique-id=1
+output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
+#scaling-filter=0
+#scaling-compute-hw=0
+
+[class-attrs-all]
+pre-cluster-threshold=0.2
+eps=0.2
+group-threshold=1
diff --git a/apps/deepstream-rtsp-in-rtsp-out/README b/apps/deepstream-rtsp-in-rtsp-out/README
index 3bd824c..376bf34 100755
--- a/apps/deepstream-rtsp-in-rtsp-out/README
+++ b/apps/deepstream-rtsp-in-rtsp-out/README
@@ -16,8 +16,8 @@
################################################################################
Prequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6+
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
- GstRtspServer
diff --git a/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py b/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py
index a7de46d..88fd89b 100755
--- a/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py
+++ b/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py
@@ -18,9 +18,9 @@
################################################################################
import sys
sys.path.append("../")
-import pyds
from common.bus_call import bus_call
from common.is_aarch_64 import is_aarch64
+import pyds
import platform
import math
import time
@@ -28,15 +28,11 @@
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
-from gi.repository import GObject, Gst, GstRtspServer, GLib
+from gi.repository import Gst, GstRtspServer, GLib
import configparser
import argparse
-from common.FPS import GETFPS
-
-fps_streams = {}
-
MAX_DISPLAY_LEN = 64
PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
@@ -112,8 +108,6 @@ def tiler_src_pad_buffer_probe(pad, info, u_data):
obj_counter[PGIE_CLASS_ID_PERSON],
)
- # Get frame rate through this probe
- fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()
try:
l_frame = l_frame.next
except StopIteration:
@@ -197,12 +191,9 @@ def create_source_bin(index, uri):
def main(args):
# Check input arguments
- for i in range(0, len(args)):
- fps_streams["stream{0}".format(i)] = GETFPS(i)
number_sources = len(args)
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements */
@@ -282,7 +273,7 @@ def main(args):
if is_aarch64():
encoder.set_property("preset-level", 1)
encoder.set_property("insert-sps-pps", 1)
- encoder.set_property("bufapi-version", 1)
+ #encoder.set_property("bufapi-version", 1)
# Make the payload-encode video into RTP packets
if codec == "H264":
@@ -357,7 +348,7 @@ def main(args):
rtppay.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
diff --git a/apps/deepstream-segmentation/README b/apps/deepstream-segmentation/README
index a9ae863..a3068b9 100644
--- a/apps/deepstream-segmentation/README
+++ b/apps/deepstream-segmentation/README
@@ -16,8 +16,8 @@
################################################################################
Prerequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
- NumPy package
- OpenCV package
@@ -26,6 +26,9 @@ To install required packages:
$ sudo apt update
$ sudo apt install python3-numpy python3-opencv -y
+If on Jetson, libgomp.so.1 must be added to LD_PRELOAD:
+ $ export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1
+
To run:
$ python3 deepstream_segmentation.py
diff --git a/apps/deepstream-segmentation/deepstream_segmentation.py b/apps/deepstream-segmentation/deepstream_segmentation.py
index 27f1ea5..cc0b92d 100755
--- a/apps/deepstream-segmentation/deepstream_segmentation.py
+++ b/apps/deepstream-segmentation/deepstream_segmentation.py
@@ -24,7 +24,7 @@
import math
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
+from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
import cv2
@@ -141,7 +141,6 @@ def main(args):
config_file = args[1]
num_sources = len(args) - 3
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements
@@ -246,7 +245,7 @@ def main(args):
else:
nvsegvisual.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
diff --git a/apps/deepstream-ssd-parser/README b/apps/deepstream-ssd-parser/README
index 79d8ece..a05631b 100644
--- a/apps/deepstream-ssd-parser/README
+++ b/apps/deepstream-ssd-parser/README
@@ -16,9 +16,9 @@
################################################################################
Prequisites:
-- DeepStreamSDK 6.0.1
+- DeepStreamSDK 6.1
- NVIDIA Triton Inference Server
-- Python 3.6
+- Python 3.8
- Gst-python
- NumPy
diff --git a/apps/deepstream-ssd-parser/deepstream_ssd_parser.py b/apps/deepstream-ssd-parser/deepstream_ssd_parser.py
index f231d30..8f1db26 100755
--- a/apps/deepstream-ssd-parser/deepstream_ssd_parser.py
+++ b/apps/deepstream-ssd-parser/deepstream_ssd_parser.py
@@ -24,7 +24,7 @@
sys.path.append("../")
import gi
gi.require_version("Gst", "1.0")
-from gi.repository import GObject, Gst
+from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
from ssd_parser import nvds_infer_parse_custom_tf_ssd, DetectionParam, NmsParam, BoxSizeParam
@@ -288,6 +288,8 @@ def pgie_src_pad_buffer_probe(pad, info, u_data):
add_obj_meta_to_frame(frame_object, batch_meta, frame_meta, label_names)
try:
+ # indicate inference is performed on the frame
+ frame_meta.bInferDone = True
l_frame = l_frame.next
except StopIteration:
break
@@ -301,7 +303,6 @@ def main(args):
sys.exit(1)
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements
@@ -417,7 +418,7 @@ def main(args):
container.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
diff --git a/apps/deepstream-test1-rtsp-out/README b/apps/deepstream-test1-rtsp-out/README
index 1c01dee..37fa464 100644
--- a/apps/deepstream-test1-rtsp-out/README
+++ b/apps/deepstream-test1-rtsp-out/README
@@ -16,8 +16,8 @@
################################################################################
Prequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
- GstRtspServer
diff --git a/apps/deepstream-test1-rtsp-out/deepstream_test1_rtsp_out.py b/apps/deepstream-test1-rtsp-out/deepstream_test1_rtsp_out.py
index ccd5b25..462c4f1 100755
--- a/apps/deepstream-test1-rtsp-out/deepstream_test1_rtsp_out.py
+++ b/apps/deepstream-test1-rtsp-out/deepstream_test1_rtsp_out.py
@@ -24,7 +24,7 @@
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
-from gi.repository import GObject, Gst, GstRtspServer
+from gi.repository import GLib, Gst, GstRtspServer
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
@@ -122,7 +122,6 @@ def osd_sink_pad_buffer_probe(pad,info,u_data):
def main(args):
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements
@@ -192,7 +191,7 @@ def main(args):
if is_aarch64():
encoder.set_property('preset-level', 1)
encoder.set_property('insert-sps-pps', 1)
- encoder.set_property('bufapi-version', 1)
+ #encoder.set_property('bufapi-version', 1)
# Make the payload-encode video into RTP packets
if codec == "H264":
@@ -265,7 +264,7 @@ def main(args):
rtppay.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect ("message", bus_call, loop)
diff --git a/apps/deepstream-test1-usbcam/README b/apps/deepstream-test1-usbcam/README
index d8b037d..01aa793 100644
--- a/apps/deepstream-test1-usbcam/README
+++ b/apps/deepstream-test1-usbcam/README
@@ -16,8 +16,8 @@
################################################################################
Prequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
To run the test app:
diff --git a/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py b/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py
index 39cf842..83fcda8 100755
--- a/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py
+++ b/apps/deepstream-test1-usbcam/deepstream_test_1_usb.py
@@ -21,7 +21,7 @@
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
+from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
@@ -125,7 +125,6 @@ def main(args):
sys.exit(1)
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements
@@ -256,7 +255,7 @@ def main(args):
nvosd.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect ("message", bus_call, loop)
diff --git a/apps/deepstream-test1/README b/apps/deepstream-test1/README
index bdd4571..f6bf17b 100644
--- a/apps/deepstream-test1/README
+++ b/apps/deepstream-test1/README
@@ -16,8 +16,8 @@
################################################################################
Prequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
To run the test app:
diff --git a/apps/deepstream-test1/deepstream_test_1.py b/apps/deepstream-test1/deepstream_test_1.py
index 67be84b..8d0e9e4 100755
--- a/apps/deepstream-test1/deepstream_test_1.py
+++ b/apps/deepstream-test1/deepstream_test_1.py
@@ -21,7 +21,7 @@
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
+from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
@@ -128,7 +128,6 @@ def main(args):
sys.exit(1)
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements
@@ -233,7 +232,7 @@ def main(args):
nvosd.link(sink)
# create an event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect ("message", bus_call, loop)
diff --git a/apps/deepstream-test2/README b/apps/deepstream-test2/README
index c37179e..8055b7e 100644
--- a/apps/deepstream-test2/README
+++ b/apps/deepstream-test2/README
@@ -16,8 +16,8 @@
################################################################################
Prequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
To run the test app:
diff --git a/apps/deepstream-test2/deepstream_test_2.py b/apps/deepstream-test2/deepstream_test_2.py
index 91970fa..8c686ea 100755
--- a/apps/deepstream-test2/deepstream_test_2.py
+++ b/apps/deepstream-test2/deepstream_test_2.py
@@ -24,7 +24,7 @@
import gi
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
+from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
@@ -170,7 +170,6 @@ def main(args):
# Standard GStreamer initialization
if(len(args)==3):
past_tracking_meta[0]=int(args[2])
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements
@@ -333,7 +332,7 @@ def main(args):
# create and event loop and feed gstreamer bus mesages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
diff --git a/apps/deepstream-test3/README b/apps/deepstream-test3/README
index 6decb48..14ac8c4 100755
--- a/apps/deepstream-test3/README
+++ b/apps/deepstream-test3/README
@@ -16,27 +16,132 @@
################################################################################
Prerequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- NVIDIA Triton Inference Server (optional)
+- Python 3.8
- Gst-python
+To set up Triton Inference Server: (optional)
+For x86_64 and Jetson Docker:
+ 1. Use the provided docker container and follow directions for
+ Triton Inference Server in the SDK README --
+ be sure to prepare the detector models.
+ 2. Run the docker with this Python Bindings directory mapped
+ 3. Install required Python packages inside the container:
+ $ apt update
+ $ apt install python3-gi python3-dev python3-gst-1.0 -y
+ $ pip3 install pathlib
+ 4. Build and install pyds bindings:
+ Follow the instructions in bindings README in this repo to build and install
+ pyds wheel for Ubuntu 20.04
+ 5. For Triton gRPC setup, please follow the instructions at below location:
+ /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc/README
+
+For Jetson without Docker:
+ 1. Follow instructions in the DeepStream SDK README to set up
+ Triton Inference Server:
+       1.1 Compile and install the nvdsinfer_customparser
+       1.2 Prepare at least the Triton detector models
+ 2. Build and install pyds bindings:
+ Follow the instructions in bindings README in this repo to build and install
+ pyds wheel for Ubuntu 20.04
+ 3. Clear the GStreamer cache if pipeline creation fails:
+ rm ~/.cache/gstreamer-1.0/*
+ 4. For Triton gRPC setup, please follow the instructions at below location:
+ /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc/README
+
+To set up the peoplenet model and configs (optional):
+Please follow instructions in the README located here: /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/README
+
+Also follow these instructions for multi-stream Triton support (optional):
+ 1. Update the max_batch_size in config_triton_infer_primary_peoplenet.txt to the maximum expected number of streams
+ 2. Regenerate engine file for peoplenet as described below using deepstream-app:
+ a. cd to /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models
+
+ b. Edit "primary-pgie" section in the deepstream_app_source1_peoplenet.txt file to reflect below:
+ "
+ enable=1
+ plugin-type=0
+ model-engine-file=../../models/tao_pretrained_models/peopleNet/V2.5/<.engine file>
+ batch-size=
+ config-file=config_infer_primary_peoplenet.txt
+ "
+       c. Make sure that you make corresponding changes in the config_infer_primary_peoplenet.txt file in the above dir.
+          For example:
+ "
+ tlt-model-key=tlt_encode
+ tlt-encoded-model=../../models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.etlt
+ labelfile-path=../../models/tao_pretrained_models/peopleNet/V2.5/labels.txt
+ model-engine-file=../../models/tao_pretrained_models/peopleNet/V2.5/<.engine file>
+ int8-calib-file=../../models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.txt
+ batch-size=16
+ "
+ d. While inside the dir /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/ , run the deepstream-app
+ as follows:
+ deepstream-app -c deepstream_app_source1_peoplenet.txt
+
+ This would generate the engine file required for the next step.
+
+ e. Create the following dir if not present:
+ sudo mkdir -p /opt/nvidia/deepstream/deepstream/samples/triton_model_repo/peoplenet/1/
+
+ f. Copy engine file from dir /opt/nvidia/deepstream/deepstream/samples/models/tao_pretrained_models/peopleNet/V2.5/
+ to
+ /opt/nvidia/deepstream/deepstream/samples/triton_model_repo/peoplenet/1/
+
+ g. Copy file config.pbtxt from deepstream-test3 dir to /opt/nvidia/deepstream/deepstream/samples/triton_model_repo/peoplenet/ dir
+
+ h. cd to /opt/nvidia/deepstream/deepstream/samples/triton_model_repo/peoplenet and make sure that config.pbtxt
+ has the correct "max_batch_size" set along with "default_model_filename" set to the newly moved engine file
+
+Note: For the gRPC case, set the grpc url according to the gRPC server configuration and make sure that the
+      labelfile_path points to the correct/expected labelfile
+
+
To run:
- $ python3 deepstream_test_3.py [uri2] ... [uriN]
+    $ python3 deepstream_test_3.py -i <uri1> [uri2] ... [uriN] [--no-display] [--silent]
e.g.
- $ python3 deepstream_test_3.py file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4
- $ python3 deepstream_test_3.py rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2
+ $ python3 deepstream_test_3.py -i file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4
+ $ python3 deepstream_test_3.py -i rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2 -s
+
+To run with peoplenet, test3 now supports 3 modes:
+
+ 1. nvinfer + peoplenet: this mode still uses TensorRT for inference.
+
+ $ python3 deepstream_test_3.py -i <uri1> [uri2] ... [uriN] --pgie nvinfer -c <config file> [--no-display] [--silent]
+
+ 2. nvinferserver + peoplenet : this mode uses Triton for inference.
+
+ $ python3 deepstream_test_3.py -i <uri1> [uri2] ... [uriN] --pgie nvinferserver -c <config file> [--no-display] [-s]
+
+ 3. nvinferserver (gRPC) + peoplenet : this mode uses Triton gRPC for inference.
+
+ $ python3 deepstream_test_3.py -i <uri1> [uri2] ... [uriN] --pgie nvinferserver-grpc -c <config file> [--no-display] [--silent]
+
+e.g.
+ $ python3 deepstream_test_3.py -i file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4 --pgie nvinfer -c config_infer_primary_peoplenet.txt --no-display --silent
+ $ python3 deepstream_test_3.py -i rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2 --pgie nvinferserver -c config_triton_infer_primary_peoplenet.txt -s
+ $ python3 deepstream_test_3.py -i rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2 --pgie nvinferserver-grpc -c config_triton_grpc_infer_primary_peoplenet.txt --no-display --silent
+
+Note:
+1) If --pgie is not specified, test3 uses nvinfer and the default model, not peoplenet.
+2) Both --pgie and -c need to be provided for custom models.
+3) Configs other than peoplenet can also be provided using the above approach.
+4) --no-display option disables on-screen video display.
+5) -s/--silent option can be used to suppress verbose output.
+6) --file-loop option can be used to loop input files after EOS.
+7) --disable-probe option can be used to disable the probe function and to use nvdslogger for perf measurements.
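The pairing rule in note 2 can be sketched as a tiny predicate. This is illustrative only; the real check lives in parse_args() of deepstream_test_3.py, and the function name here is made up for the example.

```python
# Illustrative predicate for note 2: --pgie and -c must be given together.
def pgie_config_pair_ok(pgie, config):
    """True when pgie and config are both given or both omitted."""
    return (pgie is None) == (config is None)

assert pgie_config_pair_ok(None, None)  # default model, no custom config
assert pgie_config_pair_ok("nvinfer", "config_infer_primary_peoplenet.txt")
assert not pgie_config_pair_ok("nvinferserver", None)  # rejected: pgie without config
```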
This document describes the sample deepstream-test3 application.
-This sample builds on top of the deepstream-test1 sample to demonstrate how to:
-* Use multiple sources in the pipeline.
-* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
- supported container format, and any codec can be used as input.
-* Configure the stream-muxer to generate a batch of frames and infer on the
- batch for better resource utilization.
-* Extract the stream metadata, which contains useful information about the
- frames in the batched buffer.
+ * Use multiple sources in the pipeline.
+ * Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
+ supported container format, and any codec can be used as input.
+ * Configure the stream-muxer to generate a batch of frames and infer on the
+ batch for better resource utilization.
+ * Extract the stream metadata, which contains useful information about the
+ frames in the batched buffer.
Refer to the deepstream-test1 sample documentation for an example of simple
single-stream inference, bounding-box overlay, and rendering.
diff --git a/apps/deepstream-test3/config.pbtxt b/apps/deepstream-test3/config.pbtxt
new file mode 100644
index 0000000..a1acbda
--- /dev/null
+++ b/apps/deepstream-test3/config.pbtxt
@@ -0,0 +1,50 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+
+name: "peoplenet"
+platform: "tensorrt_plan"
+max_batch_size: 1
+default_model_filename: "resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine"
+input [
+ {
+ name: "input_1"
+ data_type: TYPE_FP32
+ format: FORMAT_NCHW
+ dims: [ 3, 544, 960 ]
+ }
+]
+output [
+ {
+ name: "output_bbox/BiasAdd"
+ data_type: TYPE_FP32
+ dims: [ 12, 34, 60 ]
+ },
+
+ {
+ name: "output_cov/Sigmoid"
+ data_type: TYPE_FP32
+ dims: [ 3, 34, 60 ]
+ }
+]
+instance_group [
+ {
+ kind: KIND_GPU
+ count: 1
+ gpus: 0
+ }
+]
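The input and output dims declared in config.pbtxt are consistent with a DetectNet_v2-style stride-16 output grid (an assumption about the PeopleNet architecture, not something stated in this repo): 544/16 = 34, 960/16 = 60, with 4 bbox coordinates per class and one coverage channel per class.

```python
# Sanity-check the tensor shapes declared in config.pbtxt, assuming a
# stride-16 DetectNet_v2 output grid (an assumption, not from this repo).
STRIDE = 16
in_c, in_h, in_w = 3, 544, 960   # dims of input_1
num_classes = 3                  # Person, Bag, Face per the PeopleNet labels

grid_h, grid_w = in_h // STRIDE, in_w // STRIDE
bbox_dims = (4 * num_classes, grid_h, grid_w)  # output_bbox/BiasAdd
cov_dims = (num_classes, grid_h, grid_w)       # output_cov/Sigmoid

assert bbox_dims == (12, 34, 60)
assert cov_dims == (3, 34, 60)
```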
diff --git a/apps/deepstream-test3/config_infer_primary_peoplenet.txt b/apps/deepstream-test3/config_infer_primary_peoplenet.txt
new file mode 100644
index 0000000..67bc007
--- /dev/null
+++ b/apps/deepstream-test3/config_infer_primary_peoplenet.txt
@@ -0,0 +1,62 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+[property]
+gpu-id=0
+net-scale-factor=0.0039215697906911373
+tlt-model-key=tlt_encode
+tlt-encoded-model=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.etlt
+labelfile-path=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/labels.txt
+model-engine-file=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
+int8-calib-file=../../../../samples/models/tao_pretrained_models/peopleNet/V2.5/resnet34_peoplenet_int8.txt
+input-dims=3;544;960;0
+uff-input-blob-name=input_1
+batch-size=1
+process-mode=1
+model-color-format=0
+## 0=FP32, 1=INT8, 2=FP16 mode
+network-mode=1
+num-detected-classes=3
+cluster-mode=2
+interval=0
+gie-unique-id=1
+output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
+
+#Use the config params below for dbscan clustering mode
+#[class-attrs-all]
+#detected-min-w=4
+#detected-min-h=4
+#minBoxes=3
+#eps=0.7
+
+#Use the config params below for NMS clustering mode
+[class-attrs-all]
+topk=20
+nms-iou-threshold=0.5
+pre-cluster-threshold=0.4
+
+## Per class configurations
+#[class-attrs-0]
+#topk=20
+#nms-iou-threshold=0.5
+#pre-cluster-threshold=0.4
+
+[class-attrs-1]
+#disable bag detection
+pre-cluster-threshold=1.0
+#eps=0.7
+#dbscan-min-score=0.5
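The net-scale-factor in the config above is 1/255 (stored at float32 precision), which maps 8-bit pixel values into [0, 1]; no mean/channel offsets are configured here.

```python
# net-scale-factor = 1/255: 8-bit pixel values are scaled into [0, 1]
# before inference (y = net-scale-factor * x when no offsets are set).
scale = 1 / 255
# The config value differs from 1/255 only by float32 rounding.
assert abs(scale - 0.0039215697906911373) < 1e-8

assert abs(scale * 255 - 1.0) < 1e-9  # white maps to ~1.0
assert scale * 0 == 0.0               # black maps to 0.0
```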
diff --git a/apps/deepstream-test3/config_triton_grpc_infer_primary_peoplenet.txt b/apps/deepstream-test3/config_triton_grpc_infer_primary_peoplenet.txt
new file mode 100644
index 0000000..1eeb183
--- /dev/null
+++ b/apps/deepstream-test3/config_triton_grpc_infer_primary_peoplenet.txt
@@ -0,0 +1,70 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+infer_config {
+ unique_id: 1
+ gpu_ids: [0]
+ max_batch_size: 1
+ backend {
+ triton {
+ model_name: "peoplenet"
+ version: -1
+ grpc {
+ url: "0.0.0.0:8001"
+ }
+ }
+ }
+
+ preprocess {
+ network_format: MEDIA_FORMAT_NONE
+ tensor_order: TENSOR_ORDER_LINEAR
+ tensor_name: "input_1"
+ maintain_aspect_ratio: 0
+ frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
+ frame_scaling_filter: 1
+ normalize {
+ scale_factor: 0.0039215697906911373
+ channel_offsets: [0, 0, 0]
+ }
+ }
+
+ postprocess {
+ labelfile_path: "/opt/nvidia/deepstream/deepstream/samples/models/tao_pretrained_models/peopleNet/V2.5/labels.txt"
+ detection {
+ num_detected_classes: 3
+ per_class_params {
+ key: 0
+ value { pre_threshold: 0.4 }
+ }
+ nms {
+ confidence_threshold:0.2
+ topk:20
+ iou_threshold:0.5
+ }
+ }
+ }
+
+ extra {
+ copy_input_to_host_buffers: false
+ output_buffer_pool_size: 2
+ }
+}
+input_control {
+ process_mode: PROCESS_MODE_FULL_FRAME
+ operate_on_gie_id: -1
+ interval: 0
+}
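The postprocess block above filters detections with a per-class pre_threshold and then applies class-wise NMS (topk 20, IoU threshold 0.5). A minimal greedy sketch of that logic follows; it is illustrative only, not the nvinferserver implementation.

```python
# Minimal greedy NMS sketch matching the postprocess settings above
# (pre_threshold 0.4, iou_threshold 0.5, topk 20); illustrative only.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def nms(dets, pre_threshold=0.4, iou_threshold=0.5, topk=20):
    # dets: list of (score, (x1, y1, x2, y2)) for a single class
    dets = sorted((d for d in dets if d[0] >= pre_threshold), reverse=True)
    kept = []
    for score, box in dets:
        if all(iou(box, kb) < iou_threshold for _, kb in kept):
            kept.append((score, box))
        if len(kept) == topk:
            break
    return kept
```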
diff --git a/apps/deepstream-test3/config_triton_infer_primary_peoplenet.txt b/apps/deepstream-test3/config_triton_infer_primary_peoplenet.txt
new file mode 100644
index 0000000..5cb2a3f
--- /dev/null
+++ b/apps/deepstream-test3/config_triton_infer_primary_peoplenet.txt
@@ -0,0 +1,71 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+infer_config {
+ unique_id: 1
+ gpu_ids: [0]
+ max_batch_size: 1
+ backend {
+ triton {
+ model_name: "peoplenet"
+ version: -1
+ model_repo {
+ root: "/opt/nvidia/deepstream/deepstream/samples/triton_model_repo"
+ strict_model_config: true
+ }
+ }
+ }
+
+ preprocess {
+ network_format: MEDIA_FORMAT_NONE
+ tensor_order: TENSOR_ORDER_LINEAR
+ tensor_name: "input_1"
+ maintain_aspect_ratio: 0
+ frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
+ frame_scaling_filter: 1
+ normalize {
+ scale_factor: 0.0039215697906911373
+ channel_offsets: [0, 0, 0]
+ }
+ }
+
+ postprocess {
+ labelfile_path: "/opt/nvidia/deepstream/deepstream/samples/models/tao_pretrained_models/peopleNet/V2.5/labels.txt"
+ detection {
+ num_detected_classes: 3
+ per_class_params {
+ key: 0
+ value { pre_threshold: 0.4 }
+ }
+ nms {
+ confidence_threshold:0.2
+ topk:20
+ iou_threshold:0.5
+ }
+ }
+ }
+
+ extra {
+ copy_input_to_host_buffers: false
+ output_buffer_pool_size: 2
+ }
+}
+input_control {
+ process_mode: PROCESS_MODE_FULL_FRAME
+ operate_on_gie_id: -1
+ interval: 0
+}
diff --git a/apps/deepstream-test3/deepstream_test_3.py b/apps/deepstream-test3/deepstream_test_3.py
index 3e4b298..edce707 100755
--- a/apps/deepstream-test3/deepstream_test_3.py
+++ b/apps/deepstream-test3/deepstream_test_3.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python3
################################################################################
-# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -19,11 +19,12 @@
import sys
sys.path.append('../')
+from pathlib import Path
import gi
import configparser
+import argparse
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
-from gi.repository import GLib
+from gi.repository import GLib, Gst
from ctypes import *
import time
import sys
@@ -31,11 +32,14 @@
import platform
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
-from common.FPS import GETFPS
+from common.FPS import PERF_DATA
import pyds
-fps_streams={}
+no_display = False
+silent = False
+file_loop = False
+perf_data = None
MAX_DISPLAY_LEN=64
PGIE_CLASS_ID_VEHICLE = 0
@@ -52,16 +56,16 @@
OSD_DISPLAY_TEXT= 1
pgie_classes_str= ["Vehicle", "TwoWheeler", "Person","RoadSign"]
-# tiler_sink_pad_buffer_probe will extract metadata received on OSD sink pad
+# pgie_src_pad_buffer_probe will extract metadata received on the pgie src pad
# and update params for drawing rectangle, object information etc.
-def tiler_src_pad_buffer_probe(pad,info,u_data):
+def pgie_src_pad_buffer_probe(pad,info,u_data):
frame_number=0
num_rects=0
+ got_fps = False
gst_buffer = info.get_buffer()
if not gst_buffer:
print("Unable to get GstBuffer ")
return
-
# Retrieve batch metadata from the gst_buffer
# Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
# C address of gst_buffer as input, which is obtained with hash(gst_buffer)
@@ -98,10 +102,14 @@ def tiler_src_pad_buffer_probe(pad,info,u_data):
l_obj=l_obj.next
except StopIteration:
break
- print("Frame Number=", frame_number, "Number of Objects=",num_rects,"Vehicle_count=",obj_counter[PGIE_CLASS_ID_VEHICLE],"Person_count=",obj_counter[PGIE_CLASS_ID_PERSON])
+ if not silent:
+ print("Frame Number=", frame_number, "Number of Objects=",num_rects,"Vehicle_count=",obj_counter[PGIE_CLASS_ID_VEHICLE],"Person_count=",obj_counter[PGIE_CLASS_ID_PERSON])
+
+ # Update frame rate through this probe
+ stream_index = "stream{0}".format(frame_meta.pad_index)
+ global perf_data
+ perf_data.update_fps(stream_index)
- # Get frame rate through this probe
- fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()
try:
l_frame=l_frame.next
except StopIteration:
@@ -114,6 +122,8 @@ def tiler_src_pad_buffer_probe(pad,info,u_data):
def cb_newpad(decodebin, decoder_src_pad,data):
print("In cb_newpad\n")
caps=decoder_src_pad.get_current_caps()
+ if not caps:
+ caps = decoder_src_pad.query_caps()
gststruct=caps.get_structure(0)
gstname=gststruct.get_name()
source_bin=data
@@ -141,7 +151,10 @@ def decodebin_child_added(child_proxy,Object,name,user_data):
Object.connect("child-added",decodebin_child_added,user_data)
if "source" in name:
- Object.set_property("drop-on-latency", True)
+ source_element = child_proxy.get_by_name("source")
+ if source_element.find_property('drop-on-latency') != None:
+ Object.set_property("drop-on-latency", True)
+
def create_source_bin(index,uri):
@@ -158,7 +171,12 @@ def create_source_bin(index,uri):
# Source element for reading from the uri.
# We will use decodebin and let it figure out the container format of the
# stream and the codec and plug the appropriate demux and decode plugins.
- uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
+ if file_loop:
+ # use nvurisrcbin to enable file-loop
+ uri_decode_bin=Gst.ElementFactory.make("nvurisrcbin", "uri-decode-bin")
+ uri_decode_bin.set_property("file-loop", 1)
+ else:
+ uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
if not uri_decode_bin:
sys.stderr.write(" Unable to create uri decode bin \n")
# We set the input uri to the source element
@@ -180,18 +198,13 @@ def create_source_bin(index,uri):
return None
return nbin
-def main(args):
- # Check input arguments
- if len(args) < 2:
- sys.stderr.write("usage: %s <uri1> [uri2] ... [uriN]\n" % args[0])
- sys.exit(1)
+def main(args, requested_pgie=None, config=None, disable_probe=False):
+ global perf_data
+ perf_data = PERF_DATA(len(args))
- for i in range(0,len(args)-1):
- fps_streams["stream{0}".format(i)]=GETFPS(i)
- number_sources=len(args)-1
+ number_sources=len(args)
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements */
@@ -212,7 +225,7 @@ def main(args):
pipeline.add(streammux)
for i in range(number_sources):
print("Creating source_bin ",i," \n ")
- uri_name=args[i+1]
+ uri_name=args[i]
if uri_name.find("rtsp://") == 0 :
is_live = True
source_bin=create_source_bin(i, uri_name)
@@ -237,10 +250,26 @@ def main(args):
pipeline.add(queue3)
pipeline.add(queue4)
pipeline.add(queue5)
+
+ nvdslogger = None
+ transform = None
+
print("Creating Pgie \n ")
- pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
+ if requested_pgie != None and (requested_pgie == 'nvinferserver' or requested_pgie == 'nvinferserver-grpc') :
+ pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
+ elif requested_pgie != None and requested_pgie == 'nvinfer':
+ pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
+ else:
+ pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
+
if not pgie:
- sys.stderr.write(" Unable to create pgie \n")
+ sys.stderr.write(" Unable to create pgie : %s\n" % requested_pgie)
+
+ if disable_probe:
+ # Use nvdslogger for perf measurement instead of probe function
+ print ("Creating nvdslogger \n")
+ nvdslogger = Gst.ElementFactory.make("nvdslogger", "nvdslogger")
+
print("Creating tiler \n ")
tiler=Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
if not tiler:
@@ -255,26 +284,41 @@ def main(args):
sys.stderr.write(" Unable to create nvosd \n")
nvosd.set_property('process-mode',OSD_PROCESS_MODE)
nvosd.set_property('display-text',OSD_DISPLAY_TEXT)
- if(is_aarch64()):
- print("Creating transform \n ")
- transform=Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
- if not transform:
- sys.stderr.write(" Unable to create transform \n")
-
- print("Creating EGLSink \n")
- sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
+
+
+ if no_display:
+ print("Creating Fakesink \n")
+ sink = Gst.ElementFactory.make("fakesink", "fakesink")
+ sink.set_property('enable-last-sample', 0)
+ sink.set_property('sync', 0)
+ else:
+ if(is_aarch64()):
+ print("Creating transform \n ")
+ transform=Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
+ if not transform:
+ sys.stderr.write(" Unable to create transform \n")
+ print("Creating EGLSink \n")
+ sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
+
if not sink:
- sys.stderr.write(" Unable to create egl sink \n")
+ sys.stderr.write(" Unable to create sink element \n")
if is_live:
- print("Atleast one of the sources is live")
+ print("At least one of the sources is live")
streammux.set_property('live-source', 1)
streammux.set_property('width', 1920)
streammux.set_property('height', 1080)
streammux.set_property('batch-size', number_sources)
streammux.set_property('batched-push-timeout', 4000000)
- pgie.set_property('config-file-path', "dstest3_pgie_config.txt")
+ if requested_pgie == "nvinferserver" and config != None:
+ pgie.set_property('config-file-path', config)
+ elif requested_pgie == "nvinferserver-grpc" and config != None:
+ pgie.set_property('config-file-path', config)
+ elif requested_pgie == "nvinfer" and config != None:
+ pgie.set_property('config-file-path', config)
+ else:
+ pgie.set_property('config-file-path', "dstest3_pgie_config.txt")
pgie_batch_size=pgie.get_property("batch-size")
if(pgie_batch_size != number_sources):
print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", number_sources," \n")
@@ -289,10 +333,12 @@ def main(args):
print("Adding elements to Pipeline \n")
pipeline.add(pgie)
+ if nvdslogger:
+ pipeline.add(nvdslogger)
pipeline.add(tiler)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
- if is_aarch64():
+ if transform:
pipeline.add(transform)
pipeline.add(sink)
@@ -300,12 +346,16 @@ def main(args):
streammux.link(queue1)
queue1.link(pgie)
pgie.link(queue2)
- queue2.link(tiler)
+ if nvdslogger:
+ queue2.link(nvdslogger)
+ nvdslogger.link(tiler)
+ else:
+ queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
- if is_aarch64():
+ if transform:
nvosd.link(queue5)
queue5.link(transform)
transform.link(sink)
@@ -314,21 +364,23 @@ def main(args):
queue5.link(sink)
# create an event loop and feed gstreamer bus messages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect ("message", bus_call, loop)
- tiler_src_pad=pgie.get_static_pad("src")
- if not tiler_src_pad:
+ pgie_src_pad=pgie.get_static_pad("src")
+ if not pgie_src_pad:
sys.stderr.write(" Unable to get src pad \n")
else:
- tiler_src_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_src_pad_buffer_probe, 0)
+ if not disable_probe:
+ pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)
+ # perf callback function to print fps every 5 sec
+ GLib.timeout_add(5000, perf_data.perf_print_callback)
# List the sources
print("Now playing...")
for i, source in enumerate(args):
- if (i != 0):
- print(i, ": ", source)
+ print(i, ": ", source)
print("Starting pipeline \n")
# start playback and listen to events
@@ -341,7 +393,93 @@ def main(args):
print("Exiting app\n")
pipeline.set_state(Gst.State.NULL)
-if __name__ == '__main__':
- sys.exit(main(sys.argv))
+def parse_args():
+
+ parser = argparse.ArgumentParser(prog="deepstream_test_3",
+ description="deepstream-test3 multi-stream, multi-model inference reference app")
+ parser.add_argument(
+ "-i",
+ "--input",
+ help="Path to input streams",
+ nargs="+",
+ metavar="URIs",
+ default=["a"],
+ required=True,
+ )
+ parser.add_argument(
+ "-c",
+ "--configfile",
+ metavar="config_location.txt",
+ default=None,
+ help="Choose the config-file to be used with specified pgie",
+ )
+ parser.add_argument(
+ "-g",
+ "--pgie",
+ default=None,
+ help="Choose Primary GPU Inference Engine",
+ choices=["nvinfer", "nvinferserver", "nvinferserver-grpc"],
+ )
+ parser.add_argument(
+ "--no-display",
+ action="store_true",
+ default=False,
+ dest='no_display',
+ help="Disable display of video output",
+ )
+ parser.add_argument(
+ "--file-loop",
+ action="store_true",
+ default=False,
+ dest='file_loop',
+ help="Loop the input file sources after EOS",
+ )
+ parser.add_argument(
+ "--disable-probe",
+ action="store_true",
+ default=False,
+ dest='disable_probe',
+ help="Disable the probe function and use nvdslogger for FPS",
+ )
+ parser.add_argument(
+ "-s",
+ "--silent",
+ action="store_true",
+ default=False,
+ dest='silent',
+ help="Disable verbose output",
+ )
+ # Check input arguments
+ if len(sys.argv) == 1:
+ parser.print_help(sys.stderr)
+ sys.exit(1)
+ args = parser.parse_args()
+
+ stream_paths = args.input
+ pgie = args.pgie
+ config = args.configfile
+ disable_probe = args.disable_probe
+ global no_display
+ global silent
+ global file_loop
+ no_display = args.no_display
+ silent = args.silent
+ file_loop = args.file_loop
+
+ if (config and not pgie) or (pgie and not config):
+ sys.stderr.write ("\nEither pgie or configfile is missing. Please specify both! Exiting...\n\n\n\n")
+ parser.print_help()
+ sys.exit(1)
+ if config:
+ config_path = Path(config)
+ if not config_path.is_file():
+ sys.stderr.write ("Specified config-file: %s doesn't exist. Exiting...\n\n" % config)
+ sys.exit(1)
+
+ print(vars(args))
+ return stream_paths, pgie, config, disable_probe
+if __name__ == '__main__':
+ stream_paths, pgie, config, disable_probe = parse_args()
+ sys.exit(main(stream_paths, pgie, config, disable_probe))
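The probe now calls perf_data.update_fps() per frame and GLib schedules perf_print_callback every 5 seconds. The PERF_DATA helper from common/FPS.py presumably works along these lines; this is a hedged pure-Python sketch, not the actual helper.

```python
# Hedged sketch of per-stream FPS bookkeeping as done by PERF_DATA
# (common/FPS.py); class and method names here mirror the usage in
# deepstream_test_3.py but the internals are an assumption.
import time

class PerfData:
    def __init__(self, num_streams):
        # one frame counter per stream, keyed like "stream0", "stream1", ...
        self.counts = {f"stream{i}": 0 for i in range(num_streams)}
        self.start = time.monotonic()

    def update_fps(self, stream_index):
        # called from the pgie src pad probe, once per frame
        self.counts[stream_index] += 1

    def snapshot(self):
        # average FPS per stream since start; a periodic callback would print this
        elapsed = time.monotonic() - self.start
        return {k: v / elapsed for k, v in self.counts.items()} if elapsed else {}

perf = PerfData(2)
for _ in range(30):
    perf.update_fps("stream0")
assert perf.counts["stream0"] == 30 and perf.counts["stream1"] == 0
```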
diff --git a/apps/deepstream-test4/README b/apps/deepstream-test4/README
index 874256e..76c3586 100755
--- a/apps/deepstream-test4/README
+++ b/apps/deepstream-test4/README
@@ -16,8 +16,8 @@
################################################################################
Prerequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
#Deepstream msgbroker supports sending messages to Azure(mqtt) IOThub, kafka and AMQP broker(rabbitmq)
diff --git a/apps/deepstream-test4/deepstream_test_4.py b/apps/deepstream-test4/deepstream_test_4.py
index aa0b5a6..d50450a 100755
--- a/apps/deepstream-test4/deepstream_test_4.py
+++ b/apps/deepstream-test4/deepstream_test_4.py
@@ -23,7 +23,7 @@
import gi
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
+from gi.repository import GLib, Gst
import sys
from optparse import OptionParser
from common.is_aarch_64 import is_aarch64
@@ -327,7 +327,6 @@ def osd_sink_pad_buffer_probe(pad, info, u_data):
def main(args):
- GObject.threads_init()
Gst.init(None)
# registering callbacks
@@ -474,7 +473,7 @@ def main(args):
tee_render_pad.link(sink_pad)
# create an event loop and feed gstreamer bus messages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
diff --git a/apps/runtime_source_add_delete/README b/apps/runtime_source_add_delete/README
index 73edacc..fd27327 100644
--- a/apps/runtime_source_add_delete/README
+++ b/apps/runtime_source_add_delete/README
@@ -16,8 +16,8 @@
################################################################################
Prerequisites:
-- DeepStreamSDK 6.0.1
-- Python 3.6
+- DeepStreamSDK 6.1
+- Python 3.8
- Gst-python
To run the test app:
diff --git a/apps/runtime_source_add_delete/deepstream_rt_src_add_del.py b/apps/runtime_source_add_delete/deepstream_rt_src_add_del.py
index fbee687..62cb715 100644
--- a/apps/runtime_source_add_delete/deepstream_rt_src_add_del.py
+++ b/apps/runtime_source_add_delete/deepstream_rt_src_add_del.py
@@ -22,7 +22,7 @@
import gi
import configparser
gi.require_version('Gst', '1.0')
-from gi.repository import GObject, Gst
+from gi.repository import Gst, GLib
from gi.repository import GLib
from ctypes import *
import time
@@ -275,7 +275,7 @@ def add_sources(data):
#If reached the maximum number of sources, delete sources every 10 seconds
if (g_num_sources == MAX_NUM_SOURCES):
- GObject.timeout_add_seconds(10, delete_sources, g_source_bin_list)
+ GLib.timeout_add_seconds(10, delete_sources, g_source_bin_list)
return False
return True
@@ -329,7 +329,6 @@ def main(args):
num_sources=len(args)-1
# Standard GStreamer initialization
- GObject.threads_init()
Gst.init(None)
# Create gstreamer elements */
@@ -521,7 +520,7 @@ def main(args):
sink.set_property("qos",0)
# create an event loop and feed gstreamer bus messages to it
- loop = GObject.MainLoop()
+ loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect ("message", bus_call, loop)
@@ -538,7 +537,7 @@ def main(args):
# start playback and listen to events
pipeline.set_state(Gst.State.PLAYING)
- GObject.timeout_add_seconds(10, add_sources, g_source_bin_list)
+ GLib.timeout_add_seconds(10, add_sources, g_source_bin_list)
try:
loop.run()
diff --git a/bindings/CMakeLists.txt b/bindings/CMakeLists.txt
index 58c3865..089cc22 100644
--- a/bindings/CMakeLists.txt
+++ b/bindings/CMakeLists.txt
@@ -22,9 +22,9 @@ function(check_variable_set variable_name default_value)
endif()
endfunction()
-check_variable_set(DS_VERSION 6.0)
+check_variable_set(DS_VERSION 6.1)
check_variable_set(PYTHON_MAJOR_VERSION 3)
-check_variable_set(PYTHON_MINOR_VERSION 6)
+check_variable_set(PYTHON_MINOR_VERSION 8)
check_variable_set(PIP_PLATFORM linux_x86_64)
check_variable_set(DS_PATH "/opt/nvidia/deepstream/deepstream-${DS_VERSION}")
@@ -50,7 +50,7 @@ set(CMAKE_SHARED_LINKER_FLAGS "-Wl,--no-undefined")
# Setting python build versions
set(PYTHON_VERSION ${PYTHON_MAJOR_VERSION}.${PYTHON_MINOR_VERSION})
-set(PIP_WHEEL pyds-1.1.1-py3-none-${PIP_PLATFORM}.whl)
+set(PIP_WHEEL pyds-1.1.2-py3-none-${PIP_PLATFORM}.whl)
# Describing pyds build
project(pyds DESCRIPTION "Python bindings for Deepstream")
diff --git a/bindings/README.md b/bindings/README.md
index af81229..4aa7577 100644
--- a/bindings/README.md
+++ b/bindings/README.md
@@ -51,17 +51,17 @@ Go to https://developer.nvidia.com/deepstream-sdk, download and install Deepstre
cd deepstream_python_apps/bindings
mkdir build
cd build
-cmake ..
+cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=6
make
```
-### 3.1.1 Quick build (x86-ubuntu-20.04 | python 3.8 | Deepstream 6.0.1)
+### 3.1.1 Quick build (x86-ubuntu-20.04 | python 3.8 | Deepstream 6.1)
```
cd deepstream_python_apps/bindings
mkdir build
cd build
-cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8
+cmake ..
make
```
@@ -78,9 +78,9 @@ cmake [-D= [-D= [-D= ... ]]]
| Var | Default value | Purpose | Available values
|-----|:-------------:|---------|:----------------:
-| DS_VERSION | 6.0.1 | Used to determine default deepstream library path | should match to the deepstream version installed on your computer
+| DS_VERSION | 6.1 | Used to determine default deepstream library path | should match to the deepstream version installed on your computer
| PYTHON_MAJOR_VERSION | 3 | Used to set the python version used for the bindings | 3
-| PYTHON_MINOR_VERSION | 6 | Used to set the python version used for the bindings | 6, 8
+| PYTHON_MINOR_VERSION | 8 | Used to set the python version used for the bindings | 6, 8
| PIP_PLATFORM | linux_x86_64 | Used to select the target architecture to compile the bindings | linux_x86_64, linux_aarch64
| DS_PATH | /opt/nvidia/deepstream/deepstream-${DS_VERSION} | Path where deepstream libraries are available | Should match the existing deepstream library folder
@@ -92,8 +92,8 @@ Following commands can be used to build the bindings natively on Jetson devices
cd deepstream_python_apps/bindings
mkdir build
cd build
-cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=6 \
- -DPIP_PLATFORM=linux_aarch64 -DDS_PATH=/opt/nvidia/deepstream/deepstream-6.0/
+cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8 \
+ -DPIP_PLATFORM=linux_aarch64 -DDS_PATH=/opt/nvidia/deepstream/deepstream/
make
```
@@ -115,14 +115,14 @@ sudo apt-get install qemu binfmt-support qemu-user-static
docker run --rm --privileged dockerhub.nvidia.com/multiarch/qemu-user-static --reset -p yes
# Verify qemu installation
-docker run --rm -t nvcr.io/nvidia/deepstream:6.0.1-samples uname -m
+docker run --rm -t nvcr.io/nvidia/deepstream-l4t:6.1-samples uname -m
#aarch64
```
-#### 3.3.2 Download the JetPack SDK 4.6.1
+#### 3.3.2 Download the JetPack SDK 5.0
1. Download and install the [NVIDIA SDK manager](https://developer.nvidia.com/nvidia-sdk-manager)
2. Launch the SDK Manager and login with your NVIDIA developer account.
-3. Select the platform and target OS (example: Jetson AGX Xavier, `Linux Jetpack 4.6.1`) and click Continue.
+3. Select the platform and target OS (example: Jetson AGX Xavier, `Linux Jetpack 5.0`) and click Continue.
4. Under `Download & Install Options` change the download folder and select `Download now, Install later`. Agree to the license terms and click Continue.
5. Go to the download folder, and run:
@@ -138,7 +138,7 @@ Below command generates the build container
```bash
# Make sure you are in deepstream_python_apps/bindings directory
-docker build --tag=deepstream-6.0.1-ubuntu18.04-python-l4t -f qemu_docker/ubuntu-cross-aarch64.Dockerfile .
+docker build --tag=deepstream-6.1-ubuntu20.04-python-l4t -f qemu_docker/ubuntu-cross-aarch64.Dockerfile .
```
#### 3.3.4 Launch the cross-compile build container
@@ -148,7 +148,7 @@ docker build --tag=deepstream-6.0.1-ubuntu18.04-python-l4t -f qemu_docker/ubuntu
mkdir export_pyds
# Make sure the tag matches the one from Generate step above
-docker run -it -v $PWD/export_pyds:/export_pyds deepstream-6.0.1-ubuntu18.04-python-l4t bash
+docker run -it -v $PWD/export_pyds:/export_pyds deepstream-6.1-ubuntu20.04-python-l4t bash
```
#### 3.3.5 Build DeepStreamSDK python bindings
@@ -171,7 +171,7 @@ git submodule update --init
mkdir build && cd build
# Run cmake with following options
-cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=6 -DPIP_PLATFORM=linux_aarch64 -DDS_PATH=/opt/nvidia/deepstream/deepstream
+cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8 -DPIP_PLATFORM=linux_aarch64 -DDS_PATH=/opt/nvidia/deepstream/deepstream
# Build pybind wheel and pyds.so
make -j$(nproc)
@@ -187,7 +187,7 @@ Build output is generated in the created export_pyds directory (deepstream_pytho
### 4.1 Installing the pip wheel
```
apt install libgirepository1.0-dev libcairo2-dev
-pip3 install ./pyds-1.1.1-py3-none*.whl
+pip3 install ./pyds-1.1.2-py3-none*.whl
```
#### 4.1.1 pip wheel troubleshooting
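A quick way to diagnose wheel-install problems is to check whether the `pyds` extension is discoverable before importing it. This is an illustrative sketch, not part of the repo's test suite; it assumes only that the wheel installs a module named `pyds`:

```python
import importlib.util

def binding_available(module_name: str) -> bool:
    """Return True if the named module (e.g. the compiled pyds extension) is importable."""
    return importlib.util.find_spec(module_name) is not None

if __name__ == "__main__":
    # "pyds" is only present after the wheel from section 4.1 is installed
    if binding_available("pyds"):
        print("pyds found on sys.path")
    else:
        print("pyds not found; install the wheel from section 4.1 first")
```

If the module is found but the import itself fails, the missing piece is usually a system library (see the apt packages listed above), not the wheel.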
diff --git a/bindings/include/utils.hpp b/bindings/include/utils.hpp
index d98b5c9..ad6d308 100644
--- a/bindings/include/utils.hpp
+++ b/bindings/include/utils.hpp
@@ -31,6 +31,7 @@
#include "nvdsmeta_schema.h"
#include
#include
+#include <mutex>
#include
#include
#include "../docstrings/pydocumentation.h"
@@ -70,16 +71,38 @@ namespace pydeepstream::utils {
/// Stores the provided std::function statically in the instantiated templated struct
static void store(const function_type &f) {
- instance().fn_ = f;
+ const std::lock_guard lock(instance().mut_);
+ auto &inst = instance();
+ if(!inst.stopped_)
+ inst.fn_ = f;
}
- static void free_instance(){
- instance().fn_ = {};
+ static void
+ __attribute__((optimize("O0")))
+ free_instance(){
+ auto &inst = instance();
+ const std::lock_guard lock(inst.mut_);
+ auto &opt = inst.fn_;
+ if (opt.has_value()){
+ opt.reset();
+ }
+ inst.stopped_=true;
}
/// Helps defining the actual function pointer needed
static RetValue invoke(ArgTypes... args) {
- return instance().fn_.value()(args...);
+ const std::lock_guard lock(instance().mut_);
+ auto &opt = instance().fn_;
+ // here we check if the function's content is valid before calling it,
+        // as it can be empty if free_instance() has been called. In that case we return
+ // the default value of RetValue type. RetValue must have a default
+ // constructor with no parameters.
+ if (!opt.has_value())
+ return RetValue();
+ auto &fun = opt.value();
+ if (!fun)
+ return RetValue();
+ return fun(args...);
}
/// Declares the type of pointer returned
@@ -96,6 +119,8 @@ namespace pydeepstream::utils {
/// contains a storage for an std::function.
function_type fn_;
+ std::mutex mut_;
+ bool stopped_=false;
};
// Is used to keep track of font names without duplicate
@@ -140,6 +165,7 @@ namespace pydeepstream::utils {
template
typename function_storage::pointer_type
+ __attribute__((optimize("O0")))
free_fn_ptr_from_std_function(const std::function &f) {
typedef function_storage custom_fun;
custom_fun::free_instance();
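The `utils.hpp` hunks above guard the statically stored `std::function` with a mutex, wrap it in an optional, and add a `stopped_` flag, so that an `invoke` arriving after `free_instance()` returns a default value instead of calling a freed callback. A rough Python analogue of that pattern (class and method names are illustrative, not part of pyds):

```python
import threading
from typing import Callable, Optional

class FunctionStorage:
    """Mutex-guarded storage for a single callback, mirroring the C++ changes above."""

    def __init__(self, default=None):
        self._lock = threading.Lock()
        self._fn: Optional[Callable] = None
        self._stopped = False
        self._default = default  # stands in for RetValue's default constructor

    def store(self, fn: Callable) -> None:
        with self._lock:
            if not self._stopped:      # ignore late registrations after shutdown
                self._fn = fn

    def free_instance(self) -> None:
        with self._lock:
            self._fn = None            # drop the stored callback
            self._stopped = True       # block any further store() calls

    def invoke(self, *args):
        with self._lock:
            if self._fn is None:       # freed or never set: return the default
                return self._default
            return self._fn(*args)
```

As in the patch, `invoke` after `free_instance` degrades to a default value rather than raising, and `store` becomes a no-op once the storage is stopped.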
diff --git a/bindings/packaging/setup.py b/bindings/packaging/setup.py
index 7e4492f..c5cf3a0 100644
--- a/bindings/packaging/setup.py
+++ b/bindings/packaging/setup.py
@@ -17,7 +17,7 @@
setuptools.setup(
name="pyds",
- version="1.1.1",
+ version="1.1.2",
author="NVIDIA",
description="Install precompiled DeepStream Python bindings extension",
url="nvidia.com",
diff --git a/bindings/qemu_docker/ubuntu-cross-aarch64.Dockerfile b/bindings/qemu_docker/ubuntu-cross-aarch64.Dockerfile
index c984d22..1b19839 100644
--- a/bindings/qemu_docker/ubuntu-cross-aarch64.Dockerfile
+++ b/bindings/qemu_docker/ubuntu-cross-aarch64.Dockerfile
@@ -1,4 +1,4 @@
-# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-FROM nvcr.io/nvidia/deepstream-l4t:6.0.1-samples
+FROM nvcr.io/nvidia/deepstream-l4t:6.1-samples
LABEL maintainer="NVIDIA CORPORATION"
# Set timezone.
@@ -67,17 +67,14 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libegl1-mesa-dev \
librabbitmq-dev
-RUN cd /usr/local/bin &&\
- ln -s /usr/bin/python3 python &&\
- ln -s /usr/bin/pip3 pip
RUN pip3 install --upgrade pip
RUN pip3 install setuptools>=41.0.0
-COPY docker/jetpack_files/Jetson*Linux_R32*aarch64.tbz2 /bsp_files/
+COPY docker/jetpack_files/Jetson*Linux_R*aarch64.tbz2 /bsp_files/
# Copy libs from BSP
RUN cd /bsp_files \
- && tar -jxpf Jetson*Linux_R32*aarch64.tbz2 \
+ && tar -jxpf Jetson*Linux_R*aarch64.tbz2 \
&& cd Linux_for_Tegra/nv_tegra \
&& tar -jxpf nvidia_drivers.tbz2 \
&& cp -aprf usr/lib/aarch64-linux-gnu/tegra/libnvbuf*.so.1.0.0 /opt/nvidia/deepstream/deepstream/lib/ \
diff --git a/bindings/src/bindfunctions.cpp b/bindings/src/bindfunctions.cpp
index 1397f2b..df83c81 100644
--- a/bindings/src/bindfunctions.cpp
+++ b/bindings/src/bindfunctions.cpp
@@ -521,6 +521,7 @@ namespace pydeepstream {
std::function const &func) {
utils::set_copyfunc(meta, func);
},
+              py::call_guard<py::gil_scoped_release>(),
"meta"_a,
"func"_a,
pydsdoc::methodsDoc::user_copyfunc);
@@ -544,6 +545,7 @@ namespace pydeepstream {
std::function const &func) {
utils::set_freefunc(meta, func);
},
+              py::call_guard<py::gil_scoped_release>(),
"meta"_a,
"func"_a,
pydsdoc::methodsDoc::user_releasefunc);
@@ -559,7 +561,9 @@ namespace pydeepstream {
m.def("unset_callback_funcs",
[]() {
utils::release_all_func();
- });
+ },
+          py::call_guard<py::gil_scoped_release>()
+ );
m.def("alloc_char_buffer",
[](size_t size) {
@@ -742,4 +746,4 @@ namespace pydeepstream {
"data"_a,
pydsdoc::methodsDoc::get_segmentation_masks);
}
-}
\ No newline at end of file
+}
diff --git a/bindings/src/pyds.cpp b/bindings/src/pyds.cpp
index b7cb58e..5ecb8b4 100644
--- a/bindings/src/pyds.cpp
+++ b/bindings/src/pyds.cpp
@@ -34,7 +34,7 @@
#include */
-#define PYDS_VERSION "1.1.1"
+#define PYDS_VERSION "1.1.2"
using namespace std;
namespace py = pybind11;
diff --git a/bindings/src/utils.cpp b/bindings/src/utils.cpp
index ed412c8..772f9ba 100644
--- a/bindings/src/utils.cpp
+++ b/bindings/src/utils.cpp
@@ -74,10 +74,12 @@ namespace pydeepstream::utils {
meta->base_meta.release_func = get_fn_ptr_from_std_function(func);
}
- void release_all_func() {
+ void
+ __attribute__((optimize("O0")))
+ release_all_func() {
// these dummy functions are only used for their type to match the appropriate template
- const auto &dummy_copy_func = std::function([](gpointer a, gpointer b){ return a;});
- const auto &dummy_free_func = std::function([](gpointer a, gpointer b){});
+ const auto dummy_copy_func = std::function();
+ const auto dummy_free_func = std::function();
// Here the functions must be freed because otherwise the destructor will be stuck with GIL
// this is related to pybind11 behavior.
// As far as I understand pybind11 does not expect us to store functions in a static storage
@@ -88,7 +90,6 @@ namespace pydeepstream::utils {
}
-
void generate_ts_rfc3339(char *buf, int buf_size) {
time_t tloc;
struct tm tm_log{};
@@ -102,4 +103,4 @@ namespace pydeepstream::utils {
g_snprintf(strmsec, sizeof(strmsec), ".%.3dZ", ms);
strncat(buf, strmsec, buf_size);
}
-}
\ No newline at end of file
+}
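For reference, `generate_ts_rfc3339` in the hunk above formats a UTC timestamp with millisecond precision and a trailing `Z` (the `".%.3dZ"` snprintf). A Python sketch of the same output format, for comparison only:

```python
from datetime import datetime, timezone

def ts_rfc3339() -> str:
    """UTC timestamp in the form 2022-05-10T14:03:07.123Z (millisecond precision)."""
    now = datetime.now(timezone.utc)
    # strftime has no millisecond directive, so append truncated microseconds
    return now.strftime("%Y-%m-%dT%H:%M:%S") + ".%03dZ" % (now.microsecond // 1000)
```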
diff --git a/docs/README.md b/docs/README.md
index d08e994..0f254e0 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -2,17 +2,19 @@ Please follow the following steps to build html files
requirements
===================
-1. python 3.6 (The default `python --version` should be 3.6)
+1. python 3.8 (The default `python --version` should be 3.8)
2. sphinx (>=4.2)
3. breathe extension
4. recommonmark
+5. sphinx_rtd_theme
installation
===================
```bash
-pip install sphinx
-pip install breathe
-pip install recommonmark
+pip3 install sphinx
+pip3 install breathe
+pip3 install recommonmark
+pip3 install sphinx_rtd_theme
```
1. Run parse_bindings.py to generate rst files for classes and enums
diff --git a/docs/conf.py b/docs/conf.py
index 661aa41..9208c40 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -38,7 +38,7 @@
project = 'Deepstream'
copyright = '2019-2022, NVIDIA.'
author = 'NVIDIA'
-version = 'Deepstream Version: 6.0.1'
+version = 'Deepstream Version: 6.1'
release = version
diff --git a/docs/make.bat b/docs/make.bat
index 922152e..2119f51 100644
--- a/docs/make.bat
+++ b/docs/make.bat
@@ -1,35 +1,35 @@
-@ECHO OFF
-
-pushd %~dp0
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
- set SPHINXBUILD=sphinx-build
-)
-set SOURCEDIR=.
-set BUILDDIR=_build
-
-if "%1" == "" goto help
-
-%SPHINXBUILD% >NUL 2>NUL
-if errorlevel 9009 (
- echo.
- echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
- echo.installed, then set the SPHINXBUILD environment variable to point
- echo.to the full path of the 'sphinx-build' executable. Alternatively you
- echo.may add the Sphinx directory to PATH.
- echo.
- echo.If you don't have Sphinx installed, grab it from
- echo.http://sphinx-doc.org/
- exit /b 1
-)
-
-%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-goto end
-
-:help
-%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-
-:end
-popd
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+ set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=.
+set BUILDDIR=_build
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+ echo.
+ echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+ echo.installed, then set the SPHINXBUILD environment variable to point
+ echo.to the full path of the 'sphinx-build' executable. Alternatively you
+ echo.may add the Sphinx directory to PATH.
+ echo.
+ echo.If you don't have Sphinx installed, grab it from
+ echo.http://sphinx-doc.org/
+ exit /b 1
+)
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+
+:end
+popd
diff --git a/tests/integration/README.md b/tests/integration/README.md
index 91b837b..34d09ee 100644
--- a/tests/integration/README.md
+++ b/tests/integration/README.md
@@ -55,7 +55,7 @@ python3.8 -m venv env
### step3
```
. env/bin/activate
-pip install pyds-1.0.2-py3-none-*.whl
+pip install pyds-1.1.2-py3-none-*.whl
pip install pytest
cd ../../tests/integration
pytest test.py
diff --git a/tests/integration/test.py b/tests/integration/test.py
index e5e4655..f8b9c89 100644
--- a/tests/integration/test.py
+++ b/tests/integration/test.py
@@ -24,7 +24,7 @@
from tests.common.tracker_utils import get_tracker_properties_from_config
from tests.common.utils import is_aarch64
-VIDEO_PATH1 = "/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264"
+VIDEO_PATH1 = "/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264"
STANDARD_PROPERTIES1 = {
"file-source": {
"location": VIDEO_PATH1