GstInference and NVIDIA DeepStream 1.5 nvcaffegie

Problems running the pipelines shown on this page?
Please see our GStreamer Debugging guide for help.

DeepStream

The DeepStream SDK on Jetson uses JetPack, which includes L4T, the Multimedia APIs, CUDA, and TensorRT. The SDK offers a rich collection of plug-ins and libraries, built on the GStreamer framework, that enable developers to build flexible applications for transforming video into valuable insights. DeepStream also ships with sample applications, including source code and an application adaptation guide, to help developers jumpstart their builds.

A Jetson TX1 was used for testing in this wiki.

RidgeRun offers GstInference, the GStreamer front-end for R²Inference, the project that handles the abstraction over different back-ends and frameworks. R²Inference knows how to deal with vendor frameworks such as TensorFlow (x86, iMX8), OpenVX (x86, iMX8), Caffe (x86, NVIDIA), TensorRT (NVIDIA), or NCSDK (Intel) while exposing a generic, easy-to-use interface to the user.
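
As a point of comparison with the nvcaffegie pipelines below, a minimal GstInference pipeline looks like the following. This is only a sketch: it assumes a TinyYOLOv2 TensorFlow graph and the input/output layer names used in the GstInference examples, and element names may differ between GstInference versions; adjust the model location and layer names for your own graph.

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=graph_tinyyolov2_tensorflow.pb backend=tensorflow \
backend::input-layer=input/Placeholder backend::output-layer=add_8 \
net.src_bypass ! inferenceoverlay ! videoconvert ! autovideosink sync=false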

Using the DeepStream demo on Jetson

  • This wiki covers DeepStream 1.5 on Jetson (tested on a TX1). DeepStream 3.0 is available for Xavier and is not covered in this wiki. Extract the SDK packages:
tar xpvf DeepStream_SDK_on_Jetson_1.5_pre-release.tbz2
sudo tar xpvf deepstream_sdk_on_jetson.tbz2 -C /
sudo tar xpvf deepstream_sdk_on_jetson_models.tbz2 -C /
sudo ldconfig
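
Optionally, verify that the dynamic loader now sees the DeepStream libraries (library names taken from the binaries listed later in this wiki):

ldconfig -p | grep -E 'nvcaffegie|nvtracker'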

Run the demo (the video will be displayed on the HDMI output):

nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt 

Building the demo

Install and build:

sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev

sudo ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so.1.0.0 \
             /usr/lib/aarch64-linux-gnu/libnvid_mapper.so

cd ${HOME}/nvgstiva-app_sources/nvgstiva-app

make

# Run the app with ./nvgstiva-app -c <config-file>
./nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt
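
If the application fails to start, raise the GStreamer debug level to locate the failing element (the same GST_DEBUG=3 prefix is used in the pipelines below):

GST_DEBUG=3 ./nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt 2> err.log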

Doing some analysis

  • The sample application is a GStreamer application that uses NVIDIA elements. By obtaining the DOT file of the running pipeline we can see the elements used and their configuration; since decodebin and other auto-plugging elements are involved, the pipeline is extensive. The DOT file can be generated as sketched below.
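
A common way to obtain the DOT file is the GST_DEBUG_DUMP_DOT_DIR environment variable; gst-launch-1.0 dumps the graph automatically on state changes, and the sample application can do the same if it calls GST_DEBUG_BIN_TO_DOT_FILE. A sketch:

export GST_DEBUG_DUMP_DOT_DIR=/tmp/pipeline-dots
mkdir -p $GST_DEBUG_DUMP_DOT_DIR
./nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt
# Render the dumps to images (requires graphviz)
dot -Tpng -O /tmp/pipeline-dots/*.dot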

Basically, the pipeline is composed of (in the order in which the elements appear):

  • filesrc
  • decodebin from MP4 to 720p NV12
  • nvvidconv
  • nvcaffegie (this element receives as parameters the prototxt file, the Caffe model, and the model cache)
  • nvtracker
  • tee (with 4 outputs)
  • Three more nvcaffegie elements, each with a different model (car color, vehicle type, secondary make)
  • Each of these nvcaffegie elements goes into a fakesink
  • The fourth tee output goes to nvvidconv
  • nvosd
  • nvoverlaysink

  • Note: NVIDIA elements are provided as binaries:

  • libnvcaffegie.so.1.0.0
  • libgstnvtracker.so
  • libgstnvclrdetector.so
  • libgstnvcaffegie.so
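
Since the elements ship as prebuilt binaries, you can confirm that GStreamer registers them (element names taken from the pipelines in this wiki):

gst-inspect-1.0 nvcaffegie
gst-inspect-1.0 nvtracker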

Testing with gst-launch

  • Pipeline with nvcamerasrc, one model:
GST_DEBUG=3 gst-launch-1.0 nvcamerasrc queue-size=6 sensor-id=0 fpsRange='30 30' \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! nvcaffegie  model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt" net-stride=16 batch-size=2 roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" detected-min-w-h="0,0,0:1,0,0:2,0,0" detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 parse-func=4 net-scale-factor=0.0039215697906911373 \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
output-bbox-layer-name=Layer11_bbox output-coverage-layer-names=Layer11_cov ! queue ! nvtracker \
! queue ! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! queue ! nvoverlaysink sync=false enable-last-sample=false
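
Note that net-scale-factor=0.0039215697906911373 is approximately 1/255 = 0.00392156862745...; it rescales 8-bit pixel values from [0, 255] to [0, 1] before they reach the network.
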
  • Pipeline with nvcamerasrc and two Caffe models. It is better to put the pipeline in a script and execute it; the video runs and the boxes are drawn, but no labels appear.
gst-launch-1.0 nvcamerasrc queue-size=10 sensor-id=0 fpsRange='30 30' ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! \
nvcaffegie  \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt"  \
batch-size=2 \
roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" \
detected-min-w-h="0,0,0:1,0,0:2,0,0" \
detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 \
parse-func=4 \
net-scale-factor=0.0039215697906911373 \
output-bbox-layer-name=Layer11_bbox \
output-coverage-layer-names=Layer11_cov ! \
queue ! \
nvtracker \
! queue ! tee name=t ! queue ! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! nvvidconv ! nvoverlaysink sync=false async=false enable-last-sample=false \
t. ! queue ! \
nvcaffegie  \
gie-mode=2 \
gie-unique-id=5 \
infer-on-gie-id=1 \
class-thresh-params="0,1.000000,0.100000,3,2" \
infer-on-class-ids="2:" \
model-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel" \
protofile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/deploy.prototxt" \
model-cache="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel_b2_fp16.cache" \
batch-size=2 \
detected-min-w-h="11,0,0:" \
detected-max-w-h="3,1920,1080:" \
roi-top-offset="0,0:" \
roi-bottom-offset="0,0:" \
model-color-format=1 \
meanfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/mean.ppm" \
detect-clr="0:" \
labelfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/labels.txt" \
sec-class-threshold=0.510000 \
parse-func=0 \
is-classifier=TRUE \
offsets="" \
output-coverage-layer-names="softmax" \
sgie-async-mode=TRUE  \
! fakesink async=false sync=false enable-last-sample=false
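
In this pipeline the second nvcaffegie acts as a secondary classifier: gie-mode=2 marks it as secondary, infer-on-gie-id=1 points it at the detections of the primary GIE, and infer-on-class-ids="2:" restricts it to objects of class 2. The available properties and their types can be confirmed with gst-inspect-1.0; a quick filter (the grep pattern is only illustrative):

gst-inspect-1.0 nvcaffegie | grep -E 'gie-mode|gie-unique-id|infer-on|sgie-async|sec-class'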


  • Pipeline with nvcamerasrc and two Caffe models, without using tee. Again, it is better to put the pipeline in a script and execute it; the video runs and the boxes are drawn, but no labels appear.
gst-launch-1.0 nvcamerasrc queue-size=10 sensor-id=0 fpsRange='30 30' ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! \
nvcaffegie  \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt"  \
batch-size=2 \
roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" \
detected-min-w-h="0,0,0:1,0,0:2,0,0" \
detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 \
parse-func=4 \
net-scale-factor=0.0039215697906911373 \
output-bbox-layer-name=Layer11_bbox \
output-coverage-layer-names=Layer11_cov ! \
queue ! \
nvtracker \
! queue ! \
nvcaffegie  \
gie-mode=2 \
gie-unique-id=5 \
infer-on-gie-id=1 \
class-thresh-params="0,1.000000,0.100000,3,2" \
infer-on-class-ids="2:" \
model-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel" \
protofile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/deploy.prototxt" \
model-cache="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel_b2_fp16.cache" \
batch-size=2 \
detected-min-w-h="11,0,0:" \
detected-max-w-h="3,1920,1080:" \
roi-top-offset="0,0:" \
roi-bottom-offset="0,0:" \
model-color-format=1 \
meanfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/mean.ppm" \
detect-clr="0:" \
labelfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/labels.txt" \
sec-class-threshold=0.510000 \
parse-func=0 \
is-classifier=TRUE \
offsets="" \
output-coverage-layer-names="softmax" \
sgie-async-mode=TRUE  \
! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! nvvidconv ! nvoverlaysink sync=false async=false enable-last-sample=false
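
The difference between the two variants: with the tee, the tracker output is split so the display branch (nvosd into nvoverlaysink) runs in parallel with the secondary classification branch, which ends in a fakesink; without the tee, the secondary nvcaffegie sits inline before nvosd, so every buffer passes through the classifier before it is displayed.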
