<noinclude>
{{GstInference/Head|previous=Example pipelines/PC|next=Example pipelines/TX2|title=GstInference GStreamer pipelines for Jetson NANO}}
</noinclude>
<!-- If you want a custom title for the page, un-comment and edit this line:
{{DISPLAYTITLE:GstInference - <descriptive page name>|noerror}}
-->
{{Ambox
|type=notice
|issue=The following pipelines are deprecated and are kept only for reference. If you are using v0.7 or above, please check the sample pipelines in the [[GstInference/Example pipelines with hierarchical metadata | Example Pipelines]] section.
}}
<br>


<table>
<tr>
<td><div class="clear; float:right">__TOC__</div></td>
<td valign=top>
{{GStreamer debug}}
</td>
</tr>
</table>

== TensorFlow ==

=== InceptionV4 ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link].
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes].
* Use the following pipelines as examples for different scenarios; a quick sanity check of the installation is shown below.
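
Before running these pipelines, you can confirm that GStreamer can see the GstInference elements. A quick sanity check, assuming GstInference is already installed on the board:

<syntaxhighlight lang=bash>
# A "No such element or plugin" error means GstInference is not installed
# or is not on the GST_PLUGIN_PATH.
gst-inspect-1.0 inceptionv4
gst-inspect-1.0 classificationoverlay
</syntaxhighlight>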


====Image file====

<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graphs/InceptionV4_TensorFlow/graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output
<syntaxhighlight lang=bash>
0:00:41.102961125  9500  0x55cd3e54a0 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0,651213)
0:00:41.103261600  9500  0x55cd3e54a0 LOG              inceptionv4 gstinceptionv4.c:208:gst_inceptionv4_preprocess:<net> Preprocess
0:00:41.414504525  9500  0x55cd3e54a0 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:41.415032923  9500  0x55cd3e54a0 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0,651213)
0:00:41.415468297  9500  0x55cd3e54a0 LOG              inceptionv4 gstinceptionv4.c:208:gst_inceptionv4_preprocess:<net> Preprocess
0:00:41.726504445  9500  0x55cd3e54a0 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
</syntaxhighlight>
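
The log reports only the numeric class index (label 282 above). If you keep an ImageNet labels file with one class per line, a lookup can be as simple as the sketch below; whether the index needs a +1 offset depends on how your particular labels file is indexed, so treat the line number as an assumption.

<syntaxhighlight lang=bash>
# Hypothetical lookup for label 282; use '282p' instead if the file is 1-indexed.
sed -n '283p' imagenet_labels.txt
</syntaxhighlight>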


====Video file====

<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graphs/InceptionV4_TensorFlow/graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output
<syntaxhighlight lang=bash>
0:00:43.428868204  9619  0x55b19b6b70 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:43.436573728  9619  0x55b19b6b70 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0,875079)
0:00:43.473135944  9619  0x55b19b6b70 LOG              inceptionv4 gstinceptionv4.c:208:gst_inceptionv4_preprocess:<net> Preprocess
0:00:43.861247785  9619  0x55b19b6b70 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:43.861550447  9619  0x55b19b6b70 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0,872448)
</syntaxhighlight>


====Camera stream====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graphs/InceptionV4_TensorFlow/graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output
<syntaxhighlight lang=bash>
0:00:47.149540519  9748  0x5592110b20 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:47.149877140  9748  0x5592110b20 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0,702133)
0:00:47.150562517  9748  0x5592110b20 LOG              inceptionv4 gstinceptionv4.c:208:gst_inceptionv4_preprocess:<net> Preprocess
0:00:47.460348086  9748  0x5592110b20 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:47.460709916  9748  0x5592110b20 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0,705862)
</syntaxhighlight>
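
If the camera pipeline fails to negotiate caps, a common cause is that the device does not offer a format videoconvert can consume. Assuming the v4l-utils package is installed, the camera's advertised formats can be listed with:

<syntaxhighlight lang=bash>
# List every pixel format, resolution and frame rate the camera advertises.
v4l2-ctl --device=/dev/video0 --list-formats-ext
</syntaxhighlight>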


====Visualization with classification overlay====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graphs/InceptionV4_TensorFlow/graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
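
The tee element feeds the same frames to two branches: the model branch is scaled to the network input size, while the bypass branch keeps the full-resolution frames that the overlay draws on. As an untested variant of the same topology, the display sink can be swapped for an encoder and muxer to record the overlaid video; this sketch assumes omxh264enc is available, as its decoder counterpart omxh264dec is used elsewhere on this page.

<syntaxhighlight lang=bash>
# -e sends EOS on Ctrl+C so qtmux can finalize a playable MP4 file.
gst-launch-1.0 -e \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! omxh264enc ! h264parse ! qtmux ! filesink location=overlay.mp4
</syntaxhighlight>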


*Output
[[File:Inceptionv2 barber.png|center|thumb|inceptionv2_barberchair|link=]]


=== InceptionV1 ===

====RTSP Camera stream====


* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a V4L2-compatible camera (a quick check is shown below)
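
A quick way to confirm the camera enumerates and produces frames before adding RTSP on top, assuming the camera is at /dev/video0:

<syntaxhighlight lang=bash>
# Preview raw camera frames locally; press Ctrl+C to stop.
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink
</syntaxhighlight>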
 
<br>
'''Server Pipeline''' which runs on the host PC
* You will need to install the RidgeRun proprietary [[GstRtspSink | gst-rtsp-sink]] plugin on the PC. Please [[GstInference/Contact_us | contact RidgeRun]].


<syntaxhighlight lang=bash>
gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! videoconvert ! video/x-raw, format=I420, width=640, height=480, framerate=30/1 ! queue ! x265enc option-string="keyint=30:min-keyint=30:repeat-headers=1" ! video/x-h265, width=640, height=480, mapping=/stream1 ! queue ! rtspsink service=5000
</syntaxhighlight>


* Output
<syntaxhighlight lang=bash>
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
</syntaxhighlight>
'''Install dependencies on the NANO board'''
<syntaxhighlight lang="bash" line="line" style="background-color:#FFFF66; color:blue;">
sudo apt install \
libgstrtspserver-1.0-dev \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-good1.0-dev \
libgstreamer-plugins-bad1.0-dev
</syntaxhighlight>
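
To confirm the development packages installed correctly, query the pkg-config metadata they ship:

<syntaxhighlight lang=bash>
# Each command should print a version number rather than an error.
pkg-config --modversion gstreamer-1.0
pkg-config --modversion gstreamer-rtsp-server-1.0
</syntaxhighlight>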
<br>
'''Client Pipeline''' which runs on the NANO board
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
# Hide the GPU from CUDA so inference runs on the CPU
export CUDA_VISIBLE_DEVICES=-1
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv1:6 gst-launch-1.0 -e rtspsrc location="rtsp://<server_ip_address>:5000/stream1" ! queue ! rtph265depay ! queue ! h265parse ! queue ! omxh265dec ! queue ! nvvidconv ! queue ! net.sink_model inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:08.679606626 10086  0x5599c01cf0 LOG              inceptionv1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 665 : (0.295041)
0:00:08.679695321 10086  0x5599c01cf0 LOG              inceptionv1 gstinceptionv1.c:142:gst_inceptionv1_preprocess:<net> Preprocess
0:00:08.892169471 10086  0x5599c01cf0 LOG              inceptionv1 gstinceptionv1.c:153:gst_inceptionv1_postprocess:<net> Postprocess
0:00:08.892256499 10086  0x5599c01cf0 LOG              inceptionv1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 665 : (0.256458)
0:00:08.892378058 10086  0x5599c01cf0 LOG              inceptionv1 gstinceptionv1.c:142:gst_inceptionv1_preprocess:<net> Preprocess
0:00:09.101159620 10086  0x5599c01cf0 LOG              inceptionv1 gstinceptionv1.c:153:gst_inceptionv1_postprocess:<net> Postprocess
0:00:09.101244877 10086  0x5599c01cf0 LOG              inceptionv1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 665 : (0.243692)
</syntaxhighlight>


 
===TinyYoloV2===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need an image file from one of the TinyYOLO classes (a quick file check is shown below)
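
Before launching, you can confirm that the graph and test image are where the variables point; the paths below are examples and should match your own layout:

<syntaxhighlight lang=bash>
# Both files must exist or the pipeline will fail at startup.
ls -lh graphs/TinyYoloV2_TensorFlow/graph_tinyyolov2_tensorflow.pb cat.jpg
</syntaxhighlight>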


====Image file====
<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graphs/TinyYoloV2_TensorFlow/graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output
<syntaxhighlight lang=bash>
0:00:24.558985002  9909  0x557d3278a0 LOG              tinyyolov2 gsttinyyolov2.c:288:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:24.576012429  9909  0x557d3278a0 LOG              tinyyolov2 gstinferencedebug.c:92:gst_inference_print_boxes:<net> Box: [class:7, x:5,710080, y:115,575158, width:345,341579, height:304,490976, prob:14,346013]
</syntaxhighlight>


====Video file====
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graphs/TinyYoloV2_TensorFlow/graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output
<syntaxhighlight lang=bash>
0:00:07.245722660 30545      0x5ad000 LOG              tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.360377432 30545      0x5ad000 LOG              tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.360586455 30545      0x5ad000 LOG              tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.105452, y:-9.139365, width:445.139551, height:487.967720, prob:14.592537]
</syntaxhighlight>


====Camera stream====
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graphs/TinyYoloV2_TensorFlow/graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output
<syntaxhighlight lang=bash>
0:00:39.754924355  5030     0x10ee590 LOG              tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:39.876816786  5030     0x10ee590 LOG              tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:39.876914225  5030     0x10ee590 LOG              tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:147.260736, y:116.184709, width:134.389472, height:245.113627, prob:8.375733]
</syntaxhighlight>


====Visualization with detection overlay====
<syntaxhighlight lang=bash>
CAMERA='/dev/video1'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>


*Output
[[File:TinyYolo barber chair label.png|center|thumb|tinyYolo barber chair by tinyYolo|link=]]


===FaceNet===
====Visualization with detection overlay====
* Get the graph used in this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
* The LABELS and EMBEDDINGS files are in $PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings.
<syntaxhighlight lang=bash>
CAMERA='/dev/video1'
MODEL_LOCATION='graph_facenetv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='output'
LABELS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt"
EMBEDDINGS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt"
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! embeddingoverlay labels="$(cat $LABELS)" embeddings="$(cat $EMBEDDINGS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
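
Since the launch line inlines both files with $(cat ...), a wrong path produces a confusing failure; a cheap preflight check:

<syntaxhighlight lang=bash>
# Both files must be readable; the overlay receives their contents inline.
cat "$LABELS" > /dev/null && cat "$EMBEDDINGS" > /dev/null && echo "labels and embeddings OK"
</syntaxhighlight>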
== TensorFlow Lite ==
=== InceptionV4 ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tflite this link].
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes].
* Use the following pipelines as examples for different scenarios; a backend check is shown below.
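The same GstInference elements are used as in the TensorFlow section; only the backend property changes. Which backends are available depends on how GstInference was built, and can be checked with:
<syntaxhighlight lang=bash>
# The backend property documentation lists the compiled-in backends.
gst-inspect-1.0 inceptionv4 | grep -i -A 3 backend
</syntaxhighlight>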
====Image file====
<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:41.102961125  9500  0x55cd3e54a0 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0,651213)
0:00:41.103261600  9500  0x55cd3e54a0 LOG              inceptionv4 gstinceptionv4.c:208:gst_inceptionv4_preprocess:<net> Preprocess
0:00:41.414504525  9500  0x55cd3e54a0 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:41.415032923  9500  0x55cd3e54a0 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0,651213)
0:00:41.415468297  9500  0x55cd3e54a0 LOG              inceptionv4 gstinceptionv4.c:208:gst_inceptionv4_preprocess:<net> Preprocess
0:00:41.726504445  9500  0x55cd3e54a0 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
</syntaxhighlight>
====Video file====
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:43.428868204  9619  0x55b19b6b70 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:43.436573728  9619  0x55b19b6b70 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0,875079)
0:00:43.473135944  9619  0x55b19b6b70 LOG              inceptionv4 gstinceptionv4.c:208:gst_inceptionv4_preprocess:<net> Preprocess
0:00:43.861247785  9619  0x55b19b6b70 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:43.861550447  9619  0x55b19b6b70 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0,872448)
</syntaxhighlight>
====Camera stream====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:47.149540519  9748  0x5592110b20 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:47.149877140  9748  0x5592110b20 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0,702133)
0:00:47.150562517  9748  0x5592110b20 LOG              inceptionv4 gstinceptionv4.c:208:gst_inceptionv4_preprocess:<net> Preprocess
0:00:47.460348086  9748  0x5592110b20 LOG              inceptionv4 gstinceptionv4.c:219:gst_inceptionv4_postprocess:<net> Postprocess
0:00:47.460709916  9748  0x5592110b20 LOG              inceptionv4 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0,705862)
</syntaxhighlight>
====Visualization with classification overlay====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! videoconvert ! inferenceoverlay font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
*Output
[[File:Inceptionv2 barber.png|center|thumb|inceptionv2_barberchair|link=]]
=== InceptionV1 ===
====RTSP Camera stream====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv1-for-tflite this link]
* You will need a V4L2-compatible camera
<br>
'''Server Pipeline''' which runs on the host PC
* You will need to install the RidgeRun proprietary [[GstRtspSink | gst-rtsp-sink]] plugin on the PC. Please [[GstInference/Contact_us | contact RidgeRun]].
<syntaxhighlight lang=bash>
gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! videoconvert ! video/x-raw, format=I420, width=640, height=480, framerate=30/1 ! queue ! x265enc option-string="keyint=30:min-keyint=30:repeat-headers=1" ! video/x-h265,  width=640, height=480, mapping=/stream1 ! queue ! rtspsink service=5000
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
</syntaxhighlight>
'''Install dependencies on the NANO board'''
<syntaxhighlight lang="bash" line="line" style="background-color:#FFFF66; color:blue;">
sudo apt install \
libgstrtspserver-1.0-dev \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-good1.0-dev \
libgstreamer-plugins-bad1.0-dev
</syntaxhighlight>
<br>
'''Client Pipeline''' which runs on the NANO board
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1.tflite'
LABELS='labels.txt'
# Hide the GPU from CUDA so inference runs on the CPU
export CUDA_VISIBLE_DEVICES=-1
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv1:6 gst-launch-1.0 -e rtspsrc location="rtsp://<server_ip_address>:5000/stream1" ! queue ! rtph265depay ! queue ! h265parse ! queue ! omxh265dec ! queue ! nvvidconv ! queue ! net.sink_model inceptionv1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:08.679606626 10086  0x5599c01cf0 LOG              inceptionv1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 665 : (0.295041)
0:00:08.679695321 10086  0x5599c01cf0 LOG              inceptionv1 gstinceptionv1.c:142:gst_inceptionv1_preprocess:<net> Preprocess
0:00:08.892169471 10086  0x5599c01cf0 LOG              inceptionv1 gstinceptionv1.c:153:gst_inceptionv1_postprocess:<net> Postprocess
0:00:08.892256499 10086  0x5599c01cf0 LOG              inceptionv1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 665 : (0.256458)
0:00:08.892378058 10086  0x5599c01cf0 LOG              inceptionv1 gstinceptionv1.c:142:gst_inceptionv1_preprocess:<net> Preprocess
0:00:09.101159620 10086  0x5599c01cf0 LOG              inceptionv1 gstinceptionv1.c:153:gst_inceptionv1_postprocess:<net> Postprocess
0:00:09.101244877 10086  0x5599c01cf0 LOG              inceptionv1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 665 : (0.243692)
</syntaxhighlight>
===TinyYoloV2===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tflite this link]
* You will need an image file from one of the TinyYOLO classes
====Image file====
<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:24.558985002  9909  0x557d3278a0 LOG              tinyyolov2 gsttinyyolov2.c:288:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:24.576012429  9909  0x557d3278a0 LOG              tinyyolov2 gstinferencedebug.c:92:gst_inference_print_boxes:<net> Box: [class:7, x:5,710080, y:115,575158, width:345,341579, height:304,490976, prob:14,346013]
</syntaxhighlight>
====Video file====
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:07.245722660 30545      0x5ad000 LOG              tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.360377432 30545      0x5ad000 LOG              tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.360586455 30545      0x5ad000 LOG              tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.105452, y:-9.139365, width:445.139551, height:487.967720, prob:14.592537]
</syntaxhighlight>
====Visualization with detection overlay====
<syntaxhighlight lang=bash>
CAMERA='/dev/video1'
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"  \
net.src_bypass ! videoconvert ! inferenceoverlay font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
*Output
[[File:TinyYolo barber chair label.png|center|thumb|tinyYolo barber chair by tinyYolo|link=]]