GstInference/Example pipelines/TX2: Difference between revisions

From RidgeRun Developer Wiki
 
(8 intermediate revisions by 2 users not shown)
<noinclude>
{{GstInference/Head|previous=Example pipelines/NANO|next=Example pipelines/Xavier|metakeywords=GstInference|title=GstInference GStreamer pipelines for Jetson TX2}}
</noinclude>


INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
=== Inceptionv4 inference on camera stream using TensorFlow ===
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.

====NVIDIA Camera====

* Pipeline
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! queue ! net.sink_model \
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
===Inceptionv4 visualization with classification overlay Tensorflow===
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM)' ! tee name=t \
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>

* Output
[[File:Tx2-snap-classification-cropped.png|500px|center|thumb|Example classification overlay output|link=]]

=== TinyYolov2 inference on image file using Tensorflow ===
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
0:00:07.662473455 30513      0x5accf0 LOG              tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.662769998 30513      0x5accf0 LOG              tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
</syntaxhighlight>

INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
=== TinyYolov2 inference on camera stream using Tensorflow ===
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
nvarguscamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! 'video/x-raw,format=BGRx' ! queue ! net.sink_model \
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
=== TinyYolov2 visualization with detection overlay Tensorflow ===
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.
====NVIDIA Camera ====
* Pipeline
<syntaxhighlight lang=bash>
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 \
gst-launch-1.0 \
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
net.src_bypass ! videoconvert ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>

* Output
[[File:Tx2-snap-detection-cropped.png|500px|center|thumb|Example detection overlay output|link=]]

=== FaceNet visualization with embedding overlay Tensorflow ===
* Get the graph used on this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.
* LABELS and EMBEDDINGS files are in $PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings.

====NVIDIA Camera ====
* Pipeline
<syntaxhighlight lang=bash>
LABELS='$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt'
EMBEDDINGS='$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM),width=(int)1280,height=(int)720' ! nvvidconv ! 'video/x-raw,format=BGRx,width=(int)1280,height=(int)720' ! videoconvert ! tee name=t \
LABELS='$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt'
EMBEDDINGS='$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \

* Output
[[File:Embeddingoverlay.png|500px|center|thumb|Example embedding overlay output|link=]]

== TensorFlow-Lite ==
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
0:02:22.678740356 30355      0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.678892356 30355      0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
</syntaxhighlight>

MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
=== Inceptionv4 inference on camera stream using TensorFlow-Lite ===
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow-lite this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.

====NVIDIA Camera====

* Pipeline
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! queue ! net.sink_model \
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
===Inceptionv4 visualization with classification overlay TensorFlow-Lite===
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow-lite this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM)' ! tee name=t \
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>

* Output
[[File:Tx2-snap-classification-cropped.png|500px|center|thumb|Example classification overlay output|link=]]

=== TinyYolov2 inference on image file using TensorFlow-Lite ===
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
0:00:07.662473455 30513      0x5accf0 LOG              tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.662769998 30513      0x5accf0 LOG              tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
</syntaxhighlight>

MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
=== TinyYolov2 inference on camera stream using TensorFlow-Lite ===
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow-lite this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
nvarguscamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! 'video/x-raw,format=BGRx' ! queue ! net.sink_model \
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
=== TinyYolov2 visualization with detection overlay TensorFlow-Lite ===
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow-lite this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.
====NVIDIA Camera ====
* Pipeline
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 \
gst-launch-1.0 \
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \

* Output
[[File:Tx2-snap-detection-cropped.png|500px|center|thumb|Example detection overlay output|link=]]

Latest revision as of 12:24, 21 May 2024
Problems running the pipelines shown on this page? Please see our GStreamer Debugging guide for help.
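The `GST_DEBUG=inceptionv4:6` prefix used by the pipelines below sets GStreamer's per-category debug level; the value is a comma-separated list of `category:level` pairs, and level 6 (`LOG`) is what produces the Preprocess/Postprocess lines shown in the outputs. A minimal sketch of composing such a value for both inference elements used on this page:

```shell
# Build a GST_DEBUG value enabling LOG-level (6) output for both of the
# inference elements used on this page. Comma-separated category:level
# pairs are standard GStreamer debug syntax.
CATS="inceptionv4:6,tinyyolov2:6"
GST_DEBUG="$CATS"
echo "GST_DEBUG=$GST_DEBUG"
```

Exporting the variable once (`export GST_DEBUG=...`) avoids repeating the prefix on every `gst-launch-1.0` invocation.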

== Tensorflow ==

=== Inceptionv4 inference on image file using Tensorflow ===

<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:02:22.005527960 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:02:22.168796723 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.168947603 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
0:02:22.169237202 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:02:22.339393463 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.339496918 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
0:02:22.339701878 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:02:22.507804674 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.507950081 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
0:02:22.508232128 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:02:22.678740356 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.678892356 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
</syntaxhighlight>
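The log above reports only the numeric class index ("label 282"). As a rough sketch, the index can be mapped to a name with `sed`; `lookup_label` is a hypothetical helper, and it assumes a labels file with one class name per line and 1-based indexing, both of which depend on how your labels file was generated:

```shell
# Hypothetical helper: map the numeric index from the
# "Highest probability is label N" log line to a class name.
# Assumes one label per line, 1-based; adjust the offset for 0-based files.
lookup_label() {
  idx="$1"; labels="$2"
  sed -n "${idx}p" "$labels"
}

# Demo with a tiny stand-in file (a real ImageNet list has 1000+ entries):
cat > /tmp/demo_labels.txt <<'EOF'
tench
goldfish
tiger cat
EOF
lookup_label 3 /tmp/demo_labels.txt
```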

=== Inceptionv4 inference on video file using TensorFlow ===

<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:11.728307018 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:11.892030154 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:11.892258185 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.686857)
0:00:11.892556808 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.065318539 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.065467786 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.673300)
0:00:12.065759849 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.247159695 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.247309295 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.669102)
0:00:12.247612718 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.419172436 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.419321396 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.667991)
</syntaxhighlight>
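When the pipeline runs for a while, it can be useful to pull the per-frame top-1 index and probability out of a saved log, e.g. to watch confidence drift across frames (0.686 down to 0.667 above). A sketch, with the `sed` pattern tied to this exact debug message text (it is not a stable API):

```shell
# Extract "index probability" pairs from saved GST_DEBUG output.
# The message text is copied from the inceptionv4 log format above.
parse_top1() {
  sed -n 's/.*Highest probability is label \([0-9]*\) : (\([0-9.]*\)).*/\1 \2/p' "$1"
}

# Demo on a captured fragment of the log:
cat > /tmp/demo_log.txt <<'EOF'
0:00:11.892258185 30399 0x5ad000 LOG inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.686857)
0:00:12.065467786 30399 0x5ad000 LOG inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.673300)
EOF
parse_top1 /tmp/demo_log.txt
```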

=== Inceptionv4 inference on camera stream using TensorFlow ===

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.

====NVIDIA Camera====

* Pipeline
<syntaxhighlight lang=bash>
SENSOR_ID=0
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net backend=tensorflow model-location=$MODEL_LOCATION backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

====V4L2====

* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video1'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:12.199657219  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.365172092  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.365271548  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.196048)
0:00:12.365421435  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.530604726  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.530700501  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.179406)
0:00:12.530848565  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.697053611  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.697147818  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.144033)
0:00:12.697295530  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.862007878  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.862104134  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.157707)
0:00:12.862252645  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:13.027090881  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:13.027190273  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.142998)
</syntaxhighlight>
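With a live camera the top-1 probabilities are low and drift (0.14 to 0.19 above), so it helps to tally which class index dominates across frames rather than trusting a single line. A sketch over a saved log, again keyed to the exact debug message text:

```shell
# Count occurrences of each predicted class index in a saved log and
# print the most frequent one ("count index"). Message format as above.
top_label() {
  sed -n 's/.*Highest probability is label \([0-9]*\) .*/\1/p' "$1" \
    | sort | uniq -c | sort -rn | head -n 1
}

# Demo with abbreviated log lines:
cat > /tmp/cam_log.txt <<'EOF'
... Highest probability is label 774 : (0.196048)
... Highest probability is label 774 : (0.179406)
... Highest probability is label 282 : (0.144033)
EOF
top_label /tmp/cam_log.txt
```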

=== Inceptionv4 visualization with classification overlay Tensorflow ===

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need a camera compatible with NVIDIA Libargus API or V4l2.

====NVIDIA Camera====

* Pipeline
SENSOR_ID=0
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM)' ! tee name=t \
t. ! queue max-size-buffers=1 leaky=downstream ! nvvidconv ! 'video/x-raw,format=(string)RGBA' ! net.sink_model \
t. ! queue max-size-buffers=1 leaky=downstream ! nvvidconv ! 'video/x-raw,format=(string)RGBA' ! net.sink_bypass \
inceptionv4 name=net backend=tensorflow model-location=$MODEL_LOCATION backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! nvvidconv ! nvoverlaysink sync=false -v
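All of the overlay pipelines on this page share one topology: a tee duplicates the camera frames, one branch is converted and scaled for the network's sink_model pad, the other reaches sink_bypass untouched, and the overlay element draws the results on the frames coming out of src_bypass. As a sketch, with placeholders for the board-specific elements:

```text
gst-launch-1.0 \
  <source> ! tee name=t \
  t. ! queue max-size-buffers=1 leaky=downstream ! <convert/scale> ! net.sink_model \
  t. ! queue max-size-buffers=1 leaky=downstream ! net.sink_bypass \
  <network element> name=net model-location=<model> ... \
  net.src_bypass ! <overlay element> ! <video sink>
```

The leaky single-buffer queues keep the display branch from stalling when inference runs slower than the capture rate.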

V4L2

  • Pipeline
CAMERA='/dev/video1'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
  • Output
Example classification overlay output

TinyYolov2 inference on image file using TensorFlow

  • Get the graph used in this example from this link
  • You will need an image file containing one of the TinyYOLO classes
  • Pipeline
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:07.137677204 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.266928985 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.267080761 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
0:00:07.267382968 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.394225925 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.394431653 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
0:00:07.394858915 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.527547133 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.527753020 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
0:00:07.528080219 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.662473455 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.662769998 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
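The Box lines above are easy to post-process with standard tools. A hypothetical helper (the field layout is taken from the log format shown here) that reduces each Box line to its numeric fields:

```shell
# Extract class id and box geometry from GstInference debug output.
# Reads log lines like the ones above on stdin and prints
# "class x y width height prob" for each Box entry.
parse_boxes() {
  sed -n 's/.*Box: \[class:\([^,]*\), x:\([^,]*\), y:\([^,]*\), width:\([^,]*\), height:\([^,]*\), prob:\([^]]*\)\].*/\1 \2 \3 \4 \5 \6/p'
}

# e.g. GST_DEBUG=tinyyolov2:6 gst-launch-1.0 ... 2>&1 | parse_boxes
```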

TinyYolov2 inference on video file using TensorFlow

  • Get the graph used in this example from this link
  • You will need a video file containing one of the TinyYOLO classes
  • Pipeline
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:07.245722660 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.360377432 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.360586455 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.105452, y:-9.139365, width:445.139551, height:487.967720, prob:14.592537]
0:00:07.360859318 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.489190714 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.489382873 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.140270, y:-9.193503, width:445.228762, height:488.028163, prob:14.596972]
0:00:07.489736216 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.629190069 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.629379733 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.281640, y:-9.164348, width:445.512899, height:487.908826, prob:14.596945]
0:00:07.629717876 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.761072493 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.761271244 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.338202, y:-9.202273, width:445.624841, height:487.954952, prob:14.592540]
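Note the negative x and y values in the output above: TinyYOLOv2 may predict boxes that extend past the frame edge, and they need to be clamped before cropping or drawing. A small sketch of that step (the 448x448 frame size in the example call is an assumption; substitute your stream's resolution):

```shell
# clamp_box X Y WIDTH HEIGHT FRAME_W FRAME_H
# Clip a predicted box to the frame, printing integer "x y w h".
clamp_box() {
  awk -v x="$1" -v y="$2" -v w="$3" -v h="$4" -v fw="$5" -v fh="$6" 'BEGIN {
    if (x < 0) { w += x; x = 0 }   # shrink width by the off-frame part
    if (y < 0) { h += y; y = 0 }
    if (x + w > fw) w = fw - x     # trim the right overhang
    if (y + h > fh) h = fh - y     # trim the bottom overhang
    printf "%d %d %d %d\n", x, y, w, h
  }'
}

# e.g. clamp_box -46.105452 -9.139365 445.139551 487.967720 448 448
```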

TinyYolov2 inference on camera stream using TensorFlow

  • Get the graph used in this example from this link
  • You will need a camera compatible with the NVIDIA Libargus API or V4L2.

NVIDIA Camera

  • Pipeline
SENSOR_ID=0
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
nvarguscamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! 'video/x-raw,format=BGRx' ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER

V4L2

  • Pipeline
CAMERA='/dev/video1'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:39.754924355  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:39.876816786  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:39.876914225  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:147.260736, y:116.184709, width:134.389472, height:245.113627, prob:8.375733]
0:00:39.877085489  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:39.999699614  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:39.999799198  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:146.957935, y:117.902112, width:134.883825, height:242.143126, prob:7.982772]
0:00:39.999962206  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:40.118613969  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:40.118712017  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:147.147349, y:116.562615, width:134.469630, height:244.181931, prob:8.139100]
0:00:40.118882641  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:40.264861052  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:40.264964828  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:146.618516, y:117.162739, width:135.454029, height:243.785573, prob:8.112847]

TinyYolov2 visualization with detection overlay using TensorFlow

  • Get the graph used in this example from this link
  • You will need a camera compatible with the NVIDIA Libargus API or V4L2.

NVIDIA Camera

  • Pipeline
SENSOR_ID=0
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
GST_DEBUG=tinyyolov2:6 \
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM)' ! tee name=t \
t. ! queue max-size-buffers=1 leaky=downstream ! nvvidconv ! 'video/x-raw,format=(string)RGBA' ! net.sink_model \
t. ! queue max-size-buffers=1 leaky=downstream ! nvvidconv ! 'video/x-raw,format=(string)RGBA' ! net.sink_bypass \
tinyyolov2 name=net backend=tensorflow model-location=$MODEL_LOCATION backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass !  detectionoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4  ! nvvidconv ! nvoverlaysink sync=false -v

V4L2

  • Pipeline
CAMERA='/dev/video1'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
  • Output
Example detection overlay output

FaceNet visualization with embedding overlay using TensorFlow

  • Get the graph used in this example from this link
  • You will need a camera compatible with the NVIDIA Libargus API or V4L2.
  • The LABELS and EMBEDDINGS files are in $PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings.

NVIDIA Camera

  • Pipeline
SENSOR_ID=0
MODEL_LOCATION='graph_facenetv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='output'
LABELS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt"
EMBEDDINGS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt"
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM),width=(int)1280,height=(int)720' ! nvvidconv ! 'video/x-raw,format=BGRx,width=(int)1280,height=(int)720' ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! embeddingoverlay labels="$(cat $LABELS)" embeddings="$(cat $EMBEDDINGS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false

V4L2

  • Pipeline
CAMERA='/dev/video1'
MODEL_LOCATION='graph_facenetv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='output'
LABELS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt"
EMBEDDINGS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt"
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! embeddingoverlay labels="$(cat $LABELS)" embeddings="$(cat $EMBEDDINGS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false


  • Output
Example embedding overlay output
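facenetv1 emits an embedding vector per detected face, and the overlay labels a face with whichever stored embedding is nearest. The comparison itself is just a Euclidean distance; a minimal sketch, assuming vectors are given as whitespace-separated numbers (the on-disk layout of embeddings.txt may differ, so check the file shipped with GstInference):

```shell
# embedding_distance "V1" "V2"
# Euclidean distance between two embedding vectors written as
# whitespace-separated numbers; the nearest stored vector wins.
embedding_distance() {
  awk -v a="$1" -v b="$2" 'BEGIN {
    n = split(a, x); split(b, y)
    for (i = 1; i <= n; i++) { d = x[i] - y[i]; s += d * d }
    printf "%.6f\n", sqrt(s)
  }'
}

# e.g. embedding_distance "0.12 -0.08 0.33" "0.10 -0.02 0.40"
```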

TensorFlow-Lite

Inceptionv4 inference on image file using TensorFlow-Lite

IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
  • Output
0:02:22.005527960 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:02:22.168796723 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.168947603 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
0:02:22.169237202 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:02:22.339393463 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.339496918 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
0:02:22.339701878 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:02:22.507804674 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.507950081 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
0:02:22.508232128 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:02:22.678740356 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.678892356 30355       0x5accf0 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)

Inceptionv4 inference on video file using TensorFlow-Lite

VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
  • Output
0:00:11.728307018 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:11.892030154 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:11.892258185 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.686857)
0:00:11.892556808 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.065318539 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.065467786 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.673300)
0:00:12.065759849 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.247159695 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.247309295 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.669102)
0:00:12.247612718 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.419172436 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.419321396 30399       0x5ad000 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.667991)

Inceptionv4 inference on camera stream using TensorFlow-Lite

  • Get the graph used in this example from this link
  • You will need a camera compatible with the NVIDIA Libargus API or V4L2.

NVIDIA Camera

  • Pipeline
SENSOR_ID=0
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! queue ! net.sink_model \
inceptionv4 name=net backend=tflite model-location=$MODEL_LOCATION labels="$(cat $LABELS)"

V4L2

  • Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
  • Output
0:00:12.199657219  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.365172092  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.365271548  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.196048)
0:00:12.365421435  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.530604726  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.530700501  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.179406)
0:00:12.530848565  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.697053611  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.697147818  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.144033)
0:00:12.697295530  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.862007878  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.862104134  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.157707)
0:00:12.862252645  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:200:gst_inceptionv4_preprocess:<net> Preprocess
0:00:13.027090881  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:00:13.027190273  4675      0x10ee590 LOG              inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 774 : (0.142998)

Inceptionv4 visualization with classification overlay using TensorFlow-Lite

  • Get the graph used in this example from this link
  • You will need a camera compatible with the NVIDIA Libargus API or V4L2.

NVIDIA Camera

  • Pipeline
SENSOR_ID=0
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM)' ! tee name=t \
t. ! queue max-size-buffers=1 leaky=downstream ! nvvidconv ! 'video/x-raw,format=(string)RGBA' ! net.sink_model \
t. ! queue max-size-buffers=1 leaky=downstream ! nvvidconv ! 'video/x-raw,format=(string)RGBA' ! net.sink_bypass \
inceptionv4 name=net backend=tflite model-location=$MODEL_LOCATION labels="$(cat $LABELS)" \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! nvvidconv ! nvoverlaysink sync=false -v

V4L2

  • Pipeline
CAMERA='/dev/video1'
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net backend=tflite model-location=$MODEL_LOCATION labels="$(cat $LABELS)" \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
  • Output
Example classification overlay output

TinyYolov2 inference on image file using TensorFlow-Lite

  • Get the graph used in this example from this link
  • You will need an image file containing one of the TinyYOLO classes
  • Pipeline
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
tinyyolov2 name=net backend=tflite model-location=$MODEL_LOCATION labels="$(cat $LABELS)"
  • Output
0:00:07.137677204 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.266928985 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.267080761 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
0:00:07.267382968 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.394225925 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.394431653 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
0:00:07.394858915 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.527547133 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.527753020 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
0:00:07.528080219 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.662473455 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.662769998 30513       0x5accf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]

TinyYolov2 inference on video file using TensorFlow-Lite

  • Get the graph used in this example from this link
  • You will need a video file containing one of the TinyYOLO classes
  • Pipeline
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
tinyyolov2 name=net backend=tflite model-location=$MODEL_LOCATION labels="$(cat $LABELS)"
  • Output
0:00:07.245722660 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.360377432 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.360586455 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.105452, y:-9.139365, width:445.139551, height:487.967720, prob:14.592537]
0:00:07.360859318 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.489190714 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.489382873 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.140270, y:-9.193503, width:445.228762, height:488.028163, prob:14.596972]
0:00:07.489736216 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.629190069 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.629379733 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.281640, y:-9.164348, width:445.512899, height:487.908826, prob:14.596945]
0:00:07.629717876 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.761072493 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.761271244 30545       0x5ad000 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-46.338202, y:-9.202273, width:445.624841, height:487.954952, prob:14.592540]

TinyYolov2 inference on camera stream using TensorFlow-Lite

  • Get the graph used in this example from this link
  • You will need a camera compatible with the NVIDIA Libargus API or V4L2.

NVIDIA Camera

  • Pipeline
SENSOR_ID=0
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
nvarguscamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! 'video/x-raw,format=BGRx' ! queue ! net.sink_model \
tinyyolov2 name=net backend=tflite model-location=$MODEL_LOCATION labels="$(cat $LABELS)"

V4L2

  • Pipeline
CAMERA='/dev/video1'
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
  • Output
0:00:39.754924355  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:39.876816786  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:39.876914225  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:147.260736, y:116.184709, width:134.389472, height:245.113627, prob:8.375733]
0:00:39.877085489  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:39.999699614  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:39.999799198  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:146.957935, y:117.902112, width:134.883825, height:242.143126, prob:7.982772]
0:00:39.999962206  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:40.118613969  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:40.118712017  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:147.147349, y:116.562615, width:134.469630, height:244.181931, prob:8.139100]
0:00:40.118882641  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:40.264861052  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:40.264964828  5030      0x10ee590 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:4, x:146.618516, y:117.162739, width:135.454029, height:243.785573, prob:8.112847]

TinyYolov2 visualization with detection overlay using TensorFlow-Lite

  • Get the graph used in this example from this link
  • You will need a camera compatible with the NVIDIA Libargus API or V4L2.

NVIDIA Camera

  • Pipeline
SENSOR_ID=0
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
GST_DEBUG=tinyyolov2:6 \
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM)' ! tee name=t \
t. ! queue max-size-buffers=1 leaky=downstream ! nvvidconv ! 'video/x-raw,format=(string)RGBA' ! net.sink_model \
t. ! queue max-size-buffers=1 leaky=downstream ! nvvidconv ! 'video/x-raw,format=(string)RGBA' ! net.sink_bypass \
tinyyolov2 name=net backend=tflite model-location=$MODEL_LOCATION labels="$(cat $LABELS)" \
net.src_bypass !  detectionoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4  ! nvvidconv ! nvoverlaysink sync=false -v

V4L2

  • Pipeline
CAMERA='/dev/video1'
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net backend=tflite model-location=$MODEL_LOCATION labels="$(cat $LABELS)" \
net.src_bypass ! videoconvert ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false


  • Output
Example detection overlay output

