GstInference GStreamer pipelines for IMX8
Note: The following pipelines are deprecated and kept only as a reference. If you are using v0.7 and above, please check our sample pipelines in the Example Pipelines with hierarchical metadata section.

Problems running the pipelines shown on this page? Please see our GStreamer Debugging guide for help.
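
As a quick reference, the GST_DEBUG values used throughout this page follow GStreamer's standard category:level syntax; a minimal sketch that keeps everything except the inference element quiet:
# ERROR (1) for every category by default; LOG (6) for the inceptionv2 element's category
export GST_DEBUG=1,inceptionv2:6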

Tested Boards

The pipelines in the following sections were tested using a pre-built Ubuntu 18.04 image for the Nitrogen8m board. For more information on how to fetch and flash Ubuntu 18.04 onto the Nitrogen8m, please check our IMX8 dedicated wiki.

Known Issues

InceptionV4 will not run on the Nitrogen8m board due to insufficient RAM (tested on an i.MX8 with 2 GB of RAM).
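
A quick way to check how much memory your board actually has before trying a larger model (standard Linux commands, nothing GstInference-specific):
free -h                      # human-readable total, used and available RAM
grep MemTotal /proc/meminfo  # total RAM as reported by the kernel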

TensorFlow

Inceptionv2 inference on an image file using TensorFlow

IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true  ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:09.549749856 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:10.672917685 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:10.672976676 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:10.673064576 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:11.793890820 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:11.793951581 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:11.794041207 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:12.920027410 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:12.920093762 26945       0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
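
The index printed above ("label 284") can be mapped back to a class name with the same imagenet_labels.txt file used in the overlay example further down; a minimal sketch, assuming the file lists one label per line in index order (check the offset convention of your particular labels file):
# Line 285 corresponds to index 284 under a 0-based, one-label-per-line convention
sed -n '285p' imagenet_labels.txt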

Inceptionv2 inference on a video file using TensorFlow

VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:11.878158663 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:13.006776924 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:13.006847113 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,594995)
0:00:13.006946305 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:14.170203673 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:14.170277808 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,595920)
0:00:14.170384768 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:15.285901546 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:15.285964794 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,593185)
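
Any clip that decodebin can handle works as VIDEO_FILE. If you do not have one at hand, a short test clip can be recorded from a camera; a sketch, assuming your image ships the x264enc encoder (substitute whatever H.264 encoder is available):
# Record roughly 10 seconds (300 buffers at 30 fps) into an MP4 that decodebin can play back;
# -e forces a clean EOS so the file is finalized properly
gst-launch-1.0 -e v4l2src device=/dev/video0 num-buffers=300 ! videoconvert ! x264enc ! mp4mux ! filesink location=cat.mp4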

Inceptionv2 inference on a camera stream using TensorFlow

  • Get the graph used in this example from this link
  • You will need a V4L2-compatible camera (see the device-discovery sketch after the output below)
  • Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:14.614862363 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:15.737842669 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:15.737912053 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:16.855603761 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:16.855673578 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:17.980784789 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:17.980849612 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,077824)
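
If /dev/video0 is not the right device, v4l2-ctl (shipped in the v4l-utils package on most distributions) can enumerate the V4L2 capture devices and their capabilities:
v4l2-ctl --list-devices                          # map camera names to /dev/videoN nodes
v4l2-ctl --device=/dev/video0 --list-formats-ext # pixel formats and resolutions the device supports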

Inceptionv2 visualization with classification overlay using TensorFlow

  • Get the graph used in this example from this link
  • You will need a V4L2-compatible camera
  • Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! autovideosink sync=false
  • Output
Classification overlay example output
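
autovideosink requires a display. On a headless board the same pipeline can write the overlaid stream to a file instead; a sketch of the replacement for the final branch, assuming an x264enc encoder is available on your image:
# Drop-in replacement for the "net.src_bypass ! ... ! autovideosink sync=false" branch above;
# add -e to gst-launch-1.0 and stop with Ctrl+C so the MP4 is finalized
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! x264enc ! mp4mux ! filesink location=overlay.mp4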

TinyYolov2 inference on an image file using TensorFlow

  • Get the graph used in this example from this link
  • You will need an image file showing one of the TinyYOLO classes
  • Pipeline
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true  ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:06.401015400 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:06.817243785 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:06.817315935 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:06.817426814 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.236310555 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.236379100 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:07.236486242 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.659870194 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.659942388 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
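
"class:7" is an index into the 20 object classes the TinyYOLO graph was trained on; it can be mapped to a name with the labels.txt file used in the overlay example below. A minimal sketch, assuming one label per line in training order:
# Line 8 corresponds to class index 7 under a 0-based, one-label-per-line convention
sed -n '8p' labels.txt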

TinyYolov2 inference on a video file using TensorFlow

  • Get the graph used in this example from this link
  • You will need a video file showing one of the TinyYOLO classes
  • Pipeline
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:08.545063684 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:08.955522899 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:08.955600820 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-36,012765, y:-37,118160, width:426,351621, height:480,353663, prob:14,378592]
0:00:08.955824676 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:09.364908234 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:09.364970901 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-36,490694, y:-38,108817, width:427,474399, height:482,318385, prob:14,257683]
0:00:09.365090340 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:09.775848590 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:09.775932404 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-35,991940, y:-37,482425, width:426,533537, height:480,917142, prob:14,313076]

TinyYolov2 inference on a camera stream using TensorFlow

  • Get the graph used in this example from this link
  • You will need a V4L2-compatible camera
  • Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
  • Output
0:00:06.823064776 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.242114002 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.242183276 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:116,796387, y:-31,424289, width:240,876587, height:536,305261, prob:11,859128]
0:00:07.242293677 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.660324555 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.660388215 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:113,453324, y:-27,681194, width:248,010337, height:528,964842, prob:11,603928]
0:00:07.660503502 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:08.079154860 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:08.079230404 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:113,736444, y:-33,747251, width:246,987389, height:541,188374, prob:11,888664]

TinyYolov2 visualization with detection overlay using TensorFlow

  • Get the graph used in this example from this link
  • You will need a V4L2-compatible camera
  • Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! autovideosink sync=false
  • Output
Detection overlay example output
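
To measure the frame rate the whole detection pipeline sustains on the board, the final autovideosink can be wrapped in fpsdisplaysink, a standard GStreamer element; a sketch of the replacement branch (run gst-launch-1.0 with -v to see the measurements):
# Drop-in replacement for the "net.src_bypass ! ... ! autovideosink sync=false" branch above
net.src_bypass ! videoconvert ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! fpsdisplaysink video-sink=autovideosink text-overlay=false sync=false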

