GstInference/Example pipelines/IMX8

<noinclude>
{{GstInference/Head|previous=Example pipelines|next=Example pipelines/TX2}}
</noinclude>
= Tensorflow =


== Inceptionv2 inference on image file using Tensorflow ==
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need an image file from one of [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true  ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:09.549749856 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:10.672917685 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:10.672976676 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:10.673064576 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:11.793890820 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:11.793951581 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:11.794041207 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:12.920027410 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:12.920093762 26945      0xaf9cf0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
</syntaxhighlight>
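The log above only reports the numeric label index (284 in this run). As a rough sketch, and assuming a labels file with one ImageNet class name per line and a zero-based index (some label files prepend a background entry, which shifts everything by one), the index can be resolved from the shell:
<syntaxhighlight lang=bash>
# Hypothetical lookup of a reported label index in a one-class-per-line labels file.
LABELS='imagenet_labels.txt'
INDEX=284
# Print line INDEX+1, i.e. the class name for a zero-based index.
awk -v idx=$INDEX 'NR == idx + 1' "$LABELS"
</syntaxhighlight>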


== Inceptionv2 inference on video file using Tensorflow ==
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a video file from one of [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:11.878158663 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:13.006776924 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:13.006847113 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,594995)
0:00:13.006946305 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:14.170203673 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:14.170277808 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,595920)
0:00:14.170384768 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:15.285901546 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:15.285964794 27048      0x1d49800 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,593185)
</syntaxhighlight>
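The classification messages above are ordinary GStreamer debug output, so the standard debug environment variables apply. For example, to keep the console clean and capture the inference log to a file (plain GStreamer behaviour, nothing specific to GstInference):
<syntaxhighlight lang=bash>
# Same pipeline, but the inceptionv2 debug output is written to a file without color codes.
GST_DEBUG=inceptionv2:6 GST_DEBUG_FILE=/tmp/inceptionv2.log GST_DEBUG_NO_COLOR=1 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>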


== Inceptionv2 inference on camera stream using Tensorflow ==
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a v4l2 compatible camera (a quick way to check the device is shown after this example)
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:14.614862363 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:15.737842669 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:15.737912053 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:16.855603761 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:16.855673578 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:17.980784789 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:17.980849612 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,077824)
</syntaxhighlight>
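If the pipeline fails to start, it is worth confirming which /dev/video node the camera actually enumerates as and which formats it offers. A quick check with the standard V4L2 and GStreamer tools (assuming v4l-utils is installed on the board):
<syntaxhighlight lang=bash>
# List V4L2 capture devices and the formats/resolutions the chosen node supports.
v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --list-formats-ext
# GStreamer's own view of the available video sources.
gst-device-monitor-1.0 Video/Source
</syntaxhighlight>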


== Inceptionv2 visualization with classification overlay using Tensorflow ==
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
* Output
[Image: Classification overlay example output]
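xvimagesink needs a running X server; on a headless i.MX8 board a common alternative is to encode the overlaid branch to a file instead. The variant below is only a sketch: it assumes x264enc, h264parse and mp4mux are present in the GStreamer installation (substitute the platform's hardware encoder if preferred) and uses -e so the MP4 is finalized on Ctrl-C.
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'
# -e forces EOS on interrupt so mp4mux can write a playable file.
gst-launch-1.0 -e \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
x264enc tune=zerolatency speed-preset=ultrafast ! h264parse ! mp4mux ! filesink location=overlay.mp4
</syntaxhighlight>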


== TinyYolov2 inference on image file using Tensorflow ==
* Get the graph used in this example from this link
* You will need an image file from one of TinyYOLO classes
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true  ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:06.401015400 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:06.817243785 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:06.817315935 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:06.817426814 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.236310555 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.236379100 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:07.236486242 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.659870194 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.659942388 12340      0x1317cf0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
</syntaxhighlight>
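The GstInference elements are ordinary GStreamer plugins, so their pads and configurable properties can be listed with gst-inspect. This is a quick way to confirm the plugins are installed before debugging a pipeline:
<syntaxhighlight lang=bash>
# Confirm the elements are found and list their properties and pads.
gst-inspect-1.0 tinyyolov2
gst-inspect-1.0 inceptionv2
gst-inspect-1.0 detectionoverlay
</syntaxhighlight>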

== TinyYolov2 inference on video file using Tensorflow ==
* Get the graph used in this example from this link
* You will need a video file from one of TinyYOLO classes
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:08.545063684 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:08.955522899 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:08.955600820 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-36,012765, y:-37,118160, width:426,351621, height:480,353663, prob:14,378592]
0:00:08.955824676 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:09.364908234 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:09.364970901 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-36,490694, y:-38,108817, width:427,474399, height:482,318385, prob:14,257683]
0:00:09.365090340 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:09.775848590 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:09.775932404 12504       0xce4400 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-35,991940, y:-37,482425, width:426,533537, height:480,917142, prob:14,313076]
</syntaxhighlight>
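The Box lines above identify each detection only by its numeric class (class:7 here). Assuming the labels.txt file used in the overlay example below holds one class name per line, in the order the network was trained with (that ordering is an assumption), the same shell lookup shown for the classification examples applies:
<syntaxhighlight lang=bash>
# Hypothetical lookup of a detection class index in a one-class-per-line labels file.
LABELS='labels.txt'
CLASS=7
awk -v idx=$CLASS 'NR == idx + 1' "$LABELS"
</syntaxhighlight>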

== TinyYolov2 inference on camera stream using Tensorflow ==
* Get the graph used in this example from this link
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:06.823064776 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.242114002 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.242183276 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:116,796387, y:-31,424289, width:240,876587, height:536,305261, prob:11,859128]
0:00:07.242293677 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.660324555 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.660388215 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:113,453324, y:-27,681194, width:248,010337, height:528,964842, prob:11,603928]
0:00:07.660503502 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:08.079154860 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:08.079230404 12678       0xec24a0 LOG               tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:113,736444, y:-33,747251, width:246,987389, height:541,188374, prob:11,888664]
</syntaxhighlight>

== TinyYolov2 visualization with detection overlay using Tensorflow ==
* Get the graph used in this example from this link
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! videoconvert ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
* Output
[Image: Classification overlay example output]

