GstInference GStreamer pipelines on PC
Make sure you also check GstInference's companion project: R2Inference.
The following pipelines are deprecated and kept only for reference. If you are using v0.7 or above, please check the sample pipelines in the Example Pipelines section.
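Before trying any of the pipelines below, it can save time to confirm that the GstInference elements are actually visible to GStreamer. A minimal sketch, assuming GstInference is installed on the default plugin path:

```shell
# Sanity check: query GStreamer's registry for one of the inference
# elements used in the examples below (inceptionv1).
if gst-inspect-1.0 inceptionv1 >/dev/null 2>&1; then
  STATUS="found"
else
  STATUS="missing"  # build/install GstInference before running the pipelines
fi
echo "inceptionv1 element: $STATUS"
```

If the element is missing, check `GST_PLUGIN_PATH` and the GstInference installation prefix.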
|
Images used for the classification task
- Cat image to classify using Inception, MobileNet, and ResNet
TensorFlow
Inceptionv1
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:09.549749856 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:10.672917685 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:10.672976676 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:10.673064576 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:11.793890820 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:11.793951581 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:11.794041207 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:12.920027410 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:12.920093762 26945 0xaf9cf0 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 284 : (0,691864)
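The log reports the predicted class only as a numeric index (284 above). A small sketch for mapping that index back to a class name, assuming `imagenet_labels.txt` holds one class name per line; note that some label files prepend a "background" entry, which shifts the offset by one:

```shell
# Hypothetical helper: print the class name for a label index from the
# debug log. Assumes line N+1 of the labels file holds label index N.
label_name() {
  sed -n "$(( $1 + 1 ))p" "$2"
}

if [ -f imagenet_labels.txt ]; then
  label_name 284 imagenet_labels.txt   # index reported by the pipeline above
fi
```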
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:11.878158663 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:13.006776924 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:13.006847113 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 282 : (0,594995)
0:00:13.006946305 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:14.170203673 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:14.170277808 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 282 : (0,595920)
0:00:14.170384768 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:15.285901546 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:15.285964794 27048 0x1d49800 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 282 : (0,593185)
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:14.614862363 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:15.737842669 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:15.737912053 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:16.855603761 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:16.855673578 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:199:gst_inceptionv1_preprocess:<net> Preprocess
0:00:17.980784789 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:231:gst_inceptionv1_postprocess:<net> Postprocess
0:00:17.980849612 27227 0x19cd4a0 LOG inceptionv1 gstinceptionv1.c:252:gst_inceptionv1_postprocess:<net> Highest probability is label 838 : (0,077824)
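With `GST_DEBUG` enabled the predictions are interleaved with other log traffic on stderr. A sketch for filtering a saved log down to just the predictions; the log file name here is an arbitrary example (save it with `gst-launch-1.0 ... 2> inference.log`):

```shell
# Reduce a saved GST_DEBUG log to the per-frame prediction lines.
extract_predictions() {
  grep -o 'Highest probability is label [0-9]* : ([0-9,.]*)' "$1"
}

if [ -f inference.log ]; then
  extract_predictions inference.log
fi
```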
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output: a video window opens showing the camera stream with the top classification label overlaid.
Inceptionv2
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:09.549749856 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:10.672917685 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:10.672976676 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:10.673064576 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:11.793890820 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:11.793951581 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:11.794041207 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:12.920027410 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:12.920093762 26945 0xaf9cf0 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 284 : (0,691864)
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:11.878158663 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:13.006776924 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:13.006847113 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,594995)
0:00:13.006946305 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:14.170203673 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:14.170277808 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,595920)
0:00:14.170384768 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:15.285901546 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:15.285964794 27048 0x1d49800 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 282 : (0,593185)
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:14.614862363 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:15.737842669 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:15.737912053 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:16.855603761 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:16.855673578 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:17.980784789 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:17.980849612 27227 0x19cd4a0 LOG inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,077824)
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output: a video window opens showing the camera stream with the top classification label overlaid.
Inceptionv3
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:09.549749856 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:10.672917685 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:10.672976676 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:10.673064576 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:11.793890820 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:11.793951581 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:11.794041207 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:12.920027410 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:12.920093762 26945 0xaf9cf0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 284 : (0,691864)
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:11.878158663 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:13.006776924 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:13.006847113 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 282 : (0,594995)
0:00:13.006946305 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:14.170203673 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:14.170277808 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 282 : (0,595920)
0:00:14.170384768 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:15.285901546 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:15.285964794 27048 0x1d49800 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 282 : (0,593185)
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:14.614862363 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:15.737842669 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:15.737912053 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:16.855603761 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:16.855673578 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:17.980784789 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:17.980849612 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,077824)
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output: a video window opens showing the camera stream with the top classification label overlaid.
Inceptionv4
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:09.549749856 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:10.672917685 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:10.672976676 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:10.673064576 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:11.793890820 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:11.793951581 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:11.794041207 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:12.920027410 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:12.920093762 26945 0xaf9cf0 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 284 : (0,691864)
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:11.878158663 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:13.006776924 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:13.006847113 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0,594995)
0:00:13.006946305 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:14.170203673 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:14.170277808 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0,595920)
0:00:14.170384768 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:15.285901546 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:15.285964794 27048 0x1d49800 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0,593185)
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:14.614862363 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:15.737842669 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:15.737912053 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:16.855603761 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:16.855673578 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:199:gst_inceptionv4_preprocess:<net> Preprocess
0:00:17.980784789 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:231:gst_inceptionv4_postprocess:<net> Postprocess
0:00:17.980849612 27227 0x19cd4a0 LOG inceptionv4 gstinceptionv4.c:252:gst_inceptionv4_postprocess:<net> Highest probability is label 838 : (0,077824)
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output: a video window opens showing the camera stream with the top classification label overlaid.
MobileNetv2
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_mobilenetv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
GST_DEBUG=mobilenetv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:01.660006560 18 0x1138d90 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.387938090 18 0x1138d90 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:02.387975769 18 0x1138d90 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0.183014)
0:00:02.390193061 18 0x1138d90 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.436405691 18 0x1138d90 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:02.436427612 18 0x1138d90 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0.183014)
0:00:02.437487341 18 0x1138d90 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.467100635 18 0x1138d90 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:02.467123380 18 0x1138d90 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0.183014)
0:00:02.468190400 18 0x1138d90 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.497410196 18 0x1138d90 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:02.497432133 18 0x1138d90 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 282 : (0.183014)
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_mobilenetv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
GST_DEBUG=mobilenetv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:01.901025239 60 0x248dc00 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.176679623 60 0x248dc00 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:02.176702018 60 0x248dc00 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0.306512)
0:00:02.176740543 60 0x248dc00 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.208491216 60 0x248dc00 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:02.208517379 60 0x248dc00 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0.307123)
0:00:02.208559346 60 0x248dc00 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.238110702 60 0x248dc00 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:02.238133192 60 0x248dc00 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0.318610)
0:00:02.238168437 60 0x248dc00 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.267137242 60 0x248dc00 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:02.267159969 60 0x248dc00 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0.323910)
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_mobilenetv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
GST_DEBUG=mobilenetv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:01.177456974 114 0x2db18a0 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:01.415812954 114 0x2db18a0 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:01.415834549 114 0x2db18a0 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 814 : (0.056321)
0:00:01.415870129 114 0x2db18a0 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:01.447472786 114 0x2db18a0 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:01.447492954 114 0x2db18a0 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 814 : (0.093839)
0:00:01.447522930 114 0x2db18a0 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:01.477011889 114 0x2db18a0 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:01.477031365 114 0x2db18a0 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 814 : (0.114949)
0:00:01.477061599 114 0x2db18a0 LOG mobilenetv2 gstmobilenetv2.c:140:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:01.506820855 114 0x2db18a0 LOG mobilenetv2 gstmobilenetv2.c:151:gst_mobilenetv2_postprocess:<net> Postprocess
0:00:01.506841456 114 0x2db18a0 LOG mobilenetv2 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 814 : (0.097499)
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_mobilenetv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output: a video window opens showing the camera stream with the top classification label overlaid.
Resnet50v1
Image file
- Get the graph used on this example from RidgeRun Store
- You will need a image file from one of ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg MODEL_LOCATION='graph_resnetv1_tensorflow.pb' INPUT_LAYER='input_tensor' OUTPUT_LAYER='softmax_tensor'
GST_DEBUG=resnet50v1:6 gst-launch-1.0 \ multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \ resnet50v1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:01.944768522 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.944803563 157 0xfccd90 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 284 : (0.271051)
0:00:01.947003178 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:02.111978575 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:02.112000558 157 0xfccd90 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 284 : (0.271051)
0:00:02.113091931 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:02.212289668 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:02.212310188 157 0xfccd90 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 284 : (0.271051)
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file from one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4' MODEL_LOCATION='graph_resnetv1_tensorflow.pb' INPUT_LAYER='input_tensor' OUTPUT_LAYER='softmax_tensor'
GST_DEBUG=resnet50v1:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
resnet50v1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:00.915688134 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:00.915709354 240 0x18cbc00 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0.537144)
0:00:00.915747394 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:01.018904132 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.018924929 240 0x18cbc00 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0.538948)
0:00:01.018976948 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:01.120286331 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.120306927 240 0x18cbc00 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0.525331)
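To get a quick summary of how confident the model is across frames, the per-frame probabilities can be averaged. A minimal sketch using the three values printed in the log above:

```shell
# Average the per-frame probabilities reported above (the three values are
# copied verbatim from the Postprocess lines).
MEAN=$(printf '%s\n' 0.537144 0.538948 0.525331 \
  | awk '{ sum += $1; n++ } END { printf "%.4f", sum / n }')
echo "mean=$MEAN"
```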
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_resnetv1_tensorflow.pb' INPUT_LAYER='input_tensor' OUTPUT_LAYER='softmax_tensor'
GST_DEBUG=resnet50v1:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
resnet50v1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:01.842896607 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.842917966 294 0x14dd8a0 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 425 : (0.048243)
0:00:01.842955409 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:01.948003304 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.948024035 294 0x14dd8a0 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 611 : (0.065279)
0:00:01.948055304 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:02.052442770 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:02.052463202 294 0x14dd8a0 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 611 : (0.089816)
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_resnetv1_tensorflow.pb' INPUT_LAYER='input_tensor' OUTPUT_LAYER='softmax_tensor' LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
resnet50v1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output
TinyYolov2
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file from one of the TinyYOLO classes
- Pipeline
IMAGE_FILE='cat.jpg' MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb' INPUT_LAYER='input/Placeholder' OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:06.401015400 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:06.817243785 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:06.817315935 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:06.817426814 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.236310555 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.236379100 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:07.236486242 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.659870194 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.659942388 12340 0x1317cf0 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
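Each Box entry in the log can be parsed into its fields when post-processing a capture. These particular logs were produced under a locale that prints decimal commas, so the coordinate is normalized to a period. A minimal sketch using a Box line copied from the output above (the sed patterns are illustrative, not part of GstInference):

```shell
# Parse a TinyYOLO "Box:" debug line into its fields. The sample line is
# copied verbatim from the log above.
LINE='Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]'
CLASS=$(printf '%s\n' "$LINE" | sed -n 's/.*class:\([0-9]*\).*/\1/p')
# Grab the x coordinate and convert its decimal comma to a period.
X=$(printf '%s\n' "$LINE" | sed -n 's/.*x:\(-\{0,1\}[0-9]*,[0-9]*\).*/\1/p' | tr ',' '.')
echo "class=$CLASS x=$X"
```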
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file from one of the TinyYOLO classes
- Pipeline
VIDEO_FILE='cat.mp4' MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb' INPUT_LAYER='input/Placeholder' OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:08.545063684 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:08.955522899 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:08.955600820 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-36,012765, y:-37,118160, width:426,351621, height:480,353663, prob:14,378592]
0:00:08.955824676 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:09.364908234 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:09.364970901 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-36,490694, y:-38,108817, width:427,474399, height:482,318385, prob:14,257683]
0:00:09.365090340 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:09.775848590 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:09.775932404 12504 0xce4400 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:-35,991940, y:-37,482425, width:426,533537, height:480,917142, prob:14,313076]
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb' INPUT_LAYER='input/Placeholder' OUTPUT_LAYER='add_8'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:06.823064776 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.242114002 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.242183276 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:116,796387, y:-31,424289, width:240,876587, height:536,305261, prob:11,859128]
0:00:07.242293677 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:07.660324555 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.660388215 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:113,453324, y:-27,681194, width:248,010337, height:528,964842, prob:11,603928]
0:00:07.660503502 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:479:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:08.079154860 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:08.079230404 12678 0xec24a0 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:14, x:113,736444, y:-33,747251, width:246,987389, height:541,188374, prob:11,888664]
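As the logs show, the reported boxes can extend past the frame (negative x or y, width or height larger than the input). If downstream processing needs coordinates inside the frame, the box can be clamped first. A minimal sketch with rounded values taken from the log in the Image file section above; the 416x416 frame size is an assumption about the model input, not something stated in this page:

```shell
# Clamp a detection box to the frame bounds. The box values are rounded
# versions of a logged box; FRAME_W/FRAME_H are assumed, not from the log.
X=-55; Y=25; W=396; H=423; FRAME_W=416; FRAME_H=416
X1=$X; [ "$X1" -lt 0 ] && X1=0
Y1=$Y; [ "$Y1" -lt 0 ] && Y1=0
X2=$((X + W)); [ "$X2" -gt "$FRAME_W" ] && X2=$FRAME_W
Y2=$((Y + H)); [ "$Y2" -gt "$FRAME_H" ] && Y2=$FRAME_H
echo "clamped box: ($X1,$Y1)-($X2,$Y2)"
```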
Visualization with detection overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb' INPUT_LAYER='input/Placeholder' OUTPUT_LAYER='add_8' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
- Output
TinyYolov3
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file from one of the TinyYOLO classes
- Pipeline
IMAGE_FILE='cat.jpg' MODEL_LOCATION='graph_tinyyolov3_tensorflow.pb' INPUT_LAYER='inputs' OUTPUT_LAYER='output_boxes'
GST_DEBUG=tinyyolov3:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:06.401015400 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:06.817243785 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:06.817315935 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:06.817426814 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:07.236310555 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:07.236379100 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:07.236486242 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:07.659870194 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:07.659942388 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file from one of the TinyYOLO classes
- Pipeline
VIDEO_FILE='cat.mp4' MODEL_LOCATION='graph_tinyyolov3_tensorflow.pb' INPUT_LAYER='inputs' OUTPUT_LAYER='output_boxes'
GST_DEBUG=tinyyolov3:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:08.545063684 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:08.955522899 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:08.955600820 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-36,012765, y:-37,118160, width:426,351621, height:480,353663, prob:14,378592]
0:00:08.955824676 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:09.364908234 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:09.364970901 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-36,490694, y:-38,108817, width:427,474399, height:482,318385, prob:14,257683]
0:00:09.365090340 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:09.775848590 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:09.775932404 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-35,991940, y:-37,482425, width:426,533537, height:480,917142, prob:14,313076]
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_tinyyolov3_tensorflow.pb' INPUT_LAYER='inputs' OUTPUT_LAYER='output_boxes'
GST_DEBUG=tinyyolov3:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
- Output
0:00:06.823064776 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:07.242114002 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:07.242183276 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:14, x:116,796387, y:-31,424289, width:240,876587, height:536,305261, prob:11,859128]
0:00:07.242293677 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:07.660324555 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:07.660388215 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:14, x:113,453324, y:-27,681194, width:248,010337, height:528,964842, prob:11,603928]
0:00:07.660503502 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:08.079154860 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:08.079230404 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:14, x:113,736444, y:-33,747251, width:246,987389, height:541,188374, prob:11,888664]
Visualization with detection overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_tinyyolov3_tensorflow.pb' INPUT_LAYER='inputs' OUTPUT_LAYER='output_boxes' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
- Output
FaceNet
Visualization with embedding overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
- The LABELS and EMBEDDINGS files are located in $PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings.
CAMERA='/dev/video0' MODEL_LOCATION='graph_facenetv1_tensorflow.pb' INPUT_LAYER='input' OUTPUT_LAYER='output' LABELS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt" EMBEDDINGS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt"
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! embeddingoverlay labels="$(cat $LABELS)" embeddings="$(cat $EMBEDDINGS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output
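The labels and embeddings files passed to the overlay are expected to describe the same set of identities, so comparing their line counts is a cheap sanity check before launching the pipeline. A sketch using throwaway sample files (in practice, point the variables at the files under tests/examples/embedding/embeddings):

```shell
# Sanity-check that a labels file and an embeddings file have the same number
# of entries. The two files created here are disposable samples.
SAMPLE_LABELS=$(mktemp); SAMPLE_EMBEDDINGS=$(mktemp)
printf 'alice\nbob\n' > "$SAMPLE_LABELS"
printf '0.1 0.2\n0.3 0.4\n' > "$SAMPLE_EMBEDDINGS"
NL=$(wc -l < "$SAMPLE_LABELS")
NE=$(wc -l < "$SAMPLE_EMBEDDINGS")
if [ "$NL" -eq "$NE" ]; then
  echo "OK: $NL entries"
else
  echo "Mismatch: $NL labels vs $NE embeddings"
fi
rm -f "$SAMPLE_LABELS" "$SAMPLE_EMBEDDINGS"
```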
Tensorflow Lite
Inceptionv1
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file from one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv1.tflite'
GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.820787752 14910 0x55fad0914ed0 LOG inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:02.820811267 14910 0x55fad0914ed0 LOG inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0,124103)
0:00:02.820816935 14910 0x55fad0914ed0 LOG inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:02.820909931 14910 0x55fad0914ed0 LOG inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 49, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 98 Class : 283 Label : tiger cat Probability : 0,124103 Classes : 1001 }, ], predictions : [ ] }
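With the newer inference meta, the resolved label string appears directly in the printed prediction, so it can be pulled out of a captured log without a separate index lookup. A minimal sketch using a fragment of the prediction printed above (the sed pattern is illustrative, not part of GstInference):

```shell
# Extract the class label from a printed inference prediction. The sample
# text is copied from the log above.
PRED='classes : [ { Id : 98 Class : 283 Label : tiger cat Probability : 0,124103 Classes : 1001 }, ]'
LABEL=$(printf '%s\n' "$PRED" | sed -n 's/.*Label : \(.*\) Probability.*/\1/p')
echo "label=$LABEL"
```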
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file from one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv1.tflite'
GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.660861495 16805 0x558f50081850 LOG inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:02.704949141 16805 0x558f50081850 LOG inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:02.704973078 16805 0x558f50081850 LOG inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,421085)
0:00:02.704978817 16805 0x558f50081850 LOG inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:02.705073055 16805 0x558f50081850 LOG inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 47, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 94 Class : 286 Label : Egyptian cat Probability : 0,421085 Classes : 1001 }, ], predictions : [ ] }
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv1.tflite'
GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.820787752 14910 0x55fad0914ed0 LOG inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:02.820811267 14910 0x55fad0914ed0 LOG inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0,124103)
0:00:02.820816935 14910 0x55fad0914ed0 LOG inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:02.820909931 14910 0x55fad0914ed0 LOG inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 49, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 98 Class : 283 Label : tiger cat Probability : 0,124103 Classes : 1001 }, ], predictions : [ ] }
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_inceptionv1.tflite' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" style=2 font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output
Inceptionv2
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file from one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv2.tflite'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:07.851985949 14671 0x563426d5ded0 LOG inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:07.931498739 14671 0x563426d5ded0 LOG inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:07.931528235 14671 0x563426d5ded0 LOG inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,782229)
0:00:07.931538163 14671 0x563426d5ded0 LOG inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:07.931645047 14671 0x563426d5ded0 LOG inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 108, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 216 Class : 286 Label : Egyptian cat Probability : 0,782229 Classes : 1001 }, ], predictions : [ ] }
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file from one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv2.tflite'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.664634195 16873 0x561f1782b850 LOG inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:02.664702995 16873 0x561f1782b850 LOG inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,620495)
0:00:02.664716316 16873 0x561f1782b850 LOG inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:02.665002849 16873 0x561f1782b850 LOG inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 32, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 64 Class : 286 Label : Egyptian cat Probability : 0,620495 Classes : 1001 }, ], predictions : [ ] }
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv2.tflite'
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:07.851985949 14671 0x563426d5ded0 LOG inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:07.931498739 14671 0x563426d5ded0 LOG inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:07.931528235 14671 0x563426d5ded0 LOG inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,782229)
0:00:07.931538163 14671 0x563426d5ded0 LOG inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:07.931645047 14671 0x563426d5ded0 LOG inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 108, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 216 Class : 286 Label : Egyptian cat Probability : 0,782229 Classes : 1001 }, ], predictions : [ ] }
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_inceptionv2.tflite' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" style=2 font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output
Inceptionv3
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file from one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv3.tflite'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.176072412 14946 0x557b199d4ed0 LOG inceptionv3 gstinceptionv3.c:161:gst_inceptionv3_postprocess_old:<net> Postprocess
0:00:02.176098336 14946 0x557b199d4ed0 LOG inceptionv3 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,883076)
0:00:02.176122466 14946 0x557b199d4ed0 LOG inceptionv3 gstinceptionv3.c:186:gst_inceptionv3_postprocess_new:<net> Postprocess Meta
0:00:02.176226140 14946 0x557b199d4ed0 LOG inceptionv3 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 11, enabled : True, bbox : { x : 0 y : 0 width : 299 height : 299 }, classes : [ { Id : 22 Class : 286 Label : Egyptian cat Probability : 0,883076 Classes : 1001 }, ], predictions : [ ] }
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file from one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv3.tflite'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.685245277 16898 0x55b256c93850 LOG inceptionv3 gstinceptionv3.c:161:gst_inceptionv3_postprocess_old:<net> Postprocess
0:00:02.685292515 16898 0x55b256c93850 LOG inceptionv3 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,972109)
0:00:02.685299510 16898 0x55b256c93850 LOG inceptionv3 gstinceptionv3.c:186:gst_inceptionv3_postprocess_new:<net> Postprocess Meta
0:00:02.685411145 16898 0x55b256c93850 LOG inceptionv3 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 12, enabled : True, bbox : { x : 0 y : 0 width : 299 height : 299 }, classes : [ { Id : 24 Class : 286 Label : Egyptian cat Probability : 0,972109 Classes : 1001 }, ], predictions : [ ] }
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv3.tflite'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.176072412 14946 0x557b199d4ed0 LOG inceptionv3 gstinceptionv3.c:161:gst_inceptionv3_postprocess_old:<net> Postprocess
0:00:02.176098336 14946 0x557b199d4ed0 LOG inceptionv3 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,883076)
0:00:02.176122466 14946 0x557b199d4ed0 LOG inceptionv3 gstinceptionv3.c:186:gst_inceptionv3_postprocess_new:<net> Postprocess Meta
0:00:02.176226140 14946 0x557b199d4ed0 LOG inceptionv3 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 11, enabled : True, bbox : { x : 0 y : 0 width : 299 height : 299 }, classes : [ { Id : 22 Class : 286 Label : Egyptian cat Probability : 0,883076 Classes : 1001 }, ], predictions : [ ] }
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv3.tflite'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" style=2 font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output
Inceptionv4
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file from one of the ImageNet classes
- Pipeline
IMAGE_FILE=cat.jpg LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv4.tflite'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.039483790 14972 0x55c45dd99ed0 LOG inceptionv4 gstinceptionv4.c:209:gst_inceptionv4_preprocess:<net> Preprocess
0:00:02.382000009 14972 0x55c45dd99ed0 LOG inceptionv4 gstinceptionv4.c:221:gst_inceptionv4_postprocess_old:<net> Postprocess
0:00:02.382024685 14972 0x55c45dd99ed0 LOG inceptionv4 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,956486)
0:00:02.382030318 14972 0x55c45dd99ed0 LOG inceptionv4 gstinceptionv4.c:246:gst_inceptionv4_postprocess_new:<net> Postprocess Meta
0:00:02.382154899 14972 0x55c45dd99ed0 LOG inceptionv4 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 5, enabled : True, bbox : { x : 0 y : 0 width : 299 height : 299 }, classes : [ { Id : 10 Class : 286 Label : Egyptian cat Probability : 0,956486 Classes : 1001 }, ], predictions : [ ] }
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv4.tflite'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:06.223168928 16998 0x55a84d9ae850 LOG inceptionv4 gstinceptionv4.c:221:gst_inceptionv4_postprocess_old:<net> Postprocess
0:00:06.223196388 16998 0x55a84d9ae850 LOG inceptionv4 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,947655)
0:00:06.223202015 16998 0x55a84d9ae850 LOG inceptionv4 gstinceptionv4.c:246:gst_inceptionv4_postprocess_new:<net> Postprocess Meta
0:00:06.223294500 16998 0x55a84d9ae850 LOG inceptionv4 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 18, enabled : True, bbox : { x : 0 y : 0 width : 299 height : 299 }, classes : [ { Id : 36 Class : 286 Label : Egyptian cat Probability : 0,947655 Classes : 1001 }, ], predictions : [ ] }
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_inceptionv4.tflite'
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.039483790 14972 0x55c45dd99ed0 LOG inceptionv4 gstinceptionv4.c:209:gst_inceptionv4_preprocess:<net> Preprocess
0:00:02.382000009 14972 0x55c45dd99ed0 LOG inceptionv4 gstinceptionv4.c:221:gst_inceptionv4_postprocess_old:<net> Postprocess
0:00:02.382024685 14972 0x55c45dd99ed0 LOG inceptionv4 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,956486)
0:00:02.382030318 14972 0x55c45dd99ed0 LOG inceptionv4 gstinceptionv4.c:246:gst_inceptionv4_postprocess_new:<net> Postprocess Meta
0:00:02.382154899 14972 0x55c45dd99ed0 LOG inceptionv4 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 5, enabled : True, bbox : { x : 0 y : 0 width : 299 height : 299 }, classes : [ { Id : 10 Class : 286 Label : Egyptian cat Probability : 0,956486 Classes : 1001 }, ], predictions : [ ] }
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_inceptionv4.tflite' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" style=2 font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output
MobileNetv2
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the ImageNet classes
- Pipeline
IMAGE_FILE='cat.jpg' LABELS='labels.txt' MODEL_LOCATION='graph_mobilenetv2.tflite'
GST_DEBUG=mobilenetv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.898115510 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstmobilenetv2.c:148:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:03.001354267 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstmobilenetv2.c:160:gst_mobilenetv2_postprocess_old:<net> Postprocess
0:00:03.001449439 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,761563)
0:00:03.001456716 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstmobilenetv2.c:185:gst_mobilenetv2_postprocess_new:<net> Postprocess Meta
0:00:03.001575055 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 28, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 56 Class : 286 Label : Egyptian cat Probability : 0,761563 Classes : 1001 }, ], predictions : [ ] }
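Note that these logs were captured under a locale that prints a comma as the decimal separator (e.g. `0,761563`), while other runs on this page print a period (`0.271051`). If you post-process the debug output, it is worth accepting both forms. A small sketch; the helper name `parse_probability` is ours, not part of GstInference:

```python
import re

def parse_probability(line):
    """Extract the probability from a 'Highest probability' log line,
    accepting either a comma or a period as the decimal separator.

    Returns None when the line carries no parenthesized probability.
    """
    m = re.search(r"\((\d+)[.,](\d+)\)", line)
    if m is None:
        return None
    # Normalize to a period so float() parses it regardless of locale.
    return float(f"{m.group(1)}.{m.group(2)}")
```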
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4' LABELS='labels.txt' MODEL_LOCATION='graph_mobilenetv2.tflite'
GST_DEBUG=mobilenetv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.228695071 17037 0x5556ecccf850 LOG mobilenetv2 gstmobilenetv2.c:148:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:02.323426275 17037 0x5556ecccf850 LOG mobilenetv2 gstmobilenetv2.c:160:gst_mobilenetv2_postprocess_old:<net> Postprocess
0:00:02.323468784 17037 0x5556ecccf850 LOG mobilenetv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,640671)
0:00:02.323477263 17037 0x5556ecccf850 LOG mobilenetv2 gstmobilenetv2.c:185:gst_mobilenetv2_postprocess_new:<net> Postprocess Meta
0:00:02.323611426 17037 0x5556ecccf850 LOG mobilenetv2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 21, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 42 Class : 286 Label : Egyptian cat Probability : 0,640671 Classes : 1001 }, ], predictions : [ ] }
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_mobilenetv2.tflite'
GST_DEBUG=mobilenetv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.898115510 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstmobilenetv2.c:148:gst_mobilenetv2_preprocess:<net> Preprocess
0:00:03.001354267 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstmobilenetv2.c:160:gst_mobilenetv2_postprocess_old:<net> Postprocess
0:00:03.001449439 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 286 : (0,761563)
0:00:03.001456716 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstmobilenetv2.c:185:gst_mobilenetv2_postprocess_new:<net> Postprocess Meta
0:00:03.001575055 15109 0x562a0b2e7ed0 LOG mobilenetv2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 28, enabled : True, bbox : { x : 0 y : 0 width : 224 height : 224 }, classes : [ { Id : 56 Class : 286 Label : Egyptian cat Probability : 0,761563 Classes : 1001 }, ], predictions : [ ] }
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_mobilenetv2.tflite' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" style=2 font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output
Resnet50v1
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the ImageNet classes
- Pipeline
IMAGE_FILE='cat.jpg' LABELS='labels.txt' MODEL_LOCATION='graph_resnetv1.tflite'
GST_DEBUG=resnet50v1:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
resnet50v1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:01.944768522 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.944803563 157 0xfccd90 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 284 : (0.271051)
0:00:01.947003178 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:02.111978575 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:02.112000558 157 0xfccd90 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 284 : (0.271051)
0:00:02.113091931 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:02.212289668 157 0xfccd90 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:02.212310188 157 0xfccd90 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 284 : (0.271051)
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the ImageNet classes
- Pipeline
VIDEO_FILE='cat.mp4' LABELS='labels.txt' MODEL_LOCATION='graph_resnetv1.tflite'
GST_DEBUG=resnet50v1:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
resnet50v1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:00.915688134 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:00.915709354 240 0x18cbc00 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0.537144)
0:00:00.915747394 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:01.018904132 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.018924929 240 0x18cbc00 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0.538948)
0:00:01.018976948 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:01.120286331 240 0x18cbc00 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.120306927 240 0x18cbc00 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 283 : (0.525331)
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_resnetv1.tflite'
GST_DEBUG=resnet50v1:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
resnet50v1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:01.842896607 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.842917966 294 0x14dd8a0 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 425 : (0.048243)
0:00:01.842955409 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:01.948003304 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:01.948024035 294 0x14dd8a0 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 611 : (0.065279)
0:00:01.948055304 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:145:gst_resnet50v1_preprocess:<net> Preprocess
0:00:02.052442770 294 0x14dd8a0 LOG resnet50v1 gstresnet50v1.c:157:gst_resnet50v1_postprocess:<net> Postprocess
0:00:02.052463202 294 0x14dd8a0 LOG resnet50v1 gstinferencedebug.c:73:gst_inference_print_highest_probability:<net> Highest probability is label 611 : (0.089816)
Visualization with classification overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_resnetv1.tflite' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
resnet50v1 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" style=2 font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
- Output
TinyYolov2
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the TinyYOLO classes
- Pipeline
IMAGE_FILE='cat.jpg' LABELS='labels.txt' MODEL_LOCATION='graph_tinyyolov2.tflite'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:17.446720448 19333 0x55892c0ae770 LOG tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:17.548641827 19333 0x55892c0ae770 LOG tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:17.548692764 19333 0x55892c0ae770 LOG tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:14, x:93,758068, y:71,626141, width:218,740955, height:334,471067, prob:10,713037]
0:00:17.548699121 19333 0x55892c0ae770 LOG tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:17.548705539 19333 0x55892c0ae770 LOG tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:17.548816856 19333 0x55892c0ae770 LOG tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 299, enabled : True, bbox : { x : 0 y : 0 width : 416 height : 416 }, classes : [ ], predictions : [ { id : 300, enabled : True, bbox : { x : 93 y : 71 width : 218 height : 334 }, classes : [ { Id : 280 Class : 14 Label : person Probability : 10,713037 Classes : 20 }, ], predictions : [ ] }, ] }
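The root bbox in the prediction meta above (`width : 416 height : 416`) suggests the detection boxes are expressed in the network's 416×416 input space, so to draw them on the original frame you scale each coordinate by the frame/model size ratio. A minimal sketch under that assumption; `scale_box` is an illustrative helper, not a GstInference API:

```python
def scale_box(box, model_size=(416, 416), frame_size=(1280, 720)):
    """Scale an (x, y, width, height) box from the network input
    resolution to the display frame resolution.

    Assumes the box is reported in model-input pixel coordinates,
    as the root bbox in the debug output suggests.
    """
    sx = frame_size[0] / model_size[0]
    sy = frame_size[1] / model_size[1]
    x, y, w, h = box
    return (x * sx, y * sy, w * sx, h * sy)
```

This is what overlay elements such as inferenceoverlay take care of internally; you only need it when consuming the metadata yourself.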
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the TinyYOLO classes
- Pipeline
VIDEO_FILE='cat.mp4' LABELS='labels.txt' MODEL_LOCATION='graph_tinyyolov2.tflite'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:03.946822351 19396 0x55fc3118d680 LOG tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:03.946899445 19396 0x55fc3118d680 LOG tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:14, x:62,124242, y:121,697849, width:215,944135, height:290,148073, prob:13,969749]
0:00:03.946905463 19396 0x55fc3118d680 LOG tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:03.946912573 19396 0x55fc3118d680 LOG tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:03.947079421 19396 0x55fc3118d680 LOG tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 58, enabled : True, bbox : { x : 0 y : 0 width : 416 height : 416 }, classes : [ ], predictions : [ { id : 59, enabled : True, bbox : { x : 62 y : 121 width : 215 height : 290 }, classes : [ { Id : 58 Class : 14 Label : person Probability : 13,969749 Classes : 20 }, ], predictions : [ ] }, ] }
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_tinyyolov2.tflite'
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:02.395698785 19446 0x555ab55d6770 LOG tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:02.515331764 19446 0x555ab55d6770 LOG tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:02.515377038 19446 0x555ab55d6770 LOG tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:14, x:97,778279, y:54,509112, width:229,643766, height:367,855935, prob:10,819336]
0:00:02.515401986 19446 0x555ab55d6770 LOG tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:02.515411728 19446 0x555ab55d6770 LOG tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:02.515541193 19446 0x555ab55d6770 LOG tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions: { id : 24, enabled : True, bbox : { x : 0 y : 0 width : 416 height : 416 }, classes : [ ], predictions : [ { id : 25, enabled : True, bbox : { x : 97 y : 54 width : 229 height : 367 }, classes : [ { Id : 18 Class : 14 Label : person Probability : 10,819336 Classes : 20 }, ], predictions : [ ] }, ] }
Visualization with detection overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_tinyyolov2.tflite' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! inferenceoverlay style=2 font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
- Output
TinyYolov3
Image file
- Get the graph used in this example from the RidgeRun Store
- You will need an image file showing one of the TinyYOLO classes
- Pipeline
IMAGE_FILE='cat.jpg' LABELS='labels.txt' MODEL_LOCATION='graph_tinyyolov3.tflite'
GST_DEBUG=tinyyolov3:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:06.401015400 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:06.817243785 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:06.817315935 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:06.817426814 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:07.236310555 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:07.236379100 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
0:00:07.236486242 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:07.659870194 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:07.659942388 12340 0x1317cf0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-55,170727, y:25,507316, width:396,182867, height:423,241143, prob:14,526398]
Video file
- Get the graph used in this example from the RidgeRun Store
- You will need a video file showing one of the TinyYOLO classes
- Pipeline
VIDEO_FILE='cat.mp4' LABELS='labels.txt' MODEL_LOCATION='graph_tinyyolov3.tflite'
GST_DEBUG=tinyyolov3:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:08.545063684 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:08.955522899 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:08.955600820 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-36,012765, y:-37,118160, width:426,351621, height:480,353663, prob:14,378592]
0:00:08.955824676 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:09.364908234 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:09.364970901 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-36,490694, y:-38,108817, width:427,474399, height:482,318385, prob:14,257683]
0:00:09.365090340 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:09.775848590 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:09.775932404 12504 0xce4400 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:7, x:-35,991940, y:-37,482425, width:426,533537, height:480,917142, prob:14,313076]
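Notice that these boxes have negative origins (e.g. `x:-36,012765`): the detector may report coordinates that extend past the frame edges. If you consume the boxes yourself, for cropping or custom drawing, clip them to the frame first. A minimal sketch; `clamp_box` is an illustrative helper, not part of GstInference:

```python
def clamp_box(x, y, w, h, frame_w, frame_h):
    """Clip a possibly out-of-range (x, y, w, h) box to the frame,
    returning integer pixel coordinates suitable for cropping."""
    x0 = max(0, min(int(x), frame_w))
    y0 = max(0, min(int(y), frame_h))
    x1 = max(0, min(int(x + w), frame_w))
    y1 = max(0, min(int(y + h), frame_h))
    return x0, y0, x1 - x0, y1 - y0
```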
Camera stream
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' LABELS='labels.txt' MODEL_LOCATION='graph_tinyyolov3.tflite'
GST_DEBUG=tinyyolov3:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)"
- Output
0:00:06.823064776 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:07.242114002 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:07.242183276 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:14, x:116,796387, y:-31,424289, width:240,876587, height:536,305261, prob:11,859128]
0:00:07.242293677 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:07.660324555 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:07.660388215 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:14, x:113,453324, y:-27,681194, width:248,010337, height:528,964842, prob:11,603928]
0:00:07.660503502 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:479:gst_tinyyolov3_preprocess:<net> Preprocess
0:00:08.079154860 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:501:gst_tinyyolov3_postprocess:<net> Postprocess
0:00:08.079230404 12678 0xec24a0 LOG tinyyolov3 gsttinyyolov3.c:384:print_top_predictions:<net> Box: [class:14, x:113,736444, y:-33,747251, width:246,987389, height:541,188374, prob:11,888664]
Visualization with detection overlay
- Get the graph used in this example from the RidgeRun Store
- You will need a V4L2-compatible camera
- Pipeline
CAMERA='/dev/video0' MODEL_LOCATION='graph_tinyyolov3.tflite' LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov3 name=net model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
net.src_bypass ! inferenceoverlay style=2 font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
- Output