Coral from Google/GstInference/Example Pipelines
=Introduction=
The pipelines in this wiki are designed to test the GstInference capabilities in a simple way: just copy and paste the code inside the colored boxes into your terminal. The blue pipelines are meant to be executed inside the folder that contains the inference model data. The purple pipelines display the received stream, so they can be executed from any location.
The model for these pipelines can be downloaded from the Google Coral test data repository:
* [https://github.com/google-coral/test_data/raw/master/mobilenet_v2_1.0_224_quant_edgetpu.tflite MobilenetV2 (model)]
For the labels file you may use the one provided for the TensorFlow backend in the RidgeRun store:
* [https://shop.ridgerun.com/products/mobilenetv2-for-tensorflow MobilenetV2 (labels)]
Once you have downloaded them, unzip the labels archive and test your preferred pipeline from the list below.
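As a sketch of this setup step, the model can be fetched from the shell; the labels archive from the RidgeRun store is downloaded manually from the shop page, and its exact file name may differ:

```shell
# Fetch the Edge TPU model from the Google Coral test data repository.
MODEL_URL='https://github.com/google-coral/test_data/raw/master/mobilenet_v2_1.0_224_quant_edgetpu.tflite'
MODEL_FILE="${MODEL_URL##*/}"   # filename part of the URL
wget -nc "$MODEL_URL" 2>/dev/null || echo "download failed, check your connection"
# Unzip the labels archive (manual download) next to the model so that
# imagenet_labels.txt sits in the same folder as the .tflite file:
#   unzip <downloaded-labels-archive>.zip
echo "expecting model file: $MODEL_FILE"
```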
=Dev Board=
== Classification: MobilenetV2 ==
=== Camera Source ===
For these pipelines you can modify the CAMERA variable according to your device.
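To find the right value for CAMERA, you can enumerate the V4L2 device nodes; this is a minimal sketch, and with v4l-utils installed `v4l2-ctl --list-devices` additionally prints the device names:

```shell
# Print every V4L2 device node present on the system.
list_cameras() {
    for dev in /dev/video*; do
        # The glob stays literal when no device exists, so test for presence.
        [ -e "$dev" ] && echo "$dev"
    done
    return 0
}
list_cameras
```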
'''Display Output'''
<pre style="background:#D6E4F1">
CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
waylandsink fullscreen=false sync=false
</pre>
'''Recording Output'''
You can modify the OUTPUT_FILE variable to the name you want for your recording.
<pre style="background:#D6E4F1">
CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e
</pre>
'''Streaming Output'''
Remember to modify the HOST and PORT variables according to your own needs.
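HOST is the address of the machine that will receive the stream; on the receiving machine you can typically query it as follows (a sketch assuming a Linux system where `hostname -I` is available):

```shell
# Addresses of the receiving machine; pick the one on your LAN for HOST.
hostname -I 2>/dev/null || echo "hostname -I not available; try: ip -4 addr"
PORT='5000'   # any free UDP port above 1024 works
```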
* Processing side
<pre style="background:#D6E4F1">
CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.17'
PORT='5000'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! avenc_mpeg2video ! mpegtsmux ! \
udpsink host=$HOST port=$PORT sync=false
</pre>
* Client side
<pre style="background:#e4d2fa">
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e
</pre>
=== File Source ===
For these pipelines you can modify the VIDEO_FILE variable to provide an MP4 video file containing any of the classes listed in ''imagenet_labels.txt'' from the downloaded model.
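The pipelines below demux with qtdemux and decode with avdec_h264, so the file must be an MP4 container with H.264 video; gst-discoverer-1.0 (shipped with the gst-plugins-base tools) can confirm that before you run them:

```shell
# Inspect the container/codec of the input file (assumes animals.mp4 exists).
VIDEO_FILE='animals.mp4'
if [ -f "$VIDEO_FILE" ]; then
    gst-discoverer-1.0 "$VIDEO_FILE" | grep -i video
else
    echo "missing input file: $VIDEO_FILE"
fi
```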
'''Display Output'''
<pre style="background:#D6E4F1">
VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
waylandsink fullscreen=false sync=false
</pre>
'''Recording Output'''
You can modify the OUTPUT_FILE variable to the name you want for your recording.
<pre style="background:#D6E4F1">
VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e
</pre>
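To verify a finished recording, the resulting MPEG-TS file can be played back with a short pipeline (a sketch; mpeg2dec comes from gst-plugins-ugly and tsdemux from gst-plugins-bad):

```shell
# Play back the file produced by the recording pipeline, if it exists.
OUTPUT_FILE='recording.mpeg'
if [ -f "$OUTPUT_FILE" ]; then
    gst-launch-1.0 filesrc location="$OUTPUT_FILE" ! tsdemux ! mpeg2dec ! videoconvert ! autovideosink
else
    echo "no recording found at $OUTPUT_FILE; run the recording pipeline first"
fi
```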
'''Streaming Output'''
Remember to modify the HOST and PORT variables according to your own needs.
* Processing side
<pre style="background:#D6E4F1">
VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.17'
PORT='5000'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! avenc_mpeg2video ! mpegtsmux ! \
udpsink host=$HOST port=$PORT sync=false
</pre>
* Client side
<pre style="background:#e4d2fa">
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e
</pre>
=== RTSP Source ===
For these pipelines you may modify the RTSP_URI variable according to your needs.
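Before wiring in inference, it is worth confirming the URI is well-formed and the stream is reachable; a minimal scheme check plus a playbin smoke test (the latter run manually, since it opens a video window):

```shell
# Validate the RTSP URI scheme before using it in the pipelines below.
RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
case "$RTSP_URI" in
    rtsp://*|rtspt://*) echo "URI scheme looks fine" ;;
    *) echo "unexpected scheme: $RTSP_URI" ;;
esac
# Manual smoke test of the stream itself (opens a video window):
#   gst-launch-1.0 playbin uri=$RTSP_URI
```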
'''Display Output'''
<pre style="background:#D6E4F1">
RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
waylandsink fullscreen=false sync=false
</pre>
'''Recording Output'''
You can modify the OUTPUT_FILE variable to the name you want for your recording.
<pre style="background:#D6E4F1">
RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e
</pre>
'''Streaming Output'''
Remember to modify the HOST and PORT variables according to your own needs.
* Processing side
<pre style="background:#D6E4F1">
RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.17'
PORT='5000'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! avenc_mpeg2video ! mpegtsmux ! \
udpsink host=$HOST port=$PORT sync=false
</pre>
* Client side
<pre style="background:#e4d2fa">
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e
</pre>
== Detection: MobilenetSSD v2 ==
= USB Accelerator =
Revision as of 19:06, 18 February 2021