INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
=== Inceptionv4 inference on camera stream using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
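Before setting <code>$CAMERA</code> in the V4L2 pipelines below, it can help to check which device nodes exist. The helper below is only a sketch (it globs for <code>video*</code> nodes, <code>/dev</code> by default); if v4l-utils is installed, <code>v4l2-ctl --list-devices</code> gives richer information.

```shell
#!/bin/sh
# List V4L2-style device nodes under a directory (default: /dev).
# Sketch only: assumes camera nodes enumerate as video0, video1, ...
list_cameras() {
  dir="${1:-/dev}"
  for dev in "$dir"/video*; do
    [ -e "$dev" ] && echo "$dev"
  done
  return 0
}

list_cameras
```

Use one of the printed nodes (e.g. <code>/dev/video0</code>) as the value of <code>CAMERA</code>.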
====NVIDIA Camera====
* Pipeline
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! queue ! net.sink_model \
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
===Inceptionv4 visualization with classification overlay TensorFlow===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM)' ! tee name=t \
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
* Output
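Overlay pipelines like the one above fail in unhelpful ways when <code>$LABELS</code> or the model file points nowhere, so a small pre-flight check before <code>gst-launch-1.0</code> can save a debugging round. This is a generic sketch, not part of GstInference:

```shell
#!/bin/sh
# Abort early with a clear message if a required file is missing.
require_file() {
  if [ ! -f "$1" ]; then
    echo "missing: $1" >&2
    return 1
  fi
  echo "ok: $1"
}

# Example with a throwaway file standing in for $MODEL_LOCATION / $LABELS:
demo=$(mktemp)
require_file "$demo"
```

In practice you would call <code>require_file "$MODEL_LOCATION"</code> and <code>require_file "$LABELS"</code> before launching the pipeline.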
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
0:00:07.662473455 30513 0x5accf0 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.662769998 30513 0x5accf0 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
</syntaxhighlight>
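The <code>Box:</code> debug lines above have a fixed field layout, so a captured log can be post-processed with standard tools. A parsing sketch (the <code>parse_box</code> helper is hypothetical, and assumes the <code>[class:N, ..., prob:P]</code> format shown above stays stable):

```shell
#!/bin/sh
# Pull the class id and probability out of a TinyYOLOv2 "Box:" log line.
parse_box() {
  line="$1"
  class=$(printf '%s\n' "$line" | sed -n 's/.*class:\([0-9]*\),.*/\1/p')
  prob=$(printf '%s\n' "$line" | sed -n 's/.*prob:\([0-9.]*\)\].*/\1/p')
  printf '%s %s\n' "$class" "$prob"
}

parse_box 'print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]'
# → 7 15.204609
```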
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
=== TinyYolov2 inference on camera stream using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
nvarguscamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! 'video/x-raw,format=BGRx' ! queue ! net.sink_model \
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
=== TinyYolov2 visualization with detection overlay TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 \
gst-launch-1.0 \
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
net.src_bypass ! videoconvert ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
* Output
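The debug log keeps every box <code>tinyyolov2</code> prints, including low-confidence ones. When the pipeline's stderr is saved to a file, an <code>awk</code> helper can drop detections below a probability threshold; this is a post-processing sketch over the log format shown earlier, not a GstInference feature:

```shell
#!/bin/sh
# Keep only "Box:" lines whose prob field is at least a given threshold.
filter_boxes() {
  awk -v min="$2" 'match($0, /prob:[0-9.]+/) {
    p = substr($0, RSTART + 5, RLENGTH - 5) + 0  # numeric value after "prob:"
    if (p >= min) print
  }' "$1"
}

# Demo on two synthetic log lines:
printf '%s\n%s\n' \
  'Box: [class:7, x:25.8, y:11.9, width:425.5, height:450.2, prob:15.2]' \
  'Box: [class:3, x:10.0, y:10.0, width:50.0, height:50.0, prob:0.4]' \
  > /tmp/boxes_demo.log
filter_boxes /tmp/boxes_demo.log 10
```

A real log can be captured with <code>2>/tmp/run.log</code> on the <code>gst-launch-1.0</code> command, since GStreamer debug output goes to stderr.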
=== FaceNet visualization with embedding overlay TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
* The LABELS and EMBEDDINGS files are in $PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings.
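The EMBEDDINGS file pairs each label with a stored face embedding. To sanity-check that every line holds the same number of components, a quick <code>awk</code> sketch (this assumes comma-separated values; adjust <code>-F</code> if your embeddings file uses a different separator):

```shell
#!/bin/sh
# Print how many comma-separated components each embedding line contains.
count_components() {
  awk -F',' '{ print NF }' "$1"
}

# Demo with a tiny stand-in embeddings file:
printf '0.1,0.2,0.3\n0.4,0.5,0.6\n' > /tmp/embeddings_demo.txt
count_components /tmp/embeddings_demo.txt
# → prints 3 twice, once per line
```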
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
LABELS='$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt'
EMBEDDINGS='$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM),width=(int)1280,height=(int)720' ! nvvidconv ! 'video/x-raw,format=BGRx,width=(int)1280,height=(int)720' ! videoconvert ! tee name=t \
LABELS='$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt'
EMBEDDINGS='$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
0:02:22.678740356 30355 0x5accf0 LOG inceptionv4 gstinceptionv4.c:232:gst_inceptionv4_postprocess:<net> Postprocess
0:02:22.678892356 30355 0x5accf0 LOG inceptionv4 gstinceptionv4.c:253:gst_inceptionv4_postprocess:<net> Highest probability is label 282 : (0.627314)
</syntaxhighlight>
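The debug line only reports the numeric index (label 282 above). Assuming <code>labels.txt</code> stores one class name per line in index order, the name can be recovered with <code>sed</code>; this is a sketch, since the indexing convention (0- or 1-based) of your labels file may differ:

```shell
#!/bin/sh
# Print the label stored at a given (1-based) line number of a labels file.
label_name() {
  sed -n "${1}p" "$2"
}

# Demo with a three-line stand-in labels file:
printf 'tench\ngoldfish\ntiger cat\n' > /tmp/labels_demo.txt
label_name 3 /tmp/labels_demo.txt
# → tiger cat
```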
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
=== Inceptionv4 inference on camera stream using TensorFlow-Lite ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow-lite this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
====NVIDIA Camera====
* Pipeline
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! queue ! net.sink_model \
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
===Inceptionv4 visualization with classification overlay TensorFlow-Lite===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow-lite this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM)' ! tee name=t \
MODEL_LOCATION='graph_inceptionv4.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
net.src_bypass ! videoconvert ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
* Output
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue ! net.sink_model \
0:00:07.662473455 30513 0x5accf0 LOG tinyyolov2 gsttinyyolov2.c:501:gst_tinyyolov2_postprocess:<net> Postprocess
0:00:07.662769998 30513 0x5accf0 LOG tinyyolov2 gsttinyyolov2.c:384:print_top_predictions:<net> Box: [class:7, x:25.820670, y:11.977936, width:425.495203, height:450.224357, prob:15.204609]
</syntaxhighlight>
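To get a rough count of detections from a run, the pipeline's debug output can be saved to a file and grepped for the <code>Box:</code> marker. A counting sketch on synthetic log lines:

```shell
#!/bin/sh
# Count detection ("Box:") lines in a saved GST_DEBUG log.
log=/tmp/tinyyolov2_demo.log
printf '%s\n%s\n%s\n' \
  'LOG tinyyolov2 gst_tinyyolov2_postprocess:<net> Postprocess' \
  'LOG tinyyolov2 print_top_predictions:<net> Box: [class:7, prob:15.2]' \
  'LOG tinyyolov2 print_top_predictions:<net> Box: [class:3, prob:11.0]' \
  > "$log"
grep -c 'Box:' "$log"
# → 2
```

A real log is captured by appending <code>2>/tmp/run.log</code> to the <code>GST_DEBUG=tinyyolov2:6 gst-launch-1.0 ...</code> command, since GStreamer debug messages go to stderr.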
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue ! net.sink_model \
=== TinyYolov2 inference on camera stream using TensorFlow-Lite ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow-lite this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
nvarguscamerasrc sensor-id=$SENSOR_ID ! nvvidconv ! 'video/x-raw,format=BGRx' ! queue ! net.sink_model \
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
=== TinyYolov2 visualization with detection overlay TensorFlow-Lite ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow-lite this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
====NVIDIA Camera====
* Pipeline
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
GST_DEBUG=tinyyolov2:6 \
gst-launch-1.0 \
MODEL_LOCATION='graph_tinyyolov2.tflite'
LABELS='labels.txt'
</syntaxhighlight>
<syntaxhighlight lang=bash>
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \