<noinclude>
{{GstInference/Head|previous=Example pipelines|next=Example pipelines/NANO|title=GstInference GStreamer pipelines on PC}}
</noinclude>
<!-- If you want a custom title for the page, un-comment and edit this line:
{{DISPLAYTITLE:GstInference - <descriptive page name>|noerror}}
-->

= Sample pipelines =
The following section contains a tool for generating simple GStreamer pipelines with one model of a selected architecture, using our hierarchical inference metadata. If you are using an older version, you can check the legacy pipelines section. Please make sure to check the documentation to understand the property usage for each element.
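Before trying the pipelines, you can confirm that the GstInference elements are visible to your GStreamer installation (this assumes the plugin is already installed):

<syntaxhighlight lang=bash>
# Each command prints the element's pads and properties;
# "No such element or plugin" means GST_PLUGIN_PATH is not set up correctly.
gst-inspect-1.0 inceptionv1
gst-inspect-1.0 inferenceoverlay
</syntaxhighlight>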
|
|
== Tensorflow ==
The required elements are (a skeleton showing where each one fits follows this list):
* Backend
* Model
* Model location
* Labels
* Source
* Sink
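Each of these maps directly onto the gst-launch-1.0 syntax used throughout this page. Below is a minimal sketch reusing the InceptionV1 file names from the examples that follow; with the TensorFlow backend you must also name the input and output layers:

<syntaxhighlight lang=bash>
# Source: v4l2src here (multifilesrc or filesrc work the same way)
# Model: the inceptionv1 element itself, fed through its sink_model pad
# Backend / Model location / Labels: properties on the model element
# Sink: classification results go to the debug log, so none is attached here
gst-launch-1.0 \
v4l2src device=/dev/video0 ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net backend=tensorflow model-location=graph_inceptionv1_tensorflow.pb \
labels="$(cat imagenet_labels.txt)" \
backend::input-layer=input backend::output-layer=InceptionV1/Logits/Predictions/Reshape_1
</syntaxhighlight>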
|
|
=== Inceptionv1 ===
The optional elements include (typical placements are sketched below):
* inferencefilter
* inferencecrop
* inferenceoverlay
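All three act on the inference results downstream of the model element: inferencefilter on the prediction metadata (net.src_model), inferencecrop and inferenceoverlay on the bypass video (net.src_bypass). The fragments below only sketch typical placements with illustrative property values; they are not complete pipelines:

<syntaxhighlight lang=bash>
# Keep only predictions of a given class id in the metadata
net.src_model ! inferencefilter filter-class=8 ! fakesink
# Crop the bypass video to the detected bounding box
# (its main property is aspect-ratio; see the element documentation)
net.src_bypass ! inferencecrop ! videoconvert ! xvimagesink
# Draw bounding boxes and labels over the bypass video
net.src_bypass ! inferenceoverlay thickness=2 ! videoconvert ! xvimagesink
</syntaxhighlight>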
|
|
==== Image file ====
[[File:Inference example.png|1000px|thumb|center|Detection with new metadata]]
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
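Note the quoting on the labels property: the labels file contains one entry per line, and quoting the command substitution passes the whole file as a single property value instead of letting the shell split it into separate gst-launch tokens:

<syntaxhighlight lang=bash>
LABELS='imagenet_labels.txt'
# The quotes preserve the newlines inside a single shell word
value="$(cat $LABELS)"
echo "$value" | wc -l   # number of labels in the file
</syntaxhighlight>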
* Output
<syntaxhighlight lang=bash>
0:00:00.626529976 6700 0x55a306b258a0 LOG inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:00.643145025 6700 0x55a306b258a0 LOG inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:00.643180120 6700 0x55a306b258a0 LOG inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 3804 : (4.191162)
0:00:00.643186095 6700 0x55a306b258a0 LOG inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:00.643211153 6700 0x55a306b258a0 LOG inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 7,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 14
      Class : 3804
      Label : (null)
      Probability : 4.191162
      Classes : 4004
    },
  ],
  predictions : [
  ]
}
</syntaxhighlight>
|
==== Video file ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a video file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:00.881389256 6700 0x55a306b258a0 LOG inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:00.898481750 6700 0x55a306b258a0 LOG inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:00.898515118 6700 0x55a306b258a0 LOG inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 1016 : (4.182041)
0:00:00.898521200 6700 0x55a306b258a0 LOG inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:00.898546079 6700 0x55a306b258a0 LOG inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 22,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 44
      Class : 1016
      Label : (null)
      Probability : 4.182041
      Classes : 4004
    },
  ],
  predictions : [
  ]
}
</syntaxhighlight>
|
==== Camera stream ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:03.858432794 6899 0x558a68bf0e80 LOG inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:03.875012119 6899 0x558a68bf0e80 LOG inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:03.875053519 6899 0x558a68bf0e80 LOG inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 3022 : (9897291000005358165649701398904832.000000)
0:00:03.875061545 6899 0x558a68bf0e80 LOG inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:03.875089371 6899 0x558a68bf0e80 LOG inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 93,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 186
      Class : 3022
      Label : (null)
      Probability : 9897291000005358165649701398904832.000000
      Classes : 4004
    },
  ],
  predictions : [
  ]
}
</syntaxhighlight>
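If you are unsure which /dev/video node belongs to your camera, v4l2-ctl from the v4l-utils package (a separate tool, assumed installed) can enumerate the devices and their capabilities:

<syntaxhighlight lang=bash>
# List V4L2 devices and their /dev/video nodes
v4l2-ctl --list-devices
# Show the formats and resolutions a given device supports
v4l2-ctl --device=/dev/video0 --list-formats-ext
</syntaxhighlight>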
|
==== Visualization with inference overlay ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
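inferenceoverlay also exposes drawing properties: thickness, fontscale, and style (0 classic, 1 dotted, 2 dashed). The values below are purely illustrative:

<syntaxhighlight lang=bash>
net.src_bypass ! inferenceoverlay thickness=3 fontscale=2 style=1 ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>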
|
=== Inceptionv2 ===
|
==== Image file ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.167111306 12853 0x55bc0eeb9770 LOG inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:01.190633209 12853 0x55bc0eeb9770 LOG inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:01.190667056 12853 0x55bc0eeb9770 LOG inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 2058 : (33799702613643740784668592694586507264.000000)
0:00:01.190673102 12853 0x55bc0eeb9770 LOG inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:01.190699590 12853 0x55bc0eeb9770 LOG inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 23,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 46
      Class : 2058
      Label : (null)
      Probability : 33799702613643740784668592694586507264.000000
      Classes : 4004
    },
  ],
  predictions : [
  ]
}
</syntaxhighlight>
|
==== Video file ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a video file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.167111306 12853 0x55bc0eeb9770 LOG inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:01.190633209 12853 0x55bc0eeb9770 LOG inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:01.190667056 12853 0x55bc0eeb9770 LOG inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 2058 : (33799702613643740784668592694586507264.000000)
0:00:01.190673102 12853 0x55bc0eeb9770 LOG inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:01.190699590 12853 0x55bc0eeb9770 LOG inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 23,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 46
      Class : 2058
      Label : (null)
      Probability : 33799702613643740784668592694586507264.000000
      Classes : 4004
    },
  ],
  predictions : [
  ]
}
</syntaxhighlight>
|
==== Camera stream ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.647715258 12963 0x55be7ee48a80 LOG inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:01.673402231 12963 0x55be7ee48a80 LOG inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:01.673436695 12963 0x55be7ee48a80 LOG inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 3364 : (3.972995)
0:00:01.673445162 12963 0x55be7ee48a80 LOG inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:01.673476625 12963 0x55be7ee48a80 LOG inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 26,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 52
      Class : 3364
      Label : (null)
      Probability : 3.972995
      Classes : 4004
    },
  ],
  predictions : [
  ]
}
</syntaxhighlight>
|
==== Visualization with inference overlay ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
|
=== Inceptionv3 ===
|
==== Image file ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.696274261 13153 0x55c06386e8a0 LOG inceptionv3 gstinceptionv3.c:149:gst_inceptionv3_preprocess:<net> Preprocess
0:00:01.751348188 13153 0x55c06386e8a0 LOG inceptionv3 gstinceptionv3.c:161:gst_inceptionv3_postprocess_old:<net> Postprocess
0:00:01.751379427 13153 0x55c06386e8a0 LOG inceptionv3 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 1353 : (9.000000)
0:00:01.751385353 13153 0x55c06386e8a0 LOG inceptionv3 gstinceptionv3.c:186:gst_inceptionv3_postprocess_new:<net> Postprocess Meta
0:00:01.751511065 13153 0x55c06386e8a0 LOG inceptionv3 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 16,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 299
    height : 299
  },
  classes : [
    {
      Id : 32
      Class : 1353
      Label : (null)
      Probability : 9.000000
      Classes : 4004
    },
  ],
  predictions : [
  ]
}
</syntaxhighlight>
|
==== Video file ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a video file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.643494169 13153 0x55c06386e8a0 LOG inceptionv3 gstinceptionv3.c:149:gst_inceptionv3_preprocess:<net> Preprocess
0:00:01.696036720 13153 0x55c06386e8a0 LOG inceptionv3 gstinceptionv3.c:161:gst_inceptionv3_postprocess_old:<net> Postprocess
0:00:01.696072019 13153 0x55c06386e8a0 LOG inceptionv3 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 2543 : (5.398693)
0:00:01.696079025 13153 0x55c06386e8a0 LOG inceptionv3 gstinceptionv3.c:186:gst_inceptionv3_postprocess_new:<net> Postprocess Meta
0:00:01.696208280 13153 0x55c06386e8a0 LOG inceptionv3 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 15,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 299
    height : 299
  },
  classes : [
    {
      Id : 30
      Class : 2543
      Label : (null)
      Probability : 5.398693
      Classes : 4004
    },
  ],
  predictions : [
  ]
}
</syntaxhighlight>
|
==== Camera stream ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:14.614862363 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:15.737842669 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:15.737912053 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:16.855603761 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:16.855673578 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:17.980784789 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:17.980849612 27227 0x19cd4a0 LOG inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,077824)
</syntaxhighlight>
|
==== Visualization with inference overlay ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tensorflow labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
|
=== TinyYolov2 ===
|
==== Image file ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need an image file from one of the TinyYOLO classes
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=/path/to/image
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:03.050336570 8194 0x55b131f7aad0 LOG tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:03.097045162 8194 0x55b131f7aad0 LOG tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:03.097080665 8194 0x55b131f7aad0 LOG tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:7, x:87.942292, y:102.912900, width:244.945642, height:285.130143, prob:16.271288]
0:00:03.097087457 8194 0x55b131f7aad0 LOG tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:03.097095173 8194 0x55b131f7aad0 LOG tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:03.097117947 8194 0x55b131f7aad0 LOG tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 346,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
  ],
  predictions : [
    {
      id : 347,
      enabled : True,
      bbox : {
        x : 87
        y : 102
        width : 244
        height : 285
      },
      classes : [
        {
          Id : 258
          Class : 7
          Label : cat
          Probability : 16.271288
          Classes : 20
        },
      ],
      predictions : [
      ]
    },
  ]
}
</syntaxhighlight>
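The Classes : 20 field reflects the number of entries in the TinyYOLOv2 labels file (the 20 PASCAL VOC classes, which include cat and person). A quick sanity check, assuming one label per line:

<syntaxhighlight lang=bash>
# Should print 20 for the stock TinyYOLOv2 label file
wc -l < labels.txt
</syntaxhighlight>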
|
==== Video file ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a video file from one of the TinyYOLO classes
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:02.992422192 8194 0x55b131f7aad0 LOG tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:03.048734915 8194 0x55b131f7aad0 LOG tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:03.048770315 8194 0x55b131f7aad0 LOG tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:7, x:87.942292, y:102.912900, width:244.945642, height:285.130143, prob:16.271288]
0:00:03.048776786 8194 0x55b131f7aad0 LOG tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:03.048784401 8194 0x55b131f7aad0 LOG tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:03.048805819 8194 0x55b131f7aad0 LOG tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 338,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
  ],
  predictions : [
    {
      id : 339,
      enabled : True,
      bbox : {
        x : 87
        y : 102
        width : 244
        height : 285
      },
      classes : [
        {
          Id : 252
          Class : 7
          Label : cat
          Probability : 16.271288
          Classes : 20
        },
      ],
      predictions : [
      ]
    },
  ]
}
</syntaxhighlight>
|
==== Camera stream ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw" ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:02.493931842 8814 0x557dfec450f0 LOG tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:02.541405794 8814 0x557dfec450f0 LOG tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:02.541440570 8814 0x557dfec450f0 LOG tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:14, x:82.788036, y:126.779761, width:250.107193, height:300.441625, prob:12.457702]
0:00:02.541447102 8814 0x557dfec450f0 LOG tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:02.541454350 8814 0x557dfec450f0 LOG tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:02.541476722 8814 0x557dfec450f0 LOG tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 177,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
  ],
  predictions : [
    {
      id : 178,
      enabled : True,
      bbox : {
        x : 82
        y : 126
        width : 250
        height : 300
      },
      classes : [
        {
          Id : 101
          Class : 14
          Label : person
          Probability : 12.457702
          Classes : 20
        },
      ],
      predictions : [
      ]
    },
  ]
}
</syntaxhighlight>
==== Visualization with inference overlay ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw" ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
|
==== Using inference filter ====
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=2,*inferencedebug*:6 gst-launch-1.0 \
v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! inferencefilter filter-class=8 ! inferencedebug ! fakesink
</syntaxhighlight>
|
* Output
<syntaxhighlight lang=bash>
0:00:03.255231109 11277 0x55f5ce5cfde0 DEBUG inferencedebug gstinferencedebug.c:131:gst_inference_debug_transform_ip:<inferencedebug0> transform_ip
0:00:03.255268289 11277 0x55f5ce5cfde0 DEBUG inferencedebug gstinferencedebug.c:120:gst_inference_debug_print_predictions:<inferencedebug0> Prediction Tree:
{
  id : 169,
  enabled : False,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
  ],
  predictions : [
    {
      id : 170,
      enabled : False,
      bbox : {
        x : 101
        y : 96
        width : 274
        height : 346
      },
      classes : [
        {
          Id : 81
          Class : 14
          Label : person
          Probability : 12.842868
          Classes : 20
        },
      ],
      predictions : [
      ]
    },
  ]
}
</syntaxhighlight>
Note that the predictions are still present in the metadata; they are simply marked enabled : False because the detected class (14, person) does not match the requested filter-class=8, so downstream elements will ignore them.

= Advanced pipelines =

<noinclude>
{{GstInference/Foot|Example pipelines|Example pipelines/NANO}}
</noinclude>