GstInference/Example pipelines with hierarchical metadata/PC

<noinclude>
{{GstInference/Head|previous=Example pipelines|next=Example pipelines/NANO|title=GstInference GStreamer pipelines on PC}}
</noinclude>
<!-- If you want a custom title for the page, un-comment and edit this line:
{{DISPLAYTITLE:GstInference - <descriptive page name>|noerror}}
-->
= Sample pipelines =
The following section contains simple GStreamer pipelines that run one model of a selected architecture using our hierarchical inference metadata. If you are using an older version, you can check the legacy pipelines section. Please make sure to check the documentation to understand the property usage for each element.


== TensorFlow ==
The required elements are (a minimal skeleton combining them is sketched after this list):
* Backend
* Model
* Model location
* Labels
* Source
* Sink
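
As a reference, the following is an illustrative skeleton only, not a pipeline taken from the examples: every angle-bracket placeholder is an assumption that must be replaced with the per-architecture values given in the sections below, and fakesink stands in for whichever sink you choose.
<syntaxhighlight lang=bash>
# Illustrative skeleton only; replace each <placeholder> with the values from
# the per-architecture sections below (e.g. inceptionv1, tinyyolov2).
gst-launch-1.0 \
<source> ! videoconvert ! videoscale ! queue ! net.sink_model \
<model> name=net backend=tensorflow model-location=<path-to-graph.pb> labels="$(cat <labels-file>)" \
net.src_model ! fakesink
</syntaxhighlight>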


=== Inceptionv1 ===
The optional elements include (see the placement sketch after this list):
* inferencefilter
* inferencecrop
* inferenceoverlay
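
The sketch below shows where these optional elements typically sit, following the pattern of the examples on this page: inferencefilter and inferencecrop operate on the metadata-carrying model branch, while inferenceoverlay draws on the bypass branch. The property values are illustrative only; check each element's documentation for the complete list of properties.
<syntaxhighlight lang=bash>
# Illustrative placement only; <source> and <model> as in the skeleton above,
# and the property values are examples rather than recommendations.
gst-launch-1.0 \
<source> ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
<model> name=net backend=tensorflow model-location=<path-to-graph.pb> labels="$(cat <labels-file>)" \
net.src_model ! inferencefilter filter-class=8 ! inferencecrop aspect-ratio=1/1 ! fakesink \
net.src_bypass ! queue ! inferenceoverlay ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>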


==== Image file ====
[[File:Inference example.png|1000px|thumb|center|Detection with new metadata]]
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:00.626529976  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:00.643145025  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:00.643180120  6700 0x55a306b258a0 LOG              inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 3804 : (4.191162)
0:00:00.643186095  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:00.643211153  6700 0x55a306b258a0 LOG              inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 7,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 14
      Class : 3804
      Label : (null)
      Probability : 4.191162
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}
</syntaxhighlight>


==== Video file ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a video file showing one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:00.881389256  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:00.898481750  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:00.898515118  6700 0x55a306b258a0 LOG              inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 1016 : (4.182041)
0:00:00.898521200  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:00.898546079  6700 0x55a306b258a0 LOG              inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
   id : 22,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 44
      Class : 1016
      Label : (null)
      Probability : 4.182041
      Classes : 4004
    },
  ],
  predictions : [
   
  ]
}
</syntaxhighlight>


==== Camera stream ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:03.858432794  6899 0x558a68bf0e80 LOG              inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:03.875012119  6899 0x558a68bf0e80 LOG              inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:03.875053519  6899 0x558a68bf0e80 LOG              inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 3022 : (9897291000005358165649701398904832.000000)
0:00:03.875061545  6899 0x558a68bf0e80 LOG              inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:03.875089371  6899 0x558a68bf0e80 LOG              inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
   id : 93,
  enabled : True,
   bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
   },
  classes : [
    {
      Id : 186
      Class : 3022
      Label : (null)
      Probability : 9897291000005358165649701398904832.000000
      Classes : 4004
    },
  ],
  predictions : [
   
  ]
}
</syntaxhighlight>
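
If the camera pipeline fails to negotiate, it can help to first confirm the device node and the formats it offers. One possible check, assuming the v4l-utils package is installed, is:
<syntaxhighlight lang=bash>
# List the detected V4L2 capture devices
v4l2-ctl --list-devices
# Show the pixel formats and resolutions /dev/video0 can produce
v4l2-ctl -d /dev/video0 --list-formats-ext
</syntaxhighlight>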


==== Visualization with inference overlay ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
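
inferenceoverlay also exposes drawing properties: thickness, fontscale and style (0 = classic, 1 = dotted, 2 = dashed). The variant below reuses the variables defined above; the chosen values are illustrative only.
<syntaxhighlight lang=bash>
# Same pipeline with illustrative overlay settings
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv1 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay thickness=2 fontscale=1 style=1 ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>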


=== Inceptionv2 ===


==== Image file ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.167111306 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:01.190633209 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:01.190667056 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 2058 : (33799702613643740784668592694586507264.000000)
0:00:01.190673102 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:01.190699590 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 23,
  enabled : True,
  bbox : {
     x : 0
     y : 0
    width : 224
    height : 224
   },
  classes : [
    {
      Id : 46
      Class : 2058
      Label : (null)
      Probability : 33799702613643740784668592694586507264.000000
      Classes : 4004
    },
  ],
  predictions : [
   
  ]
}


</syntaxhighlight>


==== Video file ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a video file showing one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.167111306 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:01.190633209 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:01.190667056 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 2058 : (33799702613643740784668592694586507264.000000)
0:00:01.190673102 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:01.190699590 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 23,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 46
      Class : 2058
      Label : (null)
      Probability : 33799702613643740784668592694586507264.000000
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}
</syntaxhighlight>


==== Camera stream ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.647715258 12963 0x55be7ee48a80 LOG              inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:01.673402231 12963 0x55be7ee48a80 LOG              inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:01.673436695 12963 0x55be7ee48a80 LOG              inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 3364 : (3.972995)
0:00:01.673445162 12963 0x55be7ee48a80 LOG              inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:01.673476625 12963 0x55be7ee48a80 LOG              inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 26,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 52
      Class : 3364
      Label : (null)
      Probability : 3.972995
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}
</syntaxhighlight>
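
If no camera is available, the same model can be exercised with GStreamer's videotestsrc element. The variant below is an illustrative sketch that reuses the variables defined above:
<syntaxhighlight lang=bash>
# Illustrative variant that needs no camera hardware
GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
videotestsrc is-live=true ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>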


==== Visualization with inference overlay ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>


=== Inceptionv3 ===


==== Image file ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:01.696274261 13153 0x55c06386e8a0 LOG              inceptionv3 gstinceptionv3.c:149:gst_inceptionv3_preprocess:<net> Preprocess
0:00:01.751348188 13153 0x55c06386e8a0 LOG              inceptionv3 gstinceptionv3.c:161:gst_inceptionv3_postprocess_old:<net> Postprocess
0:00:01.751379427 13153 0x55c06386e8a0 LOG              inceptionv3 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 1353 : (9.000000)
0:00:01.751385353 13153 0x55c06386e8a0 LOG              inceptionv3 gstinceptionv3.c:186:gst_inceptionv3_postprocess_new:<net> Postprocess Meta
0:00:01.751511065 13153 0x55c06386e8a0 LOG              inceptionv3 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 16,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 299
    height : 299
  },
  classes : [
    {
      Id : 32
      Class : 1353
      Label : (null)
      Probability : 9.000000
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}


</syntaxhighlight>

==== Video file ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a video file showing one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
     case "tflite" :
* Output
      search_option("tinyyolov3","TinyYolov3","model","add");
<syntaxhighlight lang=bash>
       search_option("resnet50v1","Resnet50V1","model","add");
0:00:01.643494169 13153 0x55c06386e8a0 LOG              inceptionv3 gstinceptionv3.c:149:gst_inceptionv3_preprocess:<net> Preprocess
       disable_element("inputlayer");
0:00:01.696036720 13153 0x55c06386e8a0 LOG              inceptionv3 gstinceptionv3.c:161:gst_inceptionv3_postprocess_old:<net> Postprocess
       disable_element("outputlayer");
0:00:01.696072019 13153 0x55c06386e8a0 LOG              inceptionv3 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 2543 : (5.398693)
       break;
0:00:01.696079025 13153 0x55c06386e8a0 LOG              inceptionv3 gstinceptionv3.c:186:gst_inceptionv3_postprocess_new:<net> Postprocess Meta
    case "ncsdk":
0:00:01.696208280 13153 0x55c06386e8a0 LOG              inceptionv3 gstinferencedebug.c:111:gst_inference_print_predictions:
       search_option("tinyyolov3","TinyYolov3","model","remove");
{
  id : 15,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 299
    height : 299
  },
  classes : [
    {
      Id : 30
      Class : 2543
      Label : (null)
      Probability : 5.398693
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}


</syntaxhighlight>


==== Camera stream ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:14.614862363 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:15.737842669 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:15.737912053 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:16.855603761 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:16.855673578 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:17.980784789 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:17.980849612 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,077824)
</syntaxhighlight>


==== Visualization with inference overlay ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tensorflow labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>


=== TinyYolov2 ===


==== Image file ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need an image file showing one of the TinyYOLO classes
* Pipeline
<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
    case " multifilesrc":
* Output
      src = src + " location=" + document.getElementById("source_location").value + " start-index=0 stop-index=0 loop=true ! jpegparse ! nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvvidconv ! queue !";
<syntaxhighlight lang=bash>
    break;
0:00:03.050336570  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
    case " filesrc":
0:00:03.097045162  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
      src = src + " location=" + document.getElementById("source_location").value + " qtdemux name=demux ! h264parse ! omxh264dec ! nvvidconv ! queue !";
0:00:03.097080665  8194 0x55b131f7aad0 LOG              tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:7, x:87.942292, y:102.912900, width:244.945642, height:285.130143, prob:16.271288]
    break;
0:00:03.097087457  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
    case " v4l2src":
0:00:03.097095173  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
      src = src + " device=" + document.getElementById("source_location").value + " ! nvvidconv ! queue !";
0:00:03.097117947  8194 0x55b131f7aad0 LOG              tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
    break;
{
  id : 346,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
    
  ],
  predictions : [
    {
      id : 347,
      enabled : True,
      bbox : {
        x : 87
        y : 102
        width : 244
        height : 285
      },
      classes : [
        {
          Id : 258
          Class : 7
          Label : cat
          Probability : 16.271288
          Classes : 20
        },
      ],
      predictions : [
        
      ]
    },
  ]
}
</syntaxhighlight>


==== Video file ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a video file showing one of the TinyYOLO classes
* Pipeline
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:02.992422192  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:03.048734915  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:03.048770315  8194 0x55b131f7aad0 LOG              tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:7, x:87.942292, y:102.912900, width:244.945642, height:285.130143, prob:16.271288]
0:00:03.048776786  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:03.048784401  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:03.048805819  8194 0x55b131f7aad0 LOG              tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 338,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
   
  ],
  predictions : [
    {
      id : 339,
      enabled : True,
      bbox : {
        x : 87
        y : 102
        width : 244
        height : 285
      },
      classes : [
        {
          Id : 252
          Class : 7
          Label : cat
          Probability : 16.271288
          Classes : 20
        },
      ],
      predictions : [
       
      ]
    },
  ]
}


</syntaxhighlight>


==== Camera stream ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw" ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
0:00:02.493931842  8814 0x557dfec450f0 LOG              tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:01.951234668
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
0:00:02.541405794  8814 0x557dfec450f0 LOG              tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:02.541440570  8814 0x557dfec450f0 LOG              tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:14, x:82.788036, y:126.779761, width:250.107193, height:300.441625, prob:12.457702]
0:00:02.541447102  8814 0x557dfec450f0 LOG              tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:02.541454350  8814 0x557dfec450f0 LOG              tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:02.541476722  8814 0x557dfec450f0 LOG              tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 177,
  enabled : True,
  bbox : {
     x : 0
     y : 0
    width : 416
    height : 416
   },
  classes : [
      
   ],
  predictions : [
    {
      id : 178,
      enabled : True,
      bbox : {
        x : 82
        y : 126
        width : 250
        height : 300
      },
      classes : [
        {
          Id : 101
          Class : 14
          Label : person
          Probability : 12.457702
          Classes : 20
        },
      ],
      predictions : [
       
      ]
    },
  ]
}
</syntaxhighlight>


==== Visualization with inference overlay ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw" ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>


==== Using inference filter ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=2,*inferencedebug*:6 gst-launch-1.0 \
v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! inferencefilter filter-class=8 ! inferencedebug ! fakesink
</syntaxhighlight>

* Output
<syntaxhighlight lang=bash>


0:00:03.255231109 11277 0x55f5ce5cfde0 DEBUG        inferencedebug gstinferencedebug.c:131:gst_inference_debug_transform_ip:<inferencedebug0> transform_ip
0:00:03.255268289 11277 0x55f5ce5cfde0 DEBUG        inferencedebug gstinferencedebug.c:120:gst_inference_debug_print_predictions:<inferencedebug0> Prediction Tree:
{
  id : 169,
  enabled : False,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
    
  ],
  predictions : [
    {
      id : 170,
      enabled : False,
      bbox : {
        x : 101
        y : 96
        width : 274
        height : 346
      },
      classes : [
        {
          Id : 81
          Class : 14
          Label : person
          Probability : 12.842868
          Classes : 20
        },
      ],
      predictions : [
        
      ]
    },
  ]
}
</syntaxhighlight>
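
The filter-class property expects a numerical class id; in the output above, class 14 corresponds to person in the 20-class TinyYOLO label set. Assuming the labels file holds one label per line in class-id order, the available ids can be listed with a one-liner such as:
<syntaxhighlight lang=bash>
# Assumes labels.txt holds one label per line, in class-id order (ids start at 0)
awk '{ printf "%d: %s\n", NR - 1, $0 }' labels.txt
</syntaxhighlight>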




==== Visualization with detection crop ====
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline
===== Example with aspect-ratio property =====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectioncrop aspect-ratio=1/1 ! videoscale ! ximagesink sync=false
</syntaxhighlight>


===== Example with crop-index property =====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectioncrop crop-index=1 ! videoscale ! ximagesink sync=false
</syntaxhighlight>


===== Example with crop-class property =====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectioncrop crop-class=4 ! videoscale ! ximagesink sync=false
</syntaxhighlight>

= Advanced pipelines =

<noinclude>
{{GstInference/Foot|Example pipelines|Example pipelines/NANO}}
</noinclude>



