<noinclude>
{{GstInference/Head|previous=Example pipelines/IMX8|next=Example Applications|metakeywords=inference metadata,hierarchical metadata|title=GstInference GStreamer example pipelines with hierarchical metadata}}
</noinclude>
<!-- If you want a custom title for the page, un-comment and edit this line:
{{DISPLAYTITLE:GstInference - <descriptive page name>|noerror}}
-->
__NOTOC__
== Sample pipelines ==
The following section contains a tool for generating simple GStreamer pipelines that run a single model of the selected architecture using our hierarchical inference metadata. If you are using an older version, you can check the legacy pipelines section. Please make sure to check the documentation to understand the property usage for each element; a hand-written example is sketched below the element lists.
The required elements are:
* Backend
* Model
* Model location
* Labels
* Source
* Sink
The optional elements include:
* inferencefilter
* inferencecrop
* inferenceoverlay 
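
For reference, the sketch below wires the required elements together in a hand-written detection pipeline. It is only a sketch: the camera device, the model and labels files, and the layer names are illustrative values for TinyYOLOv2 on the TensorFlow backend; take the exact values for your model from the generator in the next section.

<pre>
# Sketch only: adjust the device, model, labels and layer names to your setup.
gst-launch-1.0 \
v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net backend=tensorflow model-location=graph_tinyyolov2_tensorflow.pb \
    labels="$(cat labels.txt)" \
    backend::input-layer=input/Placeholder backend::output-layer=add_8 \
net.src_bypass ! queue ! inferenceoverlay ! videoconvert ! autovideosink sync=false
</pre>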
[[File:Inference example.png|1000px|thumb|center|Detection with new metadata]]
== Pipeline Generator ==
<br>
{{Ambox
|type=notice
|small=left
|issue=NOTE: The following tool provides unoptimized pipelines. Consider removing unnecessary elements.
|style=width:unset;
}}
<br>
<html>
<head>
...
</head>
<body>
<p>The following tool will provide simple pipelines according to the selected elements.</p>

...
         <select id="backend" name="backend" onchange="backend_selection(this.options[this.selectedIndex].value)">
           <option value="" disabled selected>Select your backend</option>
           <option value="edgetpu">EdgeTPU</option>
           <option value="tflite">TFLite</option>
           <option value="tensorflow">TensorFlow</option>
           <option value="tensorrt">TensorRT</option>
           <option value="onnxrt">ONNXRT</option>
           <option value="onnxrt_acl">ONNXRT ACL</option>
           <option value="onnxrt_openvino">ONNXRT OpenVINO</option>
         </select>
       </div>
...
   tinyyolov3: "output_boxes",
   facenetv1: "output"
};
...
       enable_element("inputlayer");
       enable_element("outputlayer");
       break;
     case "onnxrt" :
       search_option("tinyyolov3","TinyYolov3","model","add");
       search_option("facenetv1","FaceNet","model","remove");
       disable_element("inputlayer");
       disable_element("outputlayer");
       break;
     case "tflite" :
...
       disable_element("outputlayer");
       break;
  }
  // Refresh the dependent fields if a model is already selected.
  if (document.getElementById("model").value != "") {
    model_selection();
  }
}


function model_selection() {
  // The labels file follows the model name; the model file name and the
  // layer fields depend on the selected backend.
  document.getElementById("labels").value = label_files[document.getElementById("model").value];
  var tmp_model_location = "";
  var tmp_backend = document.getElementById("backend").value;
  switch (tmp_backend) {
    case "tensorflow":
      document.getElementById("inputlayer").value = input_layers[document.getElementById("model").value];
      document.getElementById("outputlayer").value = output_layers[document.getElementById("model").value];
      tmp_model_location = "graph_" + document.getElementById("model").value + "_tensorflow.pb";
      break;
    case "onnxrt":
      tmp_model_location = "graph_" + document.getElementById("model").value + ".onnx";
      break;
    case "tflite":
      tmp_model_location = "graph_" + document.getElementById("model").value + "_tflite.pb";
      break;
    case "ncsdk":
      tmp_model_location = "graph_" + document.getElementById("model").value + "_ncsdk";
      break;
    default:
      tmp_model_location = "";
      break;
  }
  document.getElementById("model_location").value = tmp_model_location;
}


...
}

function tee_selection() {
   if (model != "") {
...

   if(overlay != "") {
     overlay = " fakesink net.src_bypass ! queue ! inferenceoverlay ";
     var thickness = document.getElementById("thickness").value;
     var fontscale = document.getElementById("fontscale").value;
...
     }
     if(fontscale != "") {
       overlay = overlay + " font-scale=" + fontscale;
     }
     if(style != "") {
...
</body>
</html>
<br>
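
The optional elements are not required for inference, yet they are very useful; check the documentation for more details on their properties. As a rough illustration, the fragment below keeps only one class of detections before drawing them. The class id and property values are placeholders: <code>thickness</code> and <code>font-scale</code> are the same <code>inferenceoverlay</code> properties the generator emits, while <code>filter-class</code> is assumed from the inferencefilter documentation.

<pre>
# Illustrative fragment only: filter the metadata down to class 15, then
# draw the remaining predictions with a thicker line and a larger font.
... net.src_bypass ! queue ! inferencefilter filter-class=15 ! \
    inferenceoverlay thickness=4 font-scale=2 ! videoconvert ! autovideosink sync=false
</pre>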


== Advanced pipelines ==


<noinclude>
{{GstInference/Foot|Example pipelines/IMX8|Example Applications}}
</noinclude>
