GstInference/Example pipelines with hierarchical metadata/PC

From RidgeRun Developer Wiki
Previous: Example pipelines | Index | Next: Example pipelines/NANO




Sample pipelines

The following section contains a tool for generating simple GStreamer pipelines with one model of a selected architecture using our hierarchical inference metadata. If you are using an older version, you can check the legacy pipelines section. Please make sure to check the documentation to understand the property usage for each element. A representative detection pipeline is sketched after the list of required elements below.

The required elements are:

  • Backend
  • Model
  • Model location
  • Labels
  • Source
  • Sink
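
For reference, a generated detection pipeline looks roughly like the minimal sketch below, here assuming TinyYOLOv2 on the TensorFlow backend with a camera source. The model file, labels file, input/output layer names, and the two-branch (sink_model / sink_bypass) layout are placeholder assumptions, not the generator's exact output; substitute the values for your own model and verify the element properties with gst-inspect-1.0.

  # Assumed placeholder values -- substitute the ones that match your model.
  MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
  INPUT_LAYER='input/Placeholder'
  OUTPUT_LAYER='add_8'
  LABELS='labels.txt'

  # Camera frames are teed into a scaled model branch and a bypass branch;
  # the bypass branch carries the original frames plus the inference metadata.
  gst-launch-1.0 \
    v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
    t. ! videoscale ! queue ! net.sink_model \
    t. ! queue ! net.sink_bypass \
    tinyyolov2 name=net backend=tensorflow model-location=$MODEL_LOCATION \
      backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
      labels="$(cat $LABELS)" \
    net.src_bypass ! videoconvert ! xvimagesink sync=false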

The optional elements include:

  • inferencefilter
  • inferencecrop
  • inferenceoverlay
Figure: Detection with new metadata
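
The bounding boxes and labels shown in the figure above are drawn by inferenceoverlay. As a hedged sketch, reusing the placeholder variables from the example above, the overlay is simply inserted on the bypass branch before the sink; its drawing properties (font scale, thickness, style) are optional and can be listed with gst-inspect-1.0 inferenceoverlay.

  # Same assumed pipeline as above, with inferenceoverlay drawing the
  # detections on the bypass branch (default overlay properties).
  gst-launch-1.0 \
    v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
    t. ! videoscale ! queue ! net.sink_model \
    t. ! queue ! net.sink_bypass \
    tinyyolov2 name=net backend=tensorflow model-location=$MODEL_LOCATION \
      backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
      labels="$(cat $LABELS)" \
    net.src_bypass ! inferenceoverlay ! videoconvert ! xvimagesink sync=false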

Pipeline generator

The following tool will provide simple pipelines according to the selected elements.

Optional utilities

The following elements are optional yet very useful. Check the documentation for more details on their properties.
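
As an illustration of how these utilities fit together, the hedged sketch below (again reusing the placeholder variables from the first example) keeps only one class of detections with inferencefilter and then crops the frame to each remaining detection with inferencecrop. The filter-class property name and the class id 15 are assumptions for illustration only; confirm the exact property names with gst-inspect-1.0 inferencefilter and gst-inspect-1.0 inferencecrop.

  # Keep only detections of one class, then crop the frame to each detection
  # (the class id and property name are illustrative, not definitive).
  gst-launch-1.0 \
    v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
    t. ! videoscale ! queue ! net.sink_model \
    t. ! queue ! net.sink_bypass \
    tinyyolov2 name=net backend=tensorflow model-location=$MODEL_LOCATION \
      backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
      labels="$(cat $LABELS)" \
    net.src_bypass ! inferencefilter filter-class=15 ! inferencecrop ! \
      videoconvert ! xvimagesink sync=false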

Advanced pipelines

Previous: Example pipelines | Index | Next: Example pipelines/NANO