Image Stitching for NVIDIA Jetson/User Guide/Gstreamer



*'''homography-list'''
List of homographies as a JSON-formatted string without spaces or newlines. The homography list can be stored in a JSON file and used in the pipeline with the following format:
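<syntaxhighlight lang=bash>
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`"
</syntaxhighlight>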


The [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Gstreamer#JSON_file| JSON file section]] provides a detailed explanation of the JSON file format for the homography list.


*'''pads'''
Each pad crops the image of its respective stitcher sink to remove the overlap area. Use ''right'' to crop the image that overlaps from the right and ''left'' for the image that overlaps from the left. The pads are used in the pipelines with the following format:
 
<syntaxhighlight lang=bash>
sink_0::right=<crop-size> sink_1::right=<crop-size>
</syntaxhighlight>
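For example, assuming a 100-pixel overlap (the crop size is purely illustrative; the real value comes from your calibration), the pads could be set on the stitcher element as:

<syntaxhighlight lang=bash>
cudastitcher name=stitcher sink_0::right=100 sink_1::right=100
</syntaxhighlight>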
 
*'''sink'''
The sink marks the end of each camera capture pipeline and maps each camera to the respective image index of the homography list. The indices of the stitcher's sinks (sink_0, for example) map directly to the image indices used in the homography list.
*'''camera-matrix'''
Definition of the camera matrix (only if undistortion is necessary). The format is the following:

<syntaxhighlight lang=bash>
"{"fx": double, "fy": double, "cx": double, "cy": double}"
</syntaxhighlight>

*'''distortion-model'''
Distortion model to use (only if undistortion is necessary). The options are: (0) brown-conrady or (1) fisheye.

*'''distortion-parameters'''
Definition of the distortion parameters (only if undistortion is necessary). The format is the following:

<syntaxhighlight lang=bash>
"{"k1": double, "k2": double,  "p1": double, "p2": double, "k3": double, "k4": double, "k5": double, "k6": double}"
</syntaxhighlight>
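As an illustrative sketch, the undistort properties are set on the stitcher element like any other GStreamer property. All numeric values below are hypothetical placeholders, not real calibration results (obtain actual values from the camera calibration process), and the JSON strings are single-quoted here only to satisfy shell quoting:

<syntaxhighlight lang=bash>
cudastitcher name=stitcher \
  camera-matrix='{"fx": 1000.0, "fy": 1000.0, "cx": 960.0, "cy": 540.0}' \
  distortion-model=0 \
  distortion-parameters='{"k1": -0.3, "k2": 0.1, "p1": 0.0, "p2": 0.0, "k3": 0.0, "k4": 0.0, "k5": 0.0, "k6": 0.0}'
</syntaxhighlight>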


== Pipeline Basic Example ==

The pipeline construction examples assume that the homography matrices are stored in the <code>homographies.json</code> file, which contains 1 homography for 2 images.

=== Case: 2 Cameras ===

To perform and display image stitching from two camera sources, the pipeline should look like:

<syntaxhighlight lang=bash>
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink
</syntaxhighlight>


=== Case: 2 Video Files ===

To perform and display image stitching from two video sources, the pipeline should look like:

<syntaxhighlight lang=bash>
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=video_0.mp4 ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  filesrc location=video_1.mp4 ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  stitcher. ! perf print-arm-load=true ! queue ! nvvidconv ! nvoverlaysink
</syntaxhighlight>
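
=== Case: Saving to a File ===

To record the stitched result instead of displaying it, the display sink in the examples above can be replaced with an encode-and-mux branch. This is a sketch assuming H.264/MP4 output, mirroring the save branch produced by the generator tool below; the bitrate and the <code>stitched.mp4</code> filename are illustrative:

<syntaxhighlight lang=bash>
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=stitched.mp4
</syntaxhighlight>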


When defining the homography list, keep the following in mind:

# '''Image indices from 0 to N-1''': if you have N images, you have to use consecutive numbers from '''0''' to '''N-1''' for the target and reference indices. This means that you cannot declare something like <code>target: 6</code> if you have 6 images; the correct index for your last image is '''5'''.


== Basic Pipelines ==
The following tool generates simple pipelines according to the selected elements.


The generated pipelines use our '''perf element''' to measure the performance of the stitcher. It can be downloaded from [https://github.com/RidgeRun/gst-perf this repository]; otherwise, the element can be removed from the pipeline without any problem. In case of performance issues, consider executing the '''/usr/bin/jetson_clocks''' binary.
 
{{Ambox
| type      = notice
| style     = width: 80%;
| textstyle = color: black;
| text      = Watch the input resolution so that the ''nvv4l2h264enc'' encoder can handle the stitching result. This element works with inputs no larger than 4096x4096.
}}
 
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
* {
  box-sizing: border-box;
}
.button {
  background-color: #428BCB;
  border: 1px solid #428BCB;
  color: white;
  padding: 10px 15px;
  text-align: center;
  text-decoration: none;
  display: inline-block;
  font-size: 16px;
  margin: 2px 2px;
  cursor: pointer;
  border-radius: 10px;
}
 
input[type=text], select, textarea {
  width: 100%;
  padding: 8px;
  border: 1px solid #ccc;
  background-color: WhiteSmoke;
  border-radius: 5px;
  resize: vertical;
}
 
input[type=number], select, textarea {
  width: 100%;
  padding: 8px;
  border: 1px solid #ccc;
  background-color: WhiteSmoke;
  border-radius: 5px;
  resize: vertical;
}
 
label {
  padding: 12px 12px 12px 0;
  display: inline-block;
}
 
input[type=submit] {
  background-color: #4CAF50;
  color: white;
  padding: 12px 20px;
  border: none;
  border-radius: 4px;
  cursor: pointer;
  float: right;
}
 
input[type=submit]:hover {
  background-color: #45a049;
}
 
.container {
  border-radius: 5px;
  background-color: #DDE8EC;
  border: 1px solid #ccc;
  padding: 10px;
  width: 70%;
}
 
.col-10 {
  float: left;
  width: 10%;
  margin-top: 6px;
}
 
.col-25 {
  float: left;
  width: 25%;
  margin-top: 6px;
  margin-left: 6px;
}
 
.col-50 {
  float: left;
  width: 50%;
  margin-top: 6px;
}
 
.col-75 {
  float: left;
  width: 75%;
  margin-top: 6px;
}
 
/* Clear floats after the columns */
.row:after {
  content: "";
  display: table;
  clear: both;
}
 
/* Responsive layout - when the screen is less than 600px wide, make the two columns stack on top of each other instead of next to each other */
@media screen and (max-width: 600px) {
  .col-25, .col-50, .col-75, input[type=submit] {
    width: 100%;
    margin-top: 0;
  }
}
 
 
</style>
</head>
 
<body>
 
<br>
<div class="container">
  <form action="/action_page.php" id="gen_form">
    <!-- Ask for type of inputs -->
    <div class="row">
      <div class="col-10">
        <label for="input_type"><b>Inputs</b></label>
      </div>
      <div class="col-25">
        <select id="input_type" name="input_type">
          <option value="" disabled selected>Select the input type</option>
          <option value="camera"> Cameras </option>
          <option value="video"> Video files </option>
        </select>
      </div>
      <div class="col-25">
        <input type="number" min="2" max="32" id="num_inputs" name="num_inputs" placeholder="How many">
      </div>
      <div class="col-25">
        <select id="colorspace" name="colorspace">
          <option value="" disabled selected>Color space</option>
          <option value="RGBA">RGBA</option>
          <option value="GRAY8">GRAY8</option>
        </select>
      </div>
    </div>
    <div class="row">
      <div class="col-10">
        <label for="input_type"><b></b></label>
      </div>
      <div class="col-25">
        <input type="number" min="0" id="width" name="width" placeholder="Width (ex. 1920)">
      </div>
      <div class="col-25">
        <input type="number" min="0" id="height" name="height" placeholder="Height (ex. 1080)">
      </div>
    </div>
    <!-- Ask for output type -->
    <br>
    <hr>
    <div class="row">
      <div class="col-10">
        <b>Output</b>
      </div>
      <div class="col-25">
        <input type="radio" name="output_type" value="save" onclick="enable_save()"> Save video
        <br>
        <input type="radio" name="output_type" value="show" onclick="disable_save()"> Show on screen
      </div>
      <div class="col-25">
        <input type="text" id="output_filename" name="output_filename" placeholder="Insert the output filename" disabled=true>
      </div>
      <div class="col-25">
        <select id="output_format" name="output_format" disabled=true>
          <option value="" disabled selected>Video format</option>
          <option value="mp4">mp4</option>
          <option value="ts">ts</option>
        </select>
      </div>
    </div>
  </form>
</div>
<br>
<button type="button" class="button" onclick="generatePipeline()">Generate pipeline</button>
<br>
 
<!-- SCRIPT -->
<script>
 
var num_inputs  = document.getElementById("num_inputs").value
var input_type  = document.getElementById("input_type").value
var colorspace  = document.getElementById("colorspace").value
var width      = document.getElementById("width").value
var height      = document.getElementById("height").value
 
var output_type = getRadiosOption("output_type")
var output_filename = document.getElementById("output_filename").value
var output_format = document.getElementById("output_format").value
 
function generatePipeline() {
  // Refresh selected variables
  refreshUserVariables();
  // Get pipeline box element
  var pipeline = document.getElementById("pipeline");
 
  // Progressively, add the pipeline elements
  var pipeStr = "";
  pipeStr += getInputsEnvVar();
  pipeStr += getOutputEnvVar();
  pipeStr += "\ngst-launch-1.0 -e cudastitcher name=stitcher \\\n";
  pipeStr += getHomographySettings();
  pipeStr += getInputBranches();
  pipeStr += getOutputBranch();
 
  pipeline.innerHTML = pipeStr;
  pipeline.style.display = "block";
}
 
function getRadiosOption(inputName) {
  var radios = document.getElementsByName(inputName);
  for (var i = 0; i < radios.length; i++) {
    if (radios[i].checked) {
      return radios[i].value;
    }
  }
  return "";
}
 
function refreshUserVariables() {
input_type  = document.getElementById("input_type").value
num_inputs  = document.getElementById("num_inputs").value
colorspace  = document.getElementById("colorspace").value
width      = document.getElementById("width").value
height      = document.getElementById("height").value
 
output_type    = getRadiosOption("output_type")
output_filename = document.getElementById("output_filename").value
output_format  = document.getElementById("output_format").value
}
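
// Note: getHomographyEnvVars() and getHomographyMatrix() below build inline homography
// environment variables; generatePipeline() uses getHomographySettings(), which reads
// homographies.json instead.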
 
function getHomographyEnvVars() {
  tmp = "";
  switch (num_inputs){
    case "2":
      tmp += "HOMOGRAPHY=";
      tmp += getHomographyMatrix();
      tmp += "\n";
      break;
    case "3":
      tmp += "LC_HOMOGRAPHY=";
      tmp += getHomographyMatrix();
      tmp += "\n\nRC_HOMOGRAPHY=";
      tmp += getHomographyMatrix();
      tmp += "\n";
      break;
  }
  return tmp;
}
 
function getHomographyMatrix() {
  tmp = "";
  tmp += "\"{\\\n"
  tmp += "  \\\"h00\\\": 7.3851e-01, \\\"h01\\\": 1.0431e-01, \\\"h02\\\": 1.4347e+03, \\\n";
  tmp += "  \\\"h10\\\":-1.0795e-01, \\\"h11\\\": 9.8914e-01, \\\"h12\\\":-9.3916e+00, \\\n";
  tmp += "  \\\"h20\\\":-2.3449e-04, \\\"h21\\\": 3.3206e-05, \\\"h22\\\": 1.0000e+00}\"";
  return tmp;
}
 
function getOutputEnvVar() {
  if (output_type == "save"){
    return "\nOUTPUT=" + output_filename + "." + output_format + "\n";
  }
  return "";
}


function getInputsEnvVar() {
  tmp = "";
  if (input_type == "video"){
    for (var i=0; i<num_inputs; i++){
      tmp += "\nINPUT_" + i + "=video_" + i + ".mp4";
    }
    tmp += "\n";
  }
  return tmp;
}
 
function getHomographySettings() {
  hSettings = "homography-list=\"`cat homographies.json | tr -d \"\\n\" | tr -d \" \"`\" \\\n"
 
  return hSettings;
}
 
function getInputBranches() {
  errors = "";
  if (input_type == "")
    errors += "  **PLEASE SELECT INPUT TYPE**\n";
  if (num_inputs == "")
    errors += "  **PLEASE SELECT NUMBER OF INPUTS**\n";
  if (width == "")
    errors += "  **PLEASE SELECT WIDTH**\n";
  if (height == "")
    errors += "  **PLEASE SELECT HEIGHT**\n";
  if (errors != "") return errors;
 
  iQueues = "";
 
  for (var i=0; i<num_inputs; i++){
    switch(input_type){
      case "camera":
        src = "  nvarguscamerasrc maxperf=true sensor-id="+i;
        break;
      case "video":
        src = "  filesrc location=$INPUT_"+ i + " ! tsdemux ! h264parse ! nvv4l2decoder";
        break;
      default:
        return "  **ERROR: input type not understood**\n";
    }
    iQueues += src + " ! ";
    iQueues += "nvvidconv ! ";
    iQueues += getVideoCaps();
    iQueues += " ! queue ! stitcher.sink_" + i + " \\\n";
  }
  return iQueues;
}
 
function getVideoCaps() {
  tmp = "\"video/x-raw";
  if (colorspace == "RGBA"){
    tmp += "(memory:NVMM)";
  }
  tmp += ", width=" + width + ", height=" + height + ", format=" + colorspace + "\"";
  return tmp;
}
 
function getOutputBranch() {
  error = "";
  if (output_type == "") error += "  **PLEASE SELECT OUTPUT TYPE**\n";
  if (output_type == "save"){
    if (output_filename == "") error += "  **PLEASE SELECT OUTPUT FILENAME**\n";
    if (output_format == "") error += "  **PLEASE SELECT OUTPUT FORMAT**\n";
  }
  if (error != ""){
    return error;
  }
 
  switch(output_type){
    case "save":
      var mux;
      if (output_format == "mp4") mux = "mp4mux";
      else if (output_format == "ts") mux = "mpegtsmux";
      else return "**FORMAT NOT UNDERSTOOD**";
   
      return "  stitcher. ! perf print-arm-load=true ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! " + mux + " ! filesink location=$OUTPUT"
    case "show":
      return "  stitcher. ! perf print-arm-load=true ! queue ! nvvidconv ! nvoverlaysink "
    default:
      return "DEFAULT";
  }
}
 
function enable_save() {   
  enable_element("output_filename");
  enable_element("output_format");
}
 
function disable_save() {   
  disable_element("output_filename");
  disable_element("output_format");
}
 
function enable_element(element) {
  document.getElementById(element).disabled=false;
}
 
function disable_element(element) {
  document.getElementById(element).disabled=true;
}
 
</script>
 
<!-- FINAL PIPELINE TEXT -->
<br>
<textarea id="pipeline" name="pipeline" cols="140" rows="20"></textarea>
 
</body>
</html>
 
 
= Projector =

For more information on how to use the rr-projector, please follow the Projector User Guide.


= Cuda-Undistort =


For more information on how to use the cuda-undistort, please follow the [[CUDA Accelerated GStreamer Camera Undistort/User Guide/Camera Calibration | Camera Calibration section]].


<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Foot|User Guide/Calibration|User Guide/Additional details}}
</noinclude>