Image Stitching for NVIDIA Jetson/User Guide/Gstreamer


<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Head|next=User Guide/Additional Details|previous=User Guide/Calibration|metakeywords=Image Stitching, CUDA, Stitcher, OpenCV, Panorama}}
</noinclude>


Line 13: Line 8:
This page provides a basic description of the parameters to build a cudastitcher pipeline.
This page provides a basic description of the parameters to build a cudastitcher pipeline.


== Cuda-Stitcher ==


To build a cudastitcher pipeline, use the following parameters:


*'''homography-list'''
List of homographies as a JSON formatted string without spaces or newlines. The homography list can be stored in a JSON file and used in the pipeline with the following format:


<syntaxhighlight lang=bash>
homography-list="`cat <json-file> | tr -d "\n" | tr -d " "`"
</syntaxhighlight>

The [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Gstreamer#JSON_file|JSON file section]] provides a detailed explanation of the JSON file format for the homography list.
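
For instance, a minimal sketch that stores the flattened list in a shell variable before launching the pipeline (the <code>HOMOGRAPHIES</code> variable name and the <code>homographies.json</code> path are illustrative):

<syntaxhighlight lang=bash>
# Flatten the JSON file into a single-line string with no spaces or newlines.
HOMOGRAPHIES="`cat homographies.json | tr -d "\n" | tr -d " "`"

# Pass the flattened string to the stitcher through the homography-list property.
gst-launch-1.0 cudastitcher name=stitcher homography-list="$HOMOGRAPHIES" ...
</syntaxhighlight>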


*'''pads'''
The stitcher element can crop the borders of the individual images to reduce the overlap region, which helps to reduce ghosting and to remove unwanted borders. Individual crop parameters for each image can be configured on the GstStitcherPad. Take into account that the crop is applied to the input image before applying the homography, so the crop areas are expressed in pixels of the input image. The following is the list of properties available for the stitcher pad:


*'''bottom:''' number of pixels to crop at the bottom of the image.
*'''left:''' number of pixels to crop at the left side of the image.
*'''right:''' number of pixels to crop at the right side of the image.
*'''top:''' number of pixels to crop at the top of the image.


The pads are used in the pipelines with the following format:


<syntaxhighlight lang=bash>
sink_<index>::right=<crop-size> sink_<index>::left=<crop-size> sink_<index>::top=<crop-size> sink_<index>::bottom=<crop-size>
</syntaxhighlight>


You can find examples [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Gstreamer#Controlling_the_Overlap|here]].
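
For a quick illustration (the crop values are arbitrary), cropping 64 pixels from the right of the first input and 64 pixels from the left of the second would look like:

<syntaxhighlight lang=bash>
# Arbitrary crop values; only the pads you configure are cropped.
gst-launch-1.0 cudastitcher name=stitcher sink_0::right=64 sink_1::left=64 ...
</syntaxhighlight>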
 
*'''sink'''
The sink marks the end of each camera capture pipeline and maps each of the cameras to the respective image index of the homography list.
 
=== Pipeline Basic Example ===


The pipeline construction examples assume that the homography matrices are stored in the <code>homographies.json</code> file, which contains 1 homography for 2 images.


==== Case: 2 Cameras ====


To perform and display image stitching from two camera sources, the pipeline should look like:


<syntaxhighlight lang=bash>
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink
</syntaxhighlight>
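
To record the stitched result instead of displaying it, the display sink can be replaced with an encoding branch. The following is a sketch based on the same capture settings; the bitrate value and the <code>stitched.mp4</code> output name are illustrative. Keep an eye on the output resolution: the <code>nvv4l2h264enc</code> encoder works with inputs no larger than 4096x4096.

<syntaxhighlight lang=bash>
# Sketch: encode the stitched output to H.264 and save it as an MP4 file.
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=stitched.mp4
</syntaxhighlight>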


==== Case: 2 Video Files ====
 
To perform and display image stitching from two video sources, the pipeline should look like:
 
<syntaxhighlight lang=bash>
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=video_0.mp4 ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  filesrc location=video_1.mp4 ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  stitcher. ! perf print-arm-load=true ! queue ! nvvidconv ! nvoverlaysink
</syntaxhighlight>

The <code>perf</code> element measures the performance of the stitcher; it can be downloaded from [https://github.com/RidgeRun/gst-perf this repository], or removed from the pipeline without any problem. In case of performance issues, consider executing the <code>/usr/bin/jetson_clocks</code> binary. Also note that <code>tsdemux</code> expects MPEG-TS content; for standard MP4 files, use <code>qtdemux</code> instead.
 
=== JSON file ===


The homography list is a JSON formatted string that defines the transformations and relationships between the images. Here we will explore (with examples) how to create this file in order to stitch the corresponding images.


==== Case: 2 Images ====
[[File:Stitching 2 images example.gif|500px|frameless|none|2 Images Stitching Example|alt=Gif describing how stitching works]]
------


Let's assume we only have 2 images (with indices 0 and 1). These 2 images are related by a '''homography''', which can be computed using the [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Calibration | Calibration Tool]]. The computed homography transforms the '''Target''' image into the '''Reference''' image's perspective.


This way, to fully describe a homography, we need to declare 3 parameters:
*'''Matrix''': the 3x3 transformation matrix.
*'''Target''': the index of the target image (i.e. the image to be transformed).
*'''Reference''': the index of the reference image (i.e. the image used as a reference to transform the target image).

Having this information, we build the Homography JSON file:

<pre>
{
    "homographies":[
        {
            "images":{
                "target":1,
                "reference":0
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}
</pre>


With this file, we are describing a pair of images (0 and 1), where the given matrix will transform image '''1''' based on image '''0'''. In this example, the matrix is a pure translation that shifts image 1 by 510 pixels along the x axis.


==== Case: 3 Images ====

[[File:3 Images Stitching Example.gif|1000px|frameless|none|3 Images Stitching Example|alt=Gif describing how stitching works]]
------


Similar to the 2 images case, we use homographies to connect the set of images. The rule is to use N-1 homographies, where N is the number of images.

One panoramic use case is to compute the homographies for both the left (0) and right (2) images, using the center image (1) as the reference. The homography list JSON file would look like this:

<pre>
{
    "homographies":[
        {
            "images":{
                "target":0,
                "reference":1
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": -510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        },
        {
            "images":{
                "target":2,
                "reference":1
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
</pre>
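
With three images in the homography list, the capture pipeline simply gains a third branch; the sink indices match the image indices used above. A sketch, assuming three <code>nvarguscamerasrc</code> sensors and the same capture settings as the 2-camera example:

<syntaxhighlight lang=bash>
# Sketch: 3-camera stitching; stitcher.sink_N maps to image index N.
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink
</syntaxhighlight>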


==== Your case ====
You can create your own homography list, using the other cases as a guide. Just keep in mind the following rules (a small validation sketch follows the list):


# '''N images, N-1 homographies''': if you have N input images, you only need to define N-1 homographies.
# '''Reference != Target''': you can't use the same image as a target and as a reference for a given homography.
# '''No Target duplicates''': an image can be a target only once.
# '''Image indices from 0 to N-1''': if you have N images, you have to use consecutive numbers from '''0''' to '''N-1''' for the target and reference indices. This means that you cannot declare something like <code>target: 6</code> if you have 6 images; the correct index for your last image is '''5'''.
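
A minimal sketch that checks these rules with <code>jq</code> (assuming <code>jq</code> is installed; the <code>N</code> value and the <code>homographies.json</code> file name are placeholders for your own setup):

<syntaxhighlight lang=bash>
#!/bin/bash
# Hypothetical sanity check for a homography list; requires jq.
N=3                      # number of input images
FILE=homographies.json   # your homography list

# Rule 1: N images, N-1 homographies.
COUNT=$(jq '.homographies | length' "$FILE")
[ "$COUNT" -eq $((N - 1)) ] || echo "Expected $((N - 1)) homographies, found $COUNT"

# Rule 3: no target duplicates (an image can be a target only once).
TOTAL=$(jq '[.homographies[].images.target] | length' "$FILE")
UNIQUE=$(jq '[.homographies[].images.target] | unique | length' "$FILE")
[ "$TOTAL" -eq "$UNIQUE" ] || echo "Duplicate target indices found"

# Rules 2 and 4: reference != target, and all indices within 0..N-1.
jq -e --argjson n "$N" '.homographies | all(
  .images.target != .images.reference and
  (.images.target | . >= 0 and . < $n) and
  (.images.reference | . >= 0 and . < $n))' "$FILE" > /dev/null \
  || echo "Invalid target/reference indices"
</syntaxhighlight>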


== Controlling the Overlap ==
You can set cropping areas for each stitcher input (sink pad). This allows you to:
# Crop an input image without requiring extra processing time.
# Reduce the overlapping area between the images to avoid ghosting effects.

Each input to the stitcher (or sink pad) has 4 properties (<code>left</code>, <code>right</code>, <code>top</code>, <code>bottom</code>). Each of these defines the number of pixels to be cropped in the corresponding direction. Say you want to configure the first pad (<code>sink_0</code>) to crop 64 pixels from the bottom; you can do it as follows inside your pipeline:
<pre>
gst-launch-1.0 cudastitcher sink_0::bottom=64 ...
</pre>

Now let's take a look at more visual examples.


'''Example 1: No Cropping'''

Let's start with a case with no ROIs. We won't configure any pad in this case, since 0 is the default value of each property (meaning no cropping is done):
<pre>
gst-launch-1.0 cudastitcher ! ...
</pre>

[[File:Stitching blocks without ROI.png|800px|frameless|center|No Cropping|alt=Diagram explaining the no cropping in stitching]]

'''Example 2: left/right Cropping'''

This hypothetical case shows how to reduce the overlap between two images by cropping 200 pixels from the blue image and 200 pixels from the red image. The overlapping area is represented in purple; this is also the area over which blending is applied to create a seamless transition between both images.
<pre>
gst-launch-1.0 cudastitcher sink_0::right=200 sink_1::left=200 ! ...
</pre>
[[File:Stitching blocks left-right ROIs.png|800px|frameless|center|Left / Right Cropping|alt=Diagram explaining the left/right cropping on stitching]]

'''Example 3: top/bottom Cropping'''

In this case, we are cropping the 200 bottom pixels of the red image and the 200 top pixels of the blue image:
<pre>
gst-launch-1.0 cudastitcher sink_0::bottom=200 sink_1::top=200 ! ...
</pre>
[[File:Stitching blocks top-bottom ROIs.png|800px|frameless|center|Top / Bottom cropping|alt=Diagram explaining the bottom/top cropping on stitching]]
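
Cropping properties can be combined with the rest of the stitcher configuration. As a sketch, the 2-camera pipeline from above with the left/right overlap reduction applied would look like:

<syntaxhighlight lang=bash>
# Sketch: 2-camera stitching with a left/right overlap reduction of 200 pixels.
gst-launch-1.0 -e cudastitcher name=stitcher sink_0::right=200 sink_1::left=200 \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink
</syntaxhighlight>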


== Projector ==


For more information on how to use the rr-projector, please follow the [[RidgeRun Image Projector/User Guide/Rrprojector | Projector User Guide]].


== Cuda-Undistort ==

For more information on how to use the cuda-undistort, please follow the [[CUDA Accelerated GStreamer Camera Undistort/User Guide/Camera Calibration | Camera Calibration section]].


<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Foot|User Guide/Calibration|User Guide/Additional Details}}
</noinclude>
