<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Head|previous=User Guide/Gstreamer|next=Resources|metakeywords=Image Stitching, CUDA, Stitcher, OpenCV, Panorama}}
</noinclude>
{{DISPLAYTITLE:Stitcher element Additional Details|noerror}}


== Additional Details ==
This wiki holds some additional notes and useful details to consider when using the stitcher element.
   
   
=== 360 Video ===
In the case of 360 video stitching, the output video file can be displayed as a 360-degree video. The following command writes the spherical tag to the video file:
<syntaxhighlight lang=bash>
exiftool -XMP-GSpherical:Spherical="true" file.mp4
</syntaxhighlight>
With the spherical tag set, video players like VLC will recognize and display the video as a 360-degree video.
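
If you want to double-check that the tag was written, you can read it back with exiftool (a quick sanity check; it assumes exiftool is installed and the file was tagged as above):

<syntaxhighlight lang="bash">
# Print the spherical tag; it should report "Spherical : true"
exiftool -XMP-GSpherical:Spherical file.mp4
</syntaxhighlight>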
=== Video Cropping ===
The stitcher element applies multiple transformations to the input sources in order to achieve a seamless match with the target. These transformations can, in some cases, distort the output and produce an image of a different size and shape than desired. The image below shows an example of this behavior:


[[File:Stitcher_videocrop_before.png|300px|frameless|none|before videocrop example]]

To crop the stitcher's output into the expected resolution, GStreamer's videocrop element can be used. This element takes four main parameters (top, bottom, left, right), specifying the number of pixels to remove from each side, respectively.

The basic idea for determining these parameters is to take the original (uncropped) output of the stitcher and, based on its dimensions, subtract from each side accordingly.

Using videocrop after the stitcher on your pipeline can reduce the output to only what is needed. The original image can then turn into:


[[File:Stitcher_videocrop_after.png|300px|frameless|none|after videocrop example|alt=Diagram explaining cropping]]
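
As a reference for where videocrop fits, the sketch below crops a hypothetical 3840x1214 output down to 3840x1080; videotestsrc stands in for the stitcher branch, and the caps and crop values are purely illustrative, so replace them with your own pipeline and the parameters computed below:

<syntaxhighlight lang="bash">
# Illustrative only: videotestsrc stands in for the stitcher output.
# top + bottom = 1214 - 1080 = 134 pixels removed, split 50/50.
gst-launch-1.0 videotestsrc \
  ! video/x-raw,width=3840,height=1214 \
  ! videocrop top=67 bottom=67 left=0 right=0 \
  ! video/x-raw,width=3840,height=1080 \
  ! fakesink
</syntaxhighlight>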
 
==== Integration with the homography estimation tool ====

By default, the homography estimation tool limits its output resolution: the height stays the same as the original images, and the width is cropped to the sum of the input widths. For example, stitching two 1920x1080 images produces an output with width = (1920 + 1920) = 3840 and height = 1080.

If you want the stitcher element's output to look the same as the homography estimation output, add a videocrop element at the stitcher's output and determine its parameters as follows.

First, run the homography estimation tool with the <code>--non_crop</code> option. This gives you the complete image, so it is easier to decide what you want to keep and from which direction to crop. With this option, the resulting <code>result.jpg</code> has the dimensions of the complete image.

Then run the tool again without the <code>--non_crop</code> option; the resulting resolution will be as explained before (same height, widths added).
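
As a purely illustrative sketch, the two runs could look like the following; the executable name and input arguments are placeholders, so use the actual invocation from the homography estimation tool documentation:

<syntaxhighlight lang="bash">
# Placeholder invocation: substitute the real tool name and inputs.
# Run 1: full, uncropped panorama; note its dimensions (width_full x height_full).
./homography_estimation --non_crop left.jpg right.jpg

# Run 2: default (cropped) output; note its dimensions (width_crop x height_crop).
./homography_estimation left.jpg right.jpg
</syntaxhighlight>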


From these two runs, obtain the following information:
* <math>width_{full}</math>: the full width of the image without cropping; the same as the stitcher element's output.
* <math>height_{full}</math>: the full height of the image without cropping; the same as the stitcher element's output.
* <math>width_{crop}</math>: the cropped width of the image; the same as the desired output.
* <math>height_{crop}</math>: the cropped height of the image; the same as the desired output.




From there, the videocrop parameters can be determined:

* '''top''': <math>(height_{full}-height_{crop})/2</math>
* '''bottom''': <math>(height_{full}-height_{crop})/2</math>
* '''left''': <math>(width_{full}-width_{crop})</math> if mode is <code>right_fixed</code>; 0 otherwise.
* '''right''': <math>(width_{full}-width_{crop})</math> if mode is <code>left_fixed</code>; 0 otherwise.

The formulas for the top and bottom parameters assume that the transformation produced symmetric deformation in the vertical dimension; that is usually the case if the cameras are parallel to one another. However, depending on your use case, you might need to adjust the proportion between top and bottom accordingly.


That 50/50 assumption is usually a good place to start; adjust from there according to the output.
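
As a quick worked example with made-up dimensions, the shell sketch below computes the videocrop parameters from the four values above, assuming the <code>right_fixed</code> case and the 50/50 vertical split:

<syntaxhighlight lang="bash">
# Hypothetical dimensions: full stitcher output 4220x1342, desired output 3840x1080.
WIDTH_FULL=4220;  HEIGHT_FULL=1342
WIDTH_CROP=3840;  HEIGHT_CROP=1080

TOP=$(( (HEIGHT_FULL - HEIGHT_CROP) / 2 ))     # (1342 - 1080) / 2 = 131
BOTTOM=$(( HEIGHT_FULL - HEIGHT_CROP - TOP ))  # the remaining 131 pixels
LEFT=$(( WIDTH_FULL - WIDTH_CROP ))            # 380 for right_fixed; 0 for left_fixed
RIGHT=0                                        # 0 for right_fixed; the width difference for left_fixed

echo "videocrop top=$TOP bottom=$BOTTOM left=$LEFT right=$RIGHT"
</syntaxhighlight>
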
=== Known Issues ===
From working with multiple clients and integrating the stitcher into their environments, we have gathered the following information regarding some limitations of the element.

The stitcher is a project under constant development, and these issues are likely to be solved in the future; if you would like to sponsor the development of a new feature or enhancement, please get in touch at [mailto:contactus@ridgerun.com '''<u>contactus@ridgerun.com</u>'''].
==== CUDA interpolation algorithm illegal memory access ====
Currently, there is an issue inside CUDA that produces an illegal memory access for some homographies. The issue depends on the homography values as well as on the interpolation algorithm being used. You can find more information about it here: https://github.com/opencv/opencv_contrib/issues/2361

This produces an error in the element that looks like the following:
<syntaxhighlight lang="bash">
illegal memory access was encountered in function 'waitForCompletion'
</syntaxhighlight>
If you get this error, you can change the interpolation algorithm to <code>nearest</code> with the <code>interpolation-algorithm</code> property. This will affect the quality of the image warping but will avoid the issue. Another option is to change the homography slightly and see which value is producing the error.
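
For reference, a minimal sketch of checking and switching the interpolation algorithm is shown below; the element name <code>cudastitcher</code> is used here as a placeholder for the stitcher element, so adapt it to the pipelines used elsewhere in this guide:

<syntaxhighlight lang="bash">
# Illustrative only: "cudastitcher" is a placeholder for the stitcher element name.
# Confirm the property and its accepted values:
gst-inspect-1.0 cudastitcher | grep -A 10 "interpolation-algorithm"
# Then, in your pipeline, set the property on the stitcher element, e.g.:
#   ... cudastitcher interpolation-algorithm=nearest ...
</syntaxhighlight>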
==== DeepStream NVMM integration ====
The stitcher element works with NVMM buffers for DMA acceleration; therefore, it can integrate with other elements that handle the <code>video/x-raw(memory:NVMM)</code> media format. However, there is an issue when working with NVIDIA's DeepStream elements: even though those elements use the same caps (<code>video/x-raw(memory:NVMM)</code>) as the stitcher, so the negotiation occurs as expected, the underlying buffer format is different and produces a runtime error.
   
   


<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Foot|User Guide/Gstreamer|Resources}}
</noinclude>
