RidgeRun Image Stitching for NVIDIA Jetson

From RidgeRun Developer Wiki




Overview of RidgeRun Image Stitching for NVIDIA Jetson

RidgeRun’s Image Stitching for NVIDIA Jetson is a high-performance solution that merges multiple input images into a seamless panorama. By leveraging the GPU, it delivers real-time results even with high-resolution images and multi-camera setups. The product integrates as a GStreamer element, making it easy to include in custom pipelines and combine with downstream image processing or computer vision algorithms.

Key Capabilities:

  • Panoramic Image Stitching: Transforms and merges overlapping images into a single, borderless panorama, enabling immediate use for further processing or display.
  • 360-Degree Image Stitching: Designed for immersive applications such as virtual reality, augmented reality, virtual tours, cartography, and simulation. It provides users with a true sense of presence in digital environments.
  • Real-Time Performance: Optimized with CUDA, the solution scales to multiple inputs and high resolutions without compromising performance.


The image stitching algorithm is composed of several key stages:

  1. Feature Matching – Identifies and matches keypoints between overlapping images (e.g., using SIFT) to determine proper alignment.
  2. Homography Estimation – Establishes the geometric transformation between each pair of overlapping images, represented by a 3×3 homography matrix that accounts for translation, rotation, shear, and scale.
  3. Warping – Applies the calculated transformations to align images in a shared reference frame before combining them into a larger composite.
  4. Blending – Smooths out differences in overlapping regions, minimizing artifacts from varying camera settings (e.g., exposure or gain).
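The warping step above (stage 3) boils down to mapping each pixel coordinate through the 3×3 homography matrix in homogeneous coordinates. The following minimal Python sketch illustrates the math; the matrix values are hypothetical examples for illustration, not output from RidgeRun's calibration tool.

```python
# Sketch of stage 3 (warping): mapping a 2D pixel through a 3x3 homography.
# The matrix H below is a made-up example (slight scale plus a shift),
# NOT a real calibration result.

def apply_homography(H, x, y):
    """Map pixel (x, y) through homography H (3x3 nested list)."""
    # Lift (x, y) to homogeneous coordinates (x, y, 1) and multiply by H
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    # Project back to the image plane by dividing by w
    return xh / w, yh / w

H = [
    [1.1, 0.0, 100.0],   # slight horizontal scale + 100 px shift
    [0.0, 1.1,  50.0],   # slight vertical scale + 50 px shift
    [0.0, 0.0,   1.0],   # affine case: last row keeps w = 1
]

if __name__ == "__main__":
    # Where do the corners of a second image land in the shared frame?
    print(apply_homography(H, 0, 0))    # -> (100.0, 50.0)
    print(apply_homography(H, 640, 0))  # -> approximately (804.0, 50.0)
```

Once every input is warped into this shared reference frame, the blending stage (stage 4) only has to reconcile the overlapping pixels.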

Applications:

  • Multi-camera vision systems
  • Virtual reality and augmented reality experiences
  • Cartography and geographic visualization
  • Education, marketing, and scientific simulations
  • High-quality panoramic photography and video

With support for panoramic and 360-degree stitching of an arbitrary number of inputs, RidgeRun’s solution combines creativity and technology to enable new opportunities in imaging, simulation, and immersive content creation.

Supported platforms

The following hardware platforms are currently supported:

  • PC (x86 / x64).
  • NVIDIA Jetson boards: Orin, TX1, TX2, Xavier AGX, Xavier NX, and Nano.

Capabilities

The stitcher element supports raw video in the following formats:

Input

  • RGBA
  • GRAY8

Output

  • RGBA
  • GRAY8

Examples

RidgeRun provides two main types of video stitching using GStreamer:

  • 360 stitching
  • Multi-stream stitching

These stitching methods can be achieved by combining the GstProjector and GstStitcher elements. All examples use memory type NVMM and are tested in performance mode using jetson_clocks.sh on NVIDIA Jetson platforms.

Resolutions and Definitions:

  • HD: 1920×1080 (WIDTH=1920, HEIGHT=1080)
  • INPUT_#: Path to input video file number #
  • OUTPUT / FILESINK: Path of the output file
  • L#: Lens identifier
  • R#: Lens radius
  • CX#: Center position on the X-axis
  • CY#: Center position on the Y-axis
  • RX#: Rotation around the X-axis
  • RY#: Rotation around the Y-axis
  • RZ#: Rotation around the Z-axis

The values L#, R#, CX#, CY#, RX#, RY#, and RZ# are obtained from the calibration process and are essential for accurate stitching and projection with GstProjector.
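Before launching either pipeline, the shell variables it references must be set. The values below are hypothetical placeholders to show the expected shape; real lens and rotation values come from RidgeRun's calibration tool, not from this snippet.

```shell
# Placeholder values for the pipeline variables. The calibration
# parameters (L#, R#, CX#, CY#, RX#, RY#, RZ#) shown here are made up
# for illustration only -- use the output of the calibration process.
WIDTH=1920
HEIGHT=1080
INPUT_0=left_fisheye.mp4     # hypothetical file names
INPUT_1=right_fisheye.mp4
OUTPUT=stitched_360.mp4

L0=0 R0=960 CX0=960 CY0=540 RX0=0 RY0=0 RZ0=0
L1=1 R1=960 CX1=960 CY1=540 RX1=0 RY1=180 RZ1=0

echo "Stitching ${INPUT_0} and ${INPUT_1} at ${WIDTH}x${HEIGHT} -> ${OUTPUT}"
```

With these variables exported, the pipelines below can be pasted into a shell as-is.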

360 stitching

Using GstProjector and GstStitcher, it is possible to take the input from two fisheye cameras and generate a 360-degree stitched video.

Before running the stitching pipeline, you must first generate a calibration file using RidgeRun’s calibration tool. This file provides the lens distortion correction parameters necessary for proper projection and blending.

Once you have the calibration file, you can use the following GStreamer pipeline to stitch the fisheye inputs and produce a 360° projection.

gst-launch-1.0 -e cudastitcher name=stitcher \
    homography-list="`cat result.json | tr -d " \t\n"`" \
    filesrc location="$INPUT_0" ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),width=$WIDTH,height=$HEIGHT" ! rrfisheyetoeqr radius=$R0 lens=$L0 center-x=$CX0 center-y=$CY0 rot-x=$RX0 rot-y=$RY0 rot-z=$RZ0 name=proj0 ! queue ! stitcher.sink_0 \
    filesrc location="$INPUT_1" ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),width=$WIDTH,height=$HEIGHT" ! rrfisheyetoeqr radius=$R1 lens=$L1 center-x=$CX1 center-y=$CY1 rot-x=$RX1 rot-y=$RY1 rot-z=$RZ1 name=proj1 ! queue ! stitcher.sink_1 \
    stitcher. ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),width=$WIDTH,height=$HEIGHT" ! nvv4l2h264enc bitrate=30000000 ! h264parse ! queue ! qtmux ! filesink location=$OUTPUT

The output will provide a seamless panoramic view based on the fisheye inputs.


Multiple streams stitching

The GstStitcher also supports stitching multiple input video streams into a single, larger frame. This is useful for surveillance systems, tiled displays, or benchmarking video throughput.

In the following example, six video sources are stitched together into a single composite image.

gst-launch-1.0 -ev cudastitcher name=stitcher homography-list="`cat homography.json | tr -d " \n"`" \
    filesrc location=$INPUT_0 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=$WIDTH, height=$HEIGHT, format=RGBA" ! queue ! stitcher.sink_0 \
    filesrc location=$INPUT_1 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=$WIDTH, height=$HEIGHT, format=RGBA" ! queue ! stitcher.sink_1 \
    filesrc location=$INPUT_2 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=$WIDTH, height=$HEIGHT, format=RGBA" ! queue ! stitcher.sink_2 \
    filesrc location=$INPUT_3 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=$WIDTH, height=$HEIGHT, format=RGBA" ! queue ! stitcher.sink_3 \
    filesrc location=$INPUT_4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=$WIDTH, height=$HEIGHT, format=RGBA" ! queue ! stitcher.sink_4 \
    filesrc location=$INPUT_5 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=$WIDTH, height=$HEIGHT, format=RGBA" ! queue ! stitcher.sink_5 \
    stitcher. ! perf ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), width=2218, height=1080, format=I420" ! queue ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=$FILESINK

The pipeline above produces a single composite frame containing all six input streams.


Thor Performance

The performance obtained by this element is shown in the following table for different resolutions, comparing FPS, CPU usage, and GPU usage.

Table 1: Performance of RidgeRun's Video Stitching

  Resolution | Video stitching        | CPU (%) | GPU (%) | FPS
  -----------|------------------------|---------|---------|-------
  HD         | 360 stitching          |         |         |
  HD         | Multi-stream stitching | 1.95626 | 20.36   | 25.251
  4K         | 360 stitching          |         |         |
  4K         | Multi-stream stitching | 0.92878 | 25.25   | 4.501

Getting Started

To learn more, please refer to the Image Stitching Getting the Code wiki page.

How to Purchase



For direct inquiries, please refer to the contact information available on our Contact page. Alternatively, you may complete and submit the form provided at the same link. We will respond to your request at our earliest opportunity.





