Image Stitching for NVIDIA Jetson/Examples/Other pipelines
Latest revision as of 19:24, 13 February 2024
This page showcases basic usage examples of the cudastitcher element. Most of these pipelines can be generated automatically with the Pipeline Generator Tool.
The homography list is stored in the homographies.json file and contains N-1 homographies for N images. For more information on how to set these values, visit the Controlling the Stitcher wiki page.
All of the examples below assume that there are three inputs and that the homographies file looks like this:
{
  "homographies": [
    {
      "images": { "target": 0, "reference": 1 },
      "matrix": {
        "h00": 1, "h01": 0, "h02": -510,
        "h10": 0, "h11": 1, "h12": 0,
        "h20": 0, "h21": 0, "h22": 1
      }
    },
    {
      "images": { "target": 2, "reference": 1 },
      "matrix": {
        "h00": 1, "h01": 0, "h02": 510,
        "h10": 0, "h11": 1, "h12": 0,
        "h20": 0, "h21": 0, "h22": 1
      }
    }
  ]
}
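The pipelines below flatten this file with tr before passing it to the homography-list property, since the property expects a single-line string. A self-contained sketch of that flattening step (it writes a small stand-in file to /tmp so it can run anywhere):

```shell
# Stand-in for the real homographies.json (only for this example)
cat > /tmp/homographies.json <<'EOF'
{
  "homographies": [
    { "images": { "target": 0, "reference": 1 } }
  ]
}
EOF

# Strip newlines, tabs, and spaces so the JSON fits in a single
# gst-launch property value, exactly as the pipelines below do
HOMOGRAPHY_LIST="$(cat /tmp/homographies.json | tr -d '\n' | tr -d '\t' | tr -d ' ')"
echo "$HOMOGRAPHY_LIST"
```

Note that tr -d ' ' removes every space, including any inside JSON string values; this is safe here because the homographies file contains only numeric values and space-free keys.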
The output of the stitcher can be displayed, saved to a file, streamed, or dumped to fakesink. This applies to all kinds of inputs but is only showcased for camera inputs; make the required adjustments for the other cases as needed.
The perf element is used in some of the examples; it can be downloaded from the RidgeRun Git repository, or simply removed from the pipeline without any issues. Also, if you encounter performance issues, consider executing the /usr/bin/jetson_clocks binary.
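As a sketch of where perf fits (assuming gst-perf is installed), it is typically placed right before the sink so it reports the framerate of the stitched output. The snippet below only assembles the command into a variable and prints it, so the perf stage can be inspected or removed before actually launching the pipeline:

```shell
# Hypothetical example: perf sits between the stitcher output and the
# sink and prints FPS statistics while the pipeline runs. Remove the
# "perf !" stage if the element is not installed.
CMD='gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="$(cat homographies.json | tr -d "\n\t ")" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! perf ! xvimagesink'
echo "$CMD"
```

Run the printed command on a Jetson with the cameras connected; perf logs its measurements to stdout alongside the normal gst-launch output.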
Stitching from cameras
Saving a stitch to MP4
OUTVIDEO=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTVIDEO
Displaying a stitch
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! xvimagesink
Dumping output to fakesink
This option is particularly useful for debugging.
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
  stitcher. ! fakesink
Streaming via UDP+RTP
Set the HOST variable to the receiver's IP address.
HOST=127.0.0.1
PORT=12345

# Sender
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
  stitcher. ! nvvidconv ! nvv4l2h264enc ! rtph264pay config-interval=10 ! queue ! udpsink host=$HOST port=$PORT
# Receiver
gst-launch-1.0 udpsrc port=$PORT ! 'application/x-rtp, media=(string)video, encoding-name=(string)H264' ! queue ! \
  rtph264depay ! avdec_h264 ! videoconvert ! xvimagesink
Stitching videos
Saving a stitch from three MP4 videos
Example pipeline
INPUT_0=video_0.mp4
INPUT_1=video_1.mp4
INPUT_2=video_2.mp4
OUTPUT=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=$INPUT_0 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  filesrc location=$INPUT_2 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTPUT
Example pipeline for x86
INPUT_0=video_0.mp4
INPUT_1=video_1.mp4
INPUT_2=video_2.mp4
OUTPUT=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=$INPUT_0 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  filesrc location=$INPUT_2 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! videoconvert ! x264enc ! h264parse ! mp4mux ! filesink location=$OUTPUT
Stitching images
Saving a stitch from two JPEG images
Example pipeline
INPUT_0=image_0.jpeg
INPUT_1=image_1.jpeg
OUTPUT=/tmp/stitching_result.jpeg

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=$INPUT_0 ! jpegparse ! jpegdec ! nvvidconv ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! jpegparse ! jpegdec ! nvvidconv ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! videoconvert ! jpegenc ! filesink location=$OUTPUT
Example pipeline for x86
INPUT_0=image_0.jpeg
INPUT_1=image_1.jpeg
OUTPUT=/tmp/stitching_result.jpeg

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=$INPUT_0 ! jpegparse ! jpegdec ! videoconvert ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! jpegparse ! jpegdec ! videoconvert ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! videoconvert ! jpegenc ! filesink location=$OUTPUT
Specifying a format
Generating an MP4 stitch from 3 GRAY8 cameras
OUTVIDEO=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=360" ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTVIDEO
Using distorted inputs
Lens distortion correction can be applied to the stitcher with the undistort element.
The Undistort examples wiki shows some basic usage examples. Visit the Getting Started: Getting the Code page to learn more about the element and how to calibrate it.
360 video stitching
360 video stitching can be achieved with the RidgeRun Projector plug-in, which applies projections (such as equirectangular) to the camera inputs.
The Projector examples wiki shows some basic usage examples. Visit the RidgeRun Image Projector wiki to learn more about the plug-in.