Stitcher element: a few more example pipelines
Image Stitching for NVIDIA® Jetson™
This page showcases basic usage examples of the cudastitcher element. Most of these pipelines can be generated automatically with the Pipeline Generator Tool.
The homography list is stored in the homographies.json file and contains N-1 homographies for N images. For more information on how to set these values, visit the Controlling the Stitcher wiki page.
All of the examples below assume that there are three inputs and that the homographies file looks like this:
{
  "homographies": [
    {
      "images": { "target": 0, "reference": 1 },
      "matrix": {
        "h00": 1, "h01": 0, "h02": -510,
        "h10": 0, "h11": 1, "h12": 0,
        "h20": 0, "h21": 0, "h22": 1
      }
    },
    {
      "images": { "target": 2, "reference": 1 },
      "matrix": {
        "h00": 1, "h01": 0, "h02": 510,
        "h10": 0, "h11": 1, "h12": 0,
        "h20": 0, "h21": 0, "h22": 1
      }
    }
  ]
}
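The pipelines below pass this file to the homography-list property after stripping all whitespace, since the property expects a single-token string. A minimal sketch of that step (the reduced one-homography file here is only for illustration; in practice you would use your real homographies.json):

```shell
# Write a reduced sample file (a single homography) just for this sketch.
cat > homographies.json <<'EOF'
{
  "homographies": [
    {
      "images": { "target": 0, "reference": 1 },
      "matrix": { "h00": 1, "h01": 0, "h02": -510,
                  "h10": 0, "h11": 1, "h12": 0,
                  "h20": 0, "h21": 0, "h22": 1 }
    }
  ]
}
EOF

# Strip newlines, tabs, and spaces in a single pass; this is equivalent to
# the `cat ... | tr -d "\n" | tr -d "\t" | tr -d " "` chain used below.
HOMOGRAPHIES=$(tr -d '\n\t ' < homographies.json)
echo "$HOMOGRAPHIES"
```

The resulting value can then be used directly, e.g. `cudastitcher homography-list="$HOMOGRAPHIES"`.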
The output of the stitcher can be displayed, saved to a file, streamed, or dumped to fakesink. This applies to all kinds of inputs but is only showcased for camera inputs; make the required adjustments for the other cases if you need them.
The perf element is used in some of the examples. It can be downloaded from the RidgeRun Git repository; otherwise, the element can be removed from the pipeline without any issues. Also, if you encounter performance issues, consider executing the /usr/bin/jetson_clocks binary.
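Before running the examples, it can help to confirm which optional elements are installed. A small sketch (assuming gst-inspect-1.0 is on the PATH; the check degrades gracefully when it is not):

```shell
# Report whether a GStreamer element is available in this installation.
check_element() {
  if gst-inspect-1.0 "$1" >/dev/null 2>&1; then
    echo "$1: available"
  else
    echo "$1: not found (install it or remove it from the pipeline)"
  fi
}

check_element perf
check_element cudastitcher
```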
Stitching from cameras
Saving a stitch to MP4
OUTVIDEO=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTVIDEO
Displaying a stitch
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! xvimagesink
Dumping output to fakesink
This option is particularly useful for debugging.
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
  stitcher. ! fakesink
Streaming via UDP+RTP
Set the HOST variable to the receiver's IP address.
HOST=127.0.0.1
PORT=12345

# Sender
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
  stitcher. ! nvvidconv ! nvv4l2h264enc ! rtph264pay config-interval=10 ! queue ! udpsink host=$HOST port=$PORT
# Receiver
gst-launch-1.0 udpsrc port=$PORT \
  ! 'application/x-rtp, media=(string)video, encoding-name=(string)H264' \
  ! queue ! rtph264depay ! avdec_h264 ! videoconvert ! xvimagesink
Stitching videos
Saving a stitch from three MP4 videos
Example pipeline
INPUT_0=video_0.mp4
INPUT_1=video_1.mp4
INPUT_2=video_2.mp4
OUTPUT=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=$INPUT_0 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  filesrc location=$INPUT_2 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTPUT
Example pipeline for x86
INPUT_0=video_0.mp4
INPUT_1=video_1.mp4
INPUT_2=video_2.mp4
OUTPUT=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=$INPUT_0 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
  filesrc location=$INPUT_2 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! videoconvert ! x264enc ! h264parse ! mp4mux ! filesink location=$OUTPUT
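If you do not have three source videos at hand, a helper like the following can render matching test clips with videotestsrc. This is a sketch: the clip length and pattern values are arbitrary choices, and x264enc is assumed to be installed.

```shell
# Write a small helper script that renders three 1080p H.264 test clips
# named video_0.mp4 .. video_2.mp4, matching the INPUT_* variables above.
cat > make_test_clips.sh <<'EOF'
#!/bin/sh
for i in 0 1 2; do
  gst-launch-1.0 -e videotestsrc num-buffers=120 pattern=$i \
    ! video/x-raw,width=1920,height=1080 \
    ! x264enc ! h264parse ! mp4mux ! filesink location=video_$i.mp4
done
EOF
chmod +x make_test_clips.sh

# Run it when GStreamer is available:
# ./make_test_clips.sh
```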
Stitching images
Saving a stitch from two JPEG images
Example pipeline
INPUT_0=image_0.jpeg
INPUT_1=image_1.jpeg
OUTPUT=/tmp/stitching_result.jpeg

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=$INPUT_0 ! jpegparse ! jpegdec ! nvvidconv ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! jpegparse ! jpegdec ! nvvidconv ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! videoconvert ! jpegenc ! filesink location=$OUTPUT
Example pipeline for x86
INPUT_0=image_0.jpeg
INPUT_1=image_1.jpeg
OUTPUT=/tmp/stitching_result.jpeg

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  filesrc location=$INPUT_0 ! jpegparse ! jpegdec ! videoconvert ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! jpegparse ! jpegdec ! videoconvert ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! videoconvert ! jpegenc ! filesink location=$OUTPUT
Specifying a format
Generating an MP4 stitch from three GRAY8 cameras
OUTVIDEO=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=360" ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTVIDEO
Using distorted inputs
Lens distortion correction can be applied to the stitcher inputs with the undistort element.
The Undistort examples wiki shows some basic usage examples. Visit the Getting Started - Getting the code page to learn more about the element and how to calibrate it.
360 video stitching
360 video stitching can be achieved with the RidgeRun Projector plug-in, which applies projections (such as equirectangular) to the camera inputs.
The Projector examples wiki shows some basic usage examples. Visit the RidgeRun Image Projector wiki to learn more about the plug-in.