Image Stitching for NVIDIA Jetson - Getting Started - Evaluating the Stitcher
Requesting the Evaluation Binary
To request an evaluation binary for a specific architecture, please contact us and provide the following information:
- Platform (e.g., NVIDIA Jetson TX1/TX2, Xavier, Nano)
- JetPack version
- The number of cameras to be used with the stitcher
- The kind of lenses to be used and whether or not you need to apply distortion correction
- Input resolutions and framerates
- Expected output resolution and framerate
- Latency requirements
Features of the Evaluation
To help you test our Stitcher, RidgeRun can provide an evaluation version of the plug-in.
The following table summarizes the features available in both the professional and evaluation version of the element.
Feature | Professional | Evaluation |
---|---|---|
Stitcher Examples | Y | Y |
GstStitcher Element | Y | Y |
Unlimited Processing Time | Y | N (1) |
Source Code | Y | N |
(1) The evaluation version limits processing to a maximum of 1800 frames (for example, 60 seconds of video at 30 fps).
Install and test the evaluation binaries
First, install GstCUDA (professional or evaluation version), then install OpenCV with CUDA support by following Compiling OpenCV for Image Stitching. The stitcher requires a specific version of RidgeRun's OpenCV fork, so make sure to follow that guide even if you already have OpenCV with CUDA support.
RidgeRun should have provided you with the following compressed tar package(s):
cuda-stitcher-X.Y.Z-P-J-eval.tar.gz
The provided cuda-stitcher eval version tarball must have the following structure:
cuda-stitcher-X.Y.Z-P-J-eval/
├── examples
│   ├── desert_center.jpg
│   ├── desert_left.jpg
│   ├── desert_right.jpg
│   ├── ImageA.png
│   ├── ImageB.png
│   └── ocvcuda
│       ├── stitcher_example
│       └── three_image_stitcher
├── scripts
│   └── homography_estimation.py
└── usr
    └── lib
        └── aarch64-linux-gnu
            ├── gstreamer-1.0
            │   └── libgstcudastitcher.so
            ├── librrstitcher-X.Y.Z.so -> librrstitcher-X.Y.Z.so.0
            ├── librrstitcher-X.Y.Z.so.0 -> librrstitcher-X.Y.Z.so.X.Y.Z
            ├── librrstitcher-X.Y.Z.so.X.Y.Z
            └── pkgconfig
                └── rrstitcher-X.Y.Z.pc
Install the tar package
Copy the binaries into the standard GStreamer plug-in search path:
sudo cp -r ${PATH_TO_EVALUATION_BINARY}/usr /
where PATH_TO_EVALUATION_BINARY is set to the location in your file system where you stored the binary provided by RidgeRun (e.g., cuda-stitcher-X.Y.Z-P-J-eval).
Then, verify that GStreamer can find the element:
gst-inspect-1.0 cudastitcher
You should see the inspect output for the evaluation binary.
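If gst-inspect-1.0 instead reports that the element could not be found, the per-user GStreamer registry cache may be stale. A common fix (a general GStreamer troubleshooting step, not specific to this plug-in) is to clear the cache so the registry is rebuilt on the next run:

```shell
# Remove the per-user GStreamer registry cache; it is rebuilt automatically
# the next time any GStreamer tool runs (default path on Ubuntu/Jetson).
rm -rf "$HOME/.cache/gstreamer-1.0"

# Inspect again if the tool is available on this system.
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
    gst-inspect-1.0 cudastitcher || true
fi
```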
How to use it
Please follow these guides to obtain the homography matrices and to set the parameters based on your needs:
- https://developer.ridgerun.com/wiki/index.php?title=Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_estimation
- https://developer.ridgerun.com/wiki/index.php?title=Image_Stitching_for_NVIDIA_Jetson/User_Guide/Controlling_the_Stitcher
After collecting the parameters, you can run a GStreamer pipeline like the one below:
LC_HOMOGRAPHY="{\
  \"h00\": 7.3851e-01, \"h01\": 1.0431e-01, \"h02\": 1.4347e+03, \
  \"h10\":-1.0795e-01, \"h11\": 9.8914e-01, \"h12\":-9.3916e+00, \
  \"h20\":-2.3449e-04, \"h21\": 3.3206e-05, \"h22\": 1.0000e+00}"
RC_HOMOGRAPHY="{\
  \"h00\": 7.3851e-01, \"h01\": 1.0431e-01, \"h02\": 1.4347e+03, \
  \"h10\":-1.0795e-01, \"h11\": 9.8914e-01, \"h12\":-9.3916e+00, \
  \"h20\":-2.3449e-04, \"h21\": 3.3206e-05, \"h22\": 1.0000e+00}"
BORDER_WIDTH=10

gst-launch-1.0 -e cudastitcher name=stitcher \
  left-center-homography="$LC_HOMOGRAPHY" \
  right-center-homography="$RC_HOMOGRAPHY" \
  border-width=$BORDER_WIDTH \
  nvarguscamerasrc maxperf=true sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc maxperf=true sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc maxperf=true sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! perf print-arm-load=true ! queue ! nvvidconv ! nvoverlaysink
You can install the perf element from this repository: https://github.com/RidgeRun/gst-perf. It can also be removed from the pipeline without issue.
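Escaping the homography JSON inline, as above, is error-prone. As a sketch, the string the left-center-homography and right-center-homography properties expect can instead be generated with a small shell function (a hypothetical helper, not shipped with the stitcher; the h00–h22 key names follow the pipeline above):

```shell
# Hypothetical helper: build the JSON homography string from nine matrix
# coefficients given in row-major order (h00 h01 h02 h10 h11 h12 h20 h21 h22).
build_homography() {
    printf '{"h00": %s, "h01": %s, "h02": %s, "h10": %s, "h11": %s, "h12": %s, "h20": %s, "h21": %s, "h22": %s}' "$@"
}

# Same left-to-center homography as in the pipeline above, without escaping.
LC_HOMOGRAPHY=$(build_homography 7.3851e-01 1.0431e-01 1.4347e+03 \
                                -1.0795e-01 9.8914e-01 -9.3916e+00 \
                                -2.3449e-04 3.3206e-05 1.0000e+00)
echo "$LC_HOMOGRAPHY"
```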
Please refer to the links below for more pipelines.
https://developer.ridgerun.com/wiki/index.php?title=Image_Stitching_for_NVIDIA_Jetson/Examples
Other examples
You can also test the stitcher with the following commands, using two and three images as sources, respectively.
cd ${PATH_TO_EVALUATION_BINARY}
# OpenCV-CUDA
./examples/ocvcuda/stitcher_example -f examples/ImageA.png -s examples/ImageB.png
./examples/ocvcuda/three_image_stitcher -l examples/desert_left.jpg -c examples/desert_center.jpg -r examples/desert_right.jpg
Troubleshooting
The first level of debugging to troubleshoot a failing evaluation binary is to inspect GStreamer debug output.
GST_DEBUG=2 gst-launch-1.0
If the output doesn't help you figure out the problem, please contact support@ridgerun.com with the output of the GStreamer debug and any additional information you consider useful.
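To capture the debug output in a file that you can attach to a support request, GStreamer's standard environment variables can be used; a sketch (substitute your failing pipeline for the commented placeholder):

```shell
# GST_DEBUG sets the log level (2 = WARNING and above);
# GST_DEBUG_FILE redirects the debug log from stderr to a file.
export GST_DEBUG=2
export GST_DEBUG_FILE="$PWD/stitcher-debug.log"

# Re-run the failing pipeline here, e.g.:
#   gst-launch-1.0 -e cudastitcher name=stitcher ...
# then attach stitcher-debug.log to your message to support@ridgerun.com.
```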
RidgeRun also offers professional support hours that you can invest in any embedded Linux related task you want to assign to RidgeRun, such as hardware bring-up, application development, GStreamer pipeline fine-tuning, driver development, etc.