Image Stitching for NVIDIA Jetson - Getting Started - Evaluating the Stitcher









Requesting the Evaluation Binary

To request an evaluation binary for a specific architecture, please contact us and provide the following information:

  • Platform (e.g., NVIDIA Jetson TX1/TX2, Xavier, Nano)
  • Jetpack version

Features of the Evaluation

To help you test our Stitcher, RidgeRun can provide an evaluation version of the plug-in.

The following table summarizes the features available in both the professional and evaluation version of the element.

Feature                      Professional   Evaluation
Stitcher Examples            Y              Y
GstStitcher Element          Y              Y
Unlimited Processing Time    Y              N (1)
Source Code                  Y              N
Table 1. Features provided in the evaluation version

(1) The evaluation version limits processing to a maximum of 1800 frames (about one minute of video at 30 fps).

Install and test the evaluation binaries

First, install GstCUDA (professional or evaluation version). Then, follow this guide to install OpenCV with CUDA support: https://developer.ridgerun.com/wiki/index.php?title=Install_OpenCV_with_CUDA_support_on_Jetson_Boards
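
Before installing the stitcher binaries, you may want to confirm that the prerequisites are in place. The commands below are a hedged sketch: they assume the OpenCV Python bindings (cv2) were installed along with the libraries and that GstCUDA has already been registered with GStreamer; adapt them to your setup.

# A CUDA-enabled OpenCV build reports at least one CUDA device on a Jetson board.
python3 -c "import cv2; print(cv2.__version__, cv2.cuda.getCudaEnabledDeviceCount())"
# GstCUDA elements should appear in the GStreamer registry after installation.
gst-inspect-1.0 | grep -i cuda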

RidgeRun should have provided you with the following compressed tar package(s):

cuda-stitcher-X.Y.Z-P-J-eval.tar.gz
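
To inspect the contents or install them, first extract the tarball. The commands below are a minimal sketch, assuming the package sits in your current directory; replace X.Y.Z-P-J with the actual version string in the file name you received.

tar -xzf cuda-stitcher-X.Y.Z-P-J-eval.tar.gz
cd cuda-stitcher-X.Y.Z-P-J-eval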

Once extracted, the cuda-stitcher evaluation tarball has the following structure:

cuda-stitcher-X.Y.Z-P-J-eval/
├── examples
│   ├── ImageA.png
│   ├── ImageB.png
│   ├── ocvcuda
│   │   └── stitcher_example
│   └── opencv
│       └── stitcher_example
├── scripts
│   └── homography_estimation.py
└── usr
    └── lib
        └── aarch64-linux-gnu
            ├── gstreamer-1.0
            │   └── libgstcudastitcher.so
            ├── librrstitcher-X.Y.Z.so -> librrstitcher-X.Y.Z.so.0
            ├── librrstitcher-X.Y.Z.so.0 -> librrstitcher-X.Y.Z.so.X.Y.Z
            ├── librrstitcher-X.Y.Z.so.X.Y.Z
            └── pkgconfig
                └── rrstitcher-X.Y.Z.pc

Install the tar package

Copy the binaries into the standard GStreamer plug-in search path:

sudo cp -r ${PATH_TO_EVALUATION_BINARY}/usr /

where PATH_TO_EVALUATION_BINARY is set to the location in your file system where you stored the binaries provided by RidgeRun (i.e., cuda-stitcher-X.Y.Z-P-J-eval).

Then, verify that GStreamer can find the new element:

gst-inspect-1.0 cudastitcher

You should see the inspect output for the evaluation binary.
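
If gst-inspect-1.0 reports that the element does not exist, refreshing the shared library cache and GStreamer's plug-in registry usually helps. This is a hedged troubleshooting sketch, not an official RidgeRun step; the registry cache path assumes the default location for the current user.

# Rebuild the dynamic linker cache so librrstitcher-X.Y.Z.so is found.
sudo ldconfig
# Force GStreamer to re-scan its plug-in directories on the next run.
rm -rf ~/.cache/gstreamer-1.0
gst-inspect-1.0 cudastitcher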

How to use it

Please follow these guides to obtain the homography matrices and to set the parameters based on your needs (a sketch of running the bundled estimation script is shown after this list):

  1. https://developer.ridgerun.com/wiki/index.php?title=Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_estimation
  2. https://developer.ridgerun.com/wiki/index.php?title=Image_Stitching_for_NVIDIA_Jetson/User_Guide/Controlling_the_Stitcher
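
The evaluation tarball ships the estimation helper as scripts/homography_estimation.py. The invocation below is only a hedged sketch: it assumes the script is a regular Python 3 program that prints its options with --help; the actual arguments are documented in the homography estimation guide linked above.

cd ${PATH_TO_EVALUATION_BINARY}
# List the options accepted by the homography estimation helper (assumes a --help flag).
python3 scripts/homography_estimation.py --help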

After collecting the parameters, you can run a GStreamer pipeline like the one below:

LC_HOMOGRAPHY="{\
  \"h00\": 7.3851e-01, \"h01\": 1.0431e-01, \"h02\": 1.4347e+03, \
  \"h10\":-1.0795e-01, \"h11\": 9.8914e-01, \"h12\":-9.3916e+00, \
  \"h20\":-2.3449e-04, \"h21\": 3.3206e-05, \"h22\": 1.0000e+00}"

RC_HOMOGRAPHY="{\
  \"h00\": 7.3851e-01, \"h01\": 1.0431e-01, \"h02\": 1.4347e+03, \
  \"h10\":-1.0795e-01, \"h11\": 9.8914e-01, \"h12\":-9.3916e+00, \
  \"h20\":-2.3449e-04, \"h21\": 3.3206e-05, \"h22\": 1.0000e+00}"\

BORDER_WIDTH=10

gst-launch-1.0 -e cudastitcher name=stitcher \
  left-center-homography="$LC_HOMOGRAPHY" \
  right-center-homography="$RC_HOMOGRAPHY" \
  border-width=$BORDER_WIDTH \
  nvarguscamerasrc maxperf=true sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc maxperf=true sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc maxperf=true sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! perf print-arm-load=true ! queue ! nvvidconv ! nvoverlaysink

You can install the perf element from this repository: https://github.com/RidgeRun/gst-perf. Alternatively, you can remove it from the pipeline without affecting the result.
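
If the three cameras are not connected yet, a synthetic-source variant of the pipeline above can still verify that the evaluation binary loads and stitches. This is a hedged sketch: it assumes the stitcher negotiates the same buffers that nvvidconv produces in the camera pipeline, and the 1280x720 resolution is only an example; use the resolution your homographies were estimated for.

gst-launch-1.0 -e cudastitcher name=stitcher \
  left-center-homography="$LC_HOMOGRAPHY" \
  right-center-homography="$RC_HOMOGRAPHY" \
  border-width=$BORDER_WIDTH \
  videotestsrc is-live=true ! "video/x-raw,width=1280,height=720" ! nvvidconv ! stitcher.sink_0 \
  videotestsrc is-live=true pattern=ball ! "video/x-raw,width=1280,height=720" ! nvvidconv ! stitcher.sink_1 \
  videotestsrc is-live=true pattern=snow ! "video/x-raw,width=1280,height=720" ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink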

Please refer to the link below for more pipelines.

https://developer.ridgerun.com/wiki/index.php?title=Image_Stitching_for_NVIDIA_Jetson/Examples

Other examples

You can also test the stitcher with the following commands, using OpenCV or OpenCV-CUDA as the backend:

cd ${PATH_TO_EVALUATION_BINARY}
# OpenCV-CUDA
./examples/ocvcuda/stitcher_example -f examples/ImageA.png -s examples/ImageB.png
# OpenCV
./examples/opencv/stitcher_example -f examples/ImageA.png -s examples/ImageB.png

Troubleshooting

The first level of debugging to troubleshoot a failing evaluation binary is to inspect the GStreamer debug output. Prefix your pipeline with the GST_DEBUG environment variable, for example:

GST_DEBUG=2 gst-launch-1.0
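
To focus on the stitcher itself and keep the log for a support request, you can raise the debug level for a single category and redirect stderr to a file. The category name cudastitcher and the levels used below are assumptions in this sketch; adjust them to your case.

# Warnings for everything, verbose output for the stitcher only, saved to a file.
GST_DEBUG=2,cudastitcher:6 gst-launch-1.0 <your pipeline> 2> stitcher_debug.log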

If the output doesn't help you figure out the problem, please contact support@ridgerun.com with the GStreamer debug output and any additional information you consider useful.

RidgeRun also offers professional support hours that you can invest in any embedded Linux related task you want to assign to RidgeRun, such as hardware bring-up, application development, GStreamer pipeline fine-tuning, drivers, and more.

