Image Stitching for NVIDIA Jetson/User Guide/Camera Calibration

</noinclude>


This page introduces a way to calibrate your camera in order to obtain the parameters used in the homography estimation. The method consists of a Python script that guides you through the calibration process without going into technical details.
Depending on the camera and lenses used for image stitching, lens distortion correction may be required.


== Dependencies ==
Lens distortion correction can be performed with the '''CUDA undistort''' element; more information on the topic can be found on its own wiki page: https://developer.ridgerun.com/wiki/index.php?title=CUDA_Accelerated_GStreamer_Camera_Undistort
If you haven't already, complete the dependencies section of [https://developer.ridgerun.com/wiki/index.php?title=Image_Stitching_for_NVIDIA_Jetson/Getting_Started/Building_Image_Stitching_for_NVIDIA_Jetson this guide].


Print a calibration pattern, such as the [https://github.com/opencv/opencv/blob/master/doc/pattern.png OpenCV Calibration Pattern], and stick it on a flat surface.
Follow the Undistort camera calibration [https://developer.ridgerun.com/wiki/index.php?title=CUDA_Accelerated_GStreamer_Camera_Undistort/User_Guide/Camera_Calibration guide] to obtain the camera matrix and distortion parameters needed for your specific lens and use case.
 
== Setting up the Parameters ==
Go inside the calibration directory of the '''cuda-undistort''' repository
<source lang=bash>
cd calibration
</source>
 
Open the '''settings.json''' file, where you will find the following lines
<source lang=json>
"cameraWidth": 1280,
"cameraHeight": 720,
"chessboardInnerCornersWidth": 6,
"chessboardInnerCornersHeight": 9,
"camIndex": 0,
</source>
 
You need to adjust these parameters so that they match your camera's specifications and your printed calibration pattern.
*'''cameraWidth'''
:: Width, in pixels, of the images captured by the camera.
*'''cameraHeight''' 
:: Height, in pixels, of the images captured by the camera.
 
The points where the diagonally adjacent black (and white) squares of the calibration pattern meet are called the ''chessboard inner corners'', so the following parameters refer to those corners.
*'''chessboardInnerCornersWidth'''
:: Number of inner corners counted horizontally.
*'''chessboardInnerCornersHeight'''
:: Number of inner corners counted vertically.
 
When a camera is connected to the device, an index is used to refer to it. This index is the number that follows ''video'' when listing the video devices with the following command
<source lang=bash>
ls /dev/video*
</source>
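
For example, the command may print something like this (illustrative; the actual devices depend on your setup):

<source lang=bash>
# Illustrative output of: ls /dev/video*
/dev/video0
/dev/video1
</source>

In this case, index 0 refers to ''/dev/video0'' and index 1 to ''/dev/video1''.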
 
*'''camIndex'''
:: This parameter is the index of the camera.
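
For reference, this is a sketch of a complete '''settings.json''' for a hypothetical 1920x1080 camera at ''/dev/video1'' (the values are illustrative; keep any other fields already present in the file unchanged):

<source lang=json>
{
  "cameraWidth": 1920,
  "cameraHeight": 1080,
  "chessboardInnerCornersWidth": 6,
  "chessboardInnerCornersHeight": 9,
  "camIndex": 1
}
</source>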
 
== Calibrating ==
=== Calibration Options ===
The calibration script has the following options
 
{| class="wikitable" style="margin: auto"
|-
! Argument !! Description
|-
| '''-c''', --calibrate || To execute the calibration process.
|-
| '''-p''', --images_path [IMAGES_PATH] || Sets the path to store the calibration images. Default: ''imgs/calibration''
|-
| '''-r''', --remove || Removes the images from the IMAGES_PATH in case it's not empty.
|-
| '''-s''', --settings_path [SETTINGS_PATH] || Sets the path to the settings file. Default: ''settings.json''
|-
| '''-t''', --test || Runs a test routine for the user to see the calibration results.
|-
| '''-v''', --v4l2 || Uses the v4l2 plugin to capture images from the camera. By default, the nvidia plugins are used.
|}
 
=== Running the Calibrator ===
Run the calibration tool as follows
 
<source lang=bash>
python3 calibrationTool.py [OPTIONS]
</source>
 
where [OPTIONS] are options chosen from the table above. Usually, you will want to run the calibration process, overwrite any existing images, and run a test at the end to check the results. For this use case, run the calibrator as follows:
 
<source lang=bash>
python3 calibrationTool.py -c -r -t
# Add the -v flag if you are using a camera that is accessed through v4l2.
</source>
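
If you prefer custom locations for the images and the settings file, the path flags from the table can be combined with the same workflow (the paths below are illustrative):

<source lang=bash>
# Illustrative: calibrate a v4l2 camera, storing images and settings at custom paths
python3 calibrationTool.py -c -r -t -v -p imgs/my_calibration -s my_settings.json
</source>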
 
A window will appear, showing the images captured from the camera. At the beginning, the video will be slow, since the calibration pattern has not been detected yet.
 
[[File:Single Camera Calibration Fail Examples Animation.gif|700px|center|frameless]]
 
Show the calibration pattern to the camera, and you will see three things:
* '''A colored border'''
:: Red: no calibration pattern is being detected.
:: Green: a calibration pattern is being detected.
* '''Saved images'''
:: The number of images saved so far. While a pattern is detected, an image is captured every 2 seconds.
* '''Press (q) to finish'''
:: Press the (q) key to end the capture routine once you have enough pictures saved (40 to 50 is recommended).
 
Also, when the pattern is detected, circles will be drawn over its outermost corners.
 
[[File:Single Camera Calibration Success Examples Animation.gif|700px|center|frameless]]
 
To get a '''good quality calibration''', make sure of the following:
* '''Save 40-50''' images.
* '''Move''' the pattern to fully cover the image area.
* '''Tilt''' the pattern in both horizontal and vertical directions.
 
Once the calibration images are ready, wait for another window to open. It shows the result of applying the computed parameters to the images the camera is capturing at that moment, so you can check whether the parameters are good enough. The window shows the undistorted images for both the ''fisheye'' and the ''Brown-Conrady'' models. Press (q) to exit. If you are not satisfied with the results, run the calibration tool again, following the guidelines above.
 
[[File:Single Camera Calibration Test.png|1100px|center]]
 
The results are saved in JSON format in the '''settings.json''' file by default; the location changes if you used the '''-s''' flag when running the tool. You can copy the needed parameters if you plan to follow the homography estimation guide.
 
Also, the '''cuda-undistort''' element will need the computed camera parameters, passed in as shell variables. For this reason, the parameters are also printed in your terminal so that you can copy them into your pipelines. You will see an output like the one below; choose the ''CAMERA_MATRIX'' and ''DISTORTION_PARAMETERS'' of your preferred model, either ''fisheye'' or ''Brown-Conrady''
 
<source lang=bash>
# =============================================
# The following variables can be used as the
# parameters of the cuda-undistort element.
# Use the parameters of your preferred model.
# =============================================
# =======
# FISHEYE
# =======
CAMERA_MATRIX="{\"fx\":9.5211633874478218e+02, \"fy\":9.4946222068253201e+02, \"cx\":6.8041416457132573e+02, \"cy\":3.1446117133659988e+02}"
DISTORTION_PARAMETERS="{\"k1\":3.8939572818197948e-01, \"k2\":-5.5685725182648649e-01, \"k3\":2.3785352925072494e+00, \"k4\":-1.2037220289124213e+00}"
# =============
# BROWN_CONRADY
# =============
CAMERA_MATRIX="{\"fx\":9.5211633874478218e+02, \"fy\":9.4946222068253201e+02, \"cx\":6.8041416457132573e+02, \"cy\":3.1446117133659988e+02}"
DISTORTION_PARAMETERS="{\"k1\":2.1107496349546324e+01, \"k2\":-2.4383787227376064e+02, \"p1\":-2.4875466379420917e-03, \"p2\":2.2798038164244493e-03, \"k3\":5.9419414118252871e+02, \"k4\":2.1085235633925034e+01, \"k5\":-2.4360553937983042e+02, \"k6\":5.9359831515760391e+02}"
</source>
 
If you need to generate the output above from the saved '''settings.json''' file, just run the tool without arguments:
 
<source lang=bash>
python3 calibrationTool.py
</source>
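
If the calibration was saved to a custom settings file with the '''-s''' flag, point the tool at that file instead (the filename below is illustrative):

<source lang=bash>
python3 calibrationTool.py -s my_settings.json
</source>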
 
== Using the Undistort Element ==
Now that you have the camera matrix and the distortion parameters, copy them as follows to build a pipeline.
 
=== Fisheye ===
<source lang=bash>
CAMERA_MATRIX="{\"fx\":9.5211633874478218e+02, \"fy\":9.4946222068253201e+02, \"cx\":6.8041416457132573e+02, \"cy\":3.1446117133659988e+02}"
DISTORTION_PARAMETERS="{\"k1\":3.8939572818197948e-01, \"k2\":-5.5685725182648649e-01, \"k3\":2.3785352925072494e+00, \"k4\":-1.2037220289124213e+00}"
 
gst-launch-1.0 nvarguscamerasrc \
  ! nvvidconv \
  ! cudaundistort distortion-model=fisheye camera-matrix="$CAMERA_MATRIX" distortion-parameters="$DISTORTION_PARAMETERS" \
  ! nvvidconv \
  ! nvoverlaysink
</source>
 
=== Brown-Conrady ===
<source lang=bash>
CAMERA_MATRIX="{\"fx\":9.5211633874478218e+02, \"fy\":9.4946222068253201e+02, \"cx\":6.8041416457132573e+02, \"cy\":3.1446117133659988e+02}"
DISTORTION_PARAMETERS="{\"k1\":2.1107496349546324e+01, \"k2\":-2.4383787227376064e+02, \"p1\":-2.4875466379420917e-03, \"p2\":2.2798038164244493e-03, \"k3\":5.9419414118252871e+02, \"k4\":2.1085235633925034e+01, \"k5\":-2.4360553937983042e+02, \"k6\":5.9359831515760391e+02}"
 
gst-launch-1.0 nvarguscamerasrc \
  ! nvvidconv \
  ! cudaundistort distortion-model=brown-conrady camera-matrix="$CAMERA_MATRIX" distortion-parameters="$DISTORTION_PARAMETERS" \
  ! nvvidconv \
  ! nvoverlaysink
</source>
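
If your camera is accessed through v4l2 instead of the Argus stack, a similar pipeline can be built with ''v4l2src''. The following is a sketch assuming a camera at ''/dev/video0'' delivering 1280x720 raw video; adjust the device, caps, and distortion model to your setup:

<source lang=bash>
CAMERA_MATRIX="..."          # Use the CAMERA_MATRIX printed by the calibration tool
DISTORTION_PARAMETERS="..."  # Use the DISTORTION_PARAMETERS printed by the calibration tool

gst-launch-1.0 v4l2src device=/dev/video0 \
  ! video/x-raw,width=1280,height=720 \
  ! nvvidconv \
  ! cudaundistort distortion-model=fisheye camera-matrix="$CAMERA_MATRIX" distortion-parameters="$DISTORTION_PARAMETERS" \
  ! nvvidconv \
  ! nvoverlaysink
</source>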


<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Foot|User Guide|User Guide/Homography estimation}}
</noinclude>
