Library Integration for IMU - Video Undistortion

From RidgeRun Developer Wiki

Introduction

The previous step determined the distortion quaternion, which represents the rotation of the image that causes the video instability. In this case, we assume that all of the movement can be expressed in terms of rotations and corrected by re-orienting the image on a spherical surface. For this, it is crucial to know how the camera behaves in real-world coordinates for the proper re-orientation. This section covers the process of undistorting the image using the RidgeRun Video Stabilization Library.

Using the Library to Undistort

The RidgeRun Video Stabilization Library performs the undistortion using an accelerated execution backend. It receives two images (one input, one output), the distortion quaternion from the previous step, and the field-of-view (FOV) scale, where a value greater than 1 means that the resulting image will be zoomed out.

Overall, the technique behind the undistortion is the use of the fish-eye camera model with fisheye lens distortion, where the camera matrix is given by:

    K = | fx   0   cx |
        |  0  fy   cy |
        |  0   0    1 |

where fx and fy are the focal distances in the x and y axes, and cx and cy are half of the input width and height, respectively. For better stabilization results, it is highly recommended to calibrate the camera to get the intrinsic camera matrix. The following section will dig into this topic in detail.

After that, the camera matrix is scaled in order to scale the image. The new camera matrix is given by:

    K' = | fx/s     0   w/2 |
         |    0  fy/s   h/2 |
         |    0     0     1 |

where fx/s and fy/s are fx and fy multiplied by the inverse of the FOV scale s, and w/2 and h/2 are the halves of the output width and height. This matrix is computed automatically and no intervention is needed by the user.

Apart from the intrinsic camera matrix and the new camera matrix, it is possible to provide the fish-eye distortion parameters (often known as k1, k2, k3 and k4). Getting the distortion parameters requires camera calibration.

The map creation involves those two camera matrices and a rotation matrix extracted from the quaternion, as done in OpenCV's initUndistortRectifyMap.

Calibration Process

The calibration process to obtain the matrices required for undistortion is explained in Calibration Process/Camera Calibration.

Create the Runtime Settings

This section applies in case you want to integrate RVS into your code base.

The first step is to create the runtime settings. This allows the backend to be configured with application-specific details, such as work sizes, queues, platforms, or contexts in OpenCL, and devices and streams in CUDA. If the runtime settings are not provided, the backend creates default runtime settings: it selects the first available device, uses default work sizes, and creates its own working queue.

For the OpenCV (or CPU) execution, there is no need to use the runtime settings. However, for OpenCL, you can adjust any of the following:

  • Local Size: Determines the size of the workgroup (default 8x4).
  • Device Index: Specifies which device the backend must take from the platform (used in case context and queues are not defined).
  • Platform Index: Specifies which platform the backend must take from the system (used in case context and queues are not defined).
  • Context: Sets an existing context to use (does not use the device nor the platform index).
  • Queue: Sets an existing work queue to use (does not use the device nor the platform index).

The following snippets indicate two examples of how to create the runtime settings for the backend:

// Create the settings
auto settings = std::make_shared<OpenCLRuntimeSettings>();

// Set the devices to create a new context within the backend
settings->platform_index = 0;
settings->device_index = 0;

// Ready to use

Or:

// Create the settings
uint platform_index = 0;
uint device_index = 0;
auto settings = std::make_shared<OpenCLRuntimeSettings>();

// Create the platform
std::vector<cl::Platform> platforms;
cl::Platform::get(&platforms);
cl_context_properties properties[] = 
    {CL_CONTEXT_PLATFORM, (cl_context_properties)(platforms[platform_index])(), 0};

// Create the context
auto context = std::make_shared<cl::Context>(CL_DEVICE_TYPE_GPU, properties);
std::vector<cl::Device> devices = context->getInfo<CL_CONTEXT_DEVICES>();

// Create the queue
cl_int err = CL_SUCCESS;
auto queue = std::make_shared<cl::CommandQueue>(*context, devices[device_index], 0, &err);

// Set the settings
settings->platform_index = platform_index;
settings->device_index = device_index;
settings->context = context;
settings->queue = queue;

Create the Undistort Instance

The undistort instance must match the runtime settings. At this moment, the available undistort instances are:

  • kFishEyeOpenCV: uses the OpenCV backend.
  • kFishEyeOpenCL: uses the OpenCL backend.
  • kFishEyeCUDA: coming soon.

To create a new undistort instance:

// Assumes that settings is a valid OpenCLRuntimeSettings shared pointer
auto undistort_cl =
      IUndistort::Build(UndistortAlgorithms::kFishEyeOpenCL, settings);

You can leave settings as nullptr to get a backend with default settings.

Correcting the Image

Once the undistort instance is created, set the camera matrices as follows:

 undistort->SetCameraMatrices(kCamMatrix, kDistCoeffs, kCalSize[0],
                              kCalSize[1]);

After that, you can use the Apply method:

// Assumes that undistort_cl is created as above,
// inimage and outimage are IImage shared pointers,
// and rotation is a quaternion produced by the
// stabilization algorithm
std::shared_ptr<IImage> inimage, outimage;
Quaternion<double> rotation;
double fov = 1.5;

undistort_cl->Apply(inimage, outimage, rotation, fov);
Note: both inimage and outimage must be valid images with pre-allocated memory.

After invoking the Apply method, outimage contains the resulting image, ready to use.