Video Stabilization for Embedded Systems - Video Stabilization Basics

RidgeRun's Video Stabilization for Embedded Systems is a software library that provides efficient video stabilization for resource-constrained systems. The library uses the hardware units available on each platform to achieve real-time performance on a variety of small devices.

Stages

Motion estimation

This stage reuses the motion estimation already performed during H.264 encoding, taking advantage of the hardware-accelerated H.264 encoders found in many SoCs.
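
RidgeRun does not publish its extraction code on this page. As a rough desktop prototype of the same idea, FFmpeg can decode an H.264 stream and export the motion vectors that the encoder embedded in it; the sketch below uses FFmpeg's export_mvs flag and is illustrative only, not the library's API.

    // Sketch: read back per-block motion vectors from an H.264 stream
    // using FFmpeg's export_mvs decoder flag. Error handling is minimal.
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/motion_vector.h>
    }
    #include <cstdio>

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        AVFormatContext *fmt = nullptr;
        if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
        avformat_find_stream_info(fmt, nullptr);

        const AVCodec *dec = nullptr;
        int vid = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
        if (vid < 0) return 1;
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[vid]->codecpar);

        // Ask the decoder to attach the stream's motion vectors to each frame.
        AVDictionary *opts = nullptr;
        av_dict_set(&opts, "flags2", "+export_mvs", 0);
        avcodec_open2(ctx, dec, &opts);

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vid && avcodec_send_packet(ctx, pkt) == 0) {
                while (avcodec_receive_frame(ctx, frame) == 0) {
                    AVFrameSideData *sd =
                        av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
                    if (sd) {
                        auto *mv = reinterpret_cast<const AVMotionVector *>(sd->data);
                        size_t n = sd->size / sizeof(*mv);
                        // Each mv[i] is one block's displacement:
                        // dx = mv[i].dst_x - mv[i].src_x, dy likewise.
                        std::printf("%zu motion vectors\n", n);
                    }
                    av_frame_unref(frame);
                }
            }
            av_packet_unref(pkt);
        }
        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }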

Motion compensation

This stage smooths the estimated camera trajectory with a low-pass filter that has zero delay at DC, minimizing the latency that traditional low-pass filters introduce.
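
The page does not specify the filter design. One causal family with zero delay at DC is double exponential (Holt) smoothing: it tracks both the level and the trend of the camera trajectory, so a constant-velocity pan passes through with no steady-state lag. A minimal sketch, with alpha and beta as assumed tuning parameters:

    #include <cstdio>

    // Double exponential (Holt) smoother: a causal low-pass with zero
    // steady-state lag for ramp inputs, i.e. zero delay at DC.
    // alpha and beta in (0,1); smaller values smooth more aggressively.
    struct ZeroLagLowPass {
        double alpha, beta;
        double level = 0.0, trend = 0.0;
        bool primed = false;

        double filter(double x) {
            if (!primed) {  // seed the state with the first sample
                level = x;
                primed = true;
                return x;
            }
            double prev = level;
            level = alpha * x + (1.0 - alpha) * (level + trend);
            trend = beta * (level - prev) + (1.0 - beta) * trend;
            return level;
        }
    };

    int main() {
        ZeroLagLowPass lp{0.2, 0.1};
        // Constant-velocity pan (a ramp): the output converges onto the
        // input with no steady-state lag, unlike a plain one-pole filter.
        for (int n = 0; n < 50; ++n) {
            double x = 2.0 * n;  // camera x position, pixels
            std::printf("%2d in=%6.1f out=%6.1f\n", n, x, lp.filter(x));
        }
        return 0;
    }

A plain one-pole low-pass would trail a panning camera by a constant offset; tracking the trend removes that offset without buffering future frames.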

Image warping

This stage applies the final corrective transformation to the image using OpenGL, so the warp runs hardware-accelerated on the GPU.
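
The library's actual shaders are not shown on this page. As an illustration of the technique, the warp can be a fragment shader that samples the input frame through a 3x3 correction matrix; the sketch below assumes a GL(ES) 2.0 context, shader compilation, and a textured full-screen quad are already set up, and all names are placeholders.

    #include <GLES2/gl2.h>  // desktop GL exposes the same calls

    // Vertex shader: draw a full-screen quad and derive [0,1] texture
    // coordinates from clip-space positions.
    static const char *kVertSrc =
        "attribute vec2 a_pos;\n"
        "varying vec2 v_texcoord;\n"
        "void main() {\n"
        "  v_texcoord = a_pos * 0.5 + 0.5;\n"
        "  gl_Position = vec4(a_pos, 0.0, 1.0);\n"
        "}\n";

    // Fragment shader: sample the source frame through a 3x3 correction
    // matrix applied to the texture coordinates (perspective divide
    // included, so the matrix may be a full homography).
    static const char *kFragSrc =
        "precision mediump float;\n"
        "uniform sampler2D u_frame;\n"
        "uniform mat3 u_correction;\n"
        "varying vec2 v_texcoord;\n"
        "void main() {\n"
        "  vec3 p = u_correction * vec3(v_texcoord, 1.0);\n"
        "  gl_FragColor = texture2D(u_frame, p.xy / p.z);\n"
        "}\n";

    // Upload this frame's correction matrix (column-major, as GL expects)
    // before drawing the quad.
    void setCorrection(GLuint program, const GLfloat h[9]) {
        glUseProgram(program);
        glUniformMatrix3fv(glGetUniformLocation(program, "u_correction"),
                           1, GL_FALSE, h);
    }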

Real-time in embedded systems

Of these, the first and last stages are the most resource-intensive. For motion estimation, we use the hardware-accelerated H.264 encoder and extract its motion vectors; most modern platforms, even small ones, are capable of encoding at 30 fps. These motion vectors are aggregated to estimate the overall camera movement.
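
The aggregation scheme is not detailed here; a common robust choice is the per-component median of the block vectors, which discards outliers produced by objects moving independently of the camera. A sketch, assuming dx and dy hold one frame's block displacements:

    #include <algorithm>
    #include <vector>

    struct Vec2 { float x; float y; };

    // Estimate the global (camera) translation of one frame as the
    // per-component median of the block motion vectors; the median is
    // robust against vectors coming from locally moving objects.
    Vec2 estimateGlobalMotion(std::vector<float> dx, std::vector<float> dy) {
        if (dx.empty() || dy.empty()) return {0.0f, 0.0f};
        size_t mid = dx.size() / 2;
        std::nth_element(dx.begin(), dx.begin() + mid, dx.end());
        std::nth_element(dy.begin(), dy.begin() + mid, dy.end());
        return {dx[mid], dy[mid]};
    }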

The camera movement is then smoothed to eliminate undesired perturbations. The resulting correction is applied to the original image using OpenGL, which can also run in real time on these platforms.
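
Tying the stages together: the per-frame correction is the difference between the smoothed camera path and the measured one, and that offset is what the warping stage applies. A sketch reusing the hypothetical ZeroLagLowPass and Vec2 types from the snippets above:

    // Integrate the per-frame global motion into a camera path, smooth
    // it, and return the offset that cancels the residual shake.
    struct Stabilizer {
        ZeroLagLowPass smoothX{0.2, 0.1};  // tuning values are assumptions
        ZeroLagLowPass smoothY{0.2, 0.1};
        double pathX = 0.0, pathY = 0.0;   // integrated camera trajectory

        // dx, dy: this frame's global motion estimate, in pixels.
        // Returns the translation the warping stage should apply.
        Vec2 correct(double dx, double dy) {
            pathX += dx;
            pathY += dy;
            float cx = static_cast<float>(smoothX.filter(pathX) - pathX);
            float cy = static_cast<float>(smoothY.filter(pathY) - pathY);
            return {cx, cy};
        }
    };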

