RidgeRun Video Stabilization Library/Library Integration for IMU

<noinclude>
{{RidgeRun Video Stabilization Library/Head|previous=API Reference/Examples Guidebook|next=Library Integration for IMU/Preparing the Video|metakeywords=imu,stabilizer,rb5,jetson,nvidia,xilinx,amd,qualcomm}}
</noinclude>
{{DISPLAYTITLE:Library Integration for IMU|noerror}}

This section covers how to construct a video stabilizer that uses IMU data for motion estimation.


# [[RidgeRun Video Stabilization Library/Library Integration for IMU/Preparing the Video|'''Preparing the Video''']]: details how to construct the '''Image''' objects, covering their format and timestamps. Each video frame must have a supported format and a timestamp that uses the same time reference as the IMU measurements, expressed in microseconds.
# [[RidgeRun Video Stabilization Library/Library Integration for IMU/Preparing the IMU Measurements|'''Preparing the IMU Measurements''']]: details how to construct the SensorPayload objects, covering their format and timestamps. Each measurement must have a timestamp with the same reference as the video frames, expressed in microseconds, and the angular rates must be in radians/second.
# [[RidgeRun Video Stabilization Library/Library Integration for IMU/Measurement Integration|'''Measurement Integration''']]: the vector of measurements (SensorPayload) must be integrated and transformed into softened quaternions using one of the Integration algorithms.
# [[RidgeRun Video Stabilization Library/Library Integration for IMU/Interpolation|'''Interpolation''']]: the integrated quaternions must be interpolated and aligned to each frame's timestamp so that the angle matches the actual position of the camera when the image was captured. If the images do not carry timestamps, the timestamps must be aligned accordingly before this step.
# [[RidgeRun Video Stabilization Library/Library Integration for IMU/Computing the Stabilization|'''Computing the Stabilization''']]: once the rotations for the image of interest are known, the stabilization stage estimates the distortion the video has suffered throughout its movement history by applying one of the stabilization algorithms.
# [[RidgeRun Video Stabilization Library/Library Integration for IMU/Video Undistortion|'''Video Undistortion''']]: with the distortion computed, the image can be undistorted using any of the available execution backends.
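As a concrete sketch of the integration step, the following self-contained C++ snippet performs a simple first-order integration of gyroscope measurements into orientation quaternions. The struct and function names (`GyroSample`, `Quaternion`, `IntegrateAll`) are illustrative assumptions, not the library's actual API, and the library's Integration algorithms additionally soften the resulting quaternions, which this sketch omits.

```cpp
// Illustrative sketch only: first-order integration of gyroscope
// measurements (rad/s, microsecond timestamps) into orientation
// quaternions. Names are assumptions, not the library's actual API.
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

struct Quaternion { double w, x, y, z; };

// One gyroscope measurement: angular velocity in rad/s, timestamp in us.
struct GyroSample { double wx, wy, wz; int64_t timestamp_us; };

// Compose the running orientation with the incremental rotation that the
// measured angular velocity produces over dt seconds.
Quaternion Integrate(const Quaternion &q, const GyroSample &s, double dt) {
  const double norm = std::sqrt(s.wx * s.wx + s.wy * s.wy + s.wz * s.wz);
  const double angle = norm * dt;               // radians turned during dt
  if (angle < 1e-12) return q;                  // negligible rotation
  const double ax = s.wx / norm, ay = s.wy / norm, az = s.wz / norm;
  const double half = angle / 2.0;
  const Quaternion d{std::cos(half), ax * std::sin(half),
                     ay * std::sin(half), az * std::sin(half)};
  // Hamilton product q * d applies the incremental rotation.
  return {q.w * d.w - q.x * d.x - q.y * d.y - q.z * d.z,
          q.w * d.x + q.x * d.w + q.y * d.z - q.z * d.y,
          q.w * d.y - q.x * d.z + q.y * d.w + q.z * d.x,
          q.w * d.z + q.x * d.y - q.y * d.x + q.z * d.w};
}

// Integrate a whole measurement vector; each sample is held constant until
// the next one arrives (timestamps are in microseconds).
std::vector<Quaternion> IntegrateAll(const std::vector<GyroSample> &samples) {
  std::vector<Quaternion> out;
  Quaternion q{1.0, 0.0, 0.0, 0.0};             // start at identity
  out.push_back(q);
  for (size_t i = 0; i + 1 < samples.size(); ++i) {
    const double dt =
        (samples[i + 1].timestamp_us - samples[i].timestamp_us) * 1e-6;
    q = Integrate(q, samples[i], dt);
    out.push_back(q);
  }
  return out;
}
```

A measurement of &pi; rad/s around Z held for one second, for example, integrates to a half-turn about Z.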
We also provide an [[RidgeRun Video Stabilization Library/Library Integration for IMU/Example Application|'''Example Application''']] detailing this process.
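For the interpolation step specifically, a minimal self-contained sketch (again with assumed, illustrative names rather than the library's actual API) is spherical linear interpolation (slerp) between the two integrated orientations whose timestamps bracket the frame timestamp:

```cpp
// Illustrative sketch only: aligning integrated orientations to a frame
// timestamp via spherical linear interpolation (slerp). Names are
// assumptions, not the library's actual API.
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

struct Quaternion { double w, x, y, z; };

// An integrated orientation stamped in the IMU time base (microseconds).
struct StampedOrientation { Quaternion q; int64_t timestamp_us; };

// Slerp between two unit quaternions; t in [0, 1].
Quaternion Slerp(Quaternion a, const Quaternion &b, double t) {
  double dot = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
  if (dot < 0.0) {  // take the shorter arc
    a = {-a.w, -a.x, -a.y, -a.z};
    dot = -dot;
  }
  if (dot > 0.9995) {  // nearly parallel: linear interpolation is stable
    return {a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
            a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
  }
  const double theta = std::acos(dot);
  const double sa = std::sin((1.0 - t) * theta) / std::sin(theta);
  const double sb = std::sin(t * theta) / std::sin(theta);
  return {sa * a.w + sb * b.w, sa * a.x + sb * b.x,
          sa * a.y + sb * b.y, sa * a.z + sb * b.z};
}

// Orientation at a frame timestamp: slerp between the two samples that
// bracket it. Assumes `orientations` is sorted and spans the timestamp.
Quaternion InterpolateAt(const std::vector<StampedOrientation> &orientations,
                         int64_t frame_timestamp_us) {
  size_t i = 1;
  while (i + 1 < orientations.size() &&
         orientations[i].timestamp_us < frame_timestamp_us) {
    ++i;
  }
  const StampedOrientation &a = orientations[i - 1];
  const StampedOrientation &b = orientations[i];
  const double t = double(frame_timestamp_us - a.timestamp_us) /
                   double(b.timestamp_us - a.timestamp_us);
  return Slerp(a.q, b.q, t);
}
```

A frame whose timestamp falls exactly halfway between an identity orientation and a 90-degree rotation about Z, for instance, interpolates to a 45-degree rotation about Z.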


<noinclude>
{{RidgeRun Video Stabilization Library/Foot|previous=API Reference/Examples Guidebook|next=Library Integration for IMU/Preparing the Video}}
</noinclude>

Latest revision as of 13:00, 19 August 2024







