Library Integration for IMU - Preparing the Sensor

From RidgeRun Developer Wiki








Preparing the Sensor

If you intend to integrate the RidgeRun Video Stabilization Library into your application, you may want to wrap the sensor into our interface so that the API, including the timestamps, remains uniform. In that case, you can follow Adding New Sensors for reference.

Determining the Orientation and Timestamp Offset

The orientation and timestamp offset are explained in detail here.

Wrapping the Measurements

If you want to add RVS to your code base without integrating your sensor into the library, you can instead wrap the measurements into the SensorPayload structure.

Assuming that the sensor provides both accelerometer and gyroscope data, we can fill this data into the SensorPayload. First, we construct the Point3d, composed of four members: x, y, z, and timestamp:

  • The accelerometer data is given in acceleration units (typically m/s²).
  • The gyroscope data is given in angular-rate units (typically rad/s).
  • The timestamps are in microseconds.


Info
Important: The timestamp must have the same time reference as in the video. You can see Preparing_the_Video/Relevance_of_Timestamps for reference.
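If the IMU runs on a different clock than the video capture, one common way to put both on the same time reference is to apply a constant offset measured at startup. The helper below is a minimal illustrative sketch, not part of the library; the name AlignToVideoClock and the offset convention are assumptions for this example:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative helper: align an IMU timestamp (in its own clock) to the
// video clock by applying a constant offset sampled at the same instant.
// Both clocks are assumed to tick in microseconds.
// offset_us = video_clock_us - imu_clock_us.
static inline uint64_t AlignToVideoClock(uint64_t imu_timestamp_us,
                                         int64_t offset_us) {
  return static_cast<uint64_t>(static_cast<int64_t>(imu_timestamp_us) +
                               offset_us);
}
```

The aligned value is what you would then store in Point3d::timestamp.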


The following snippet illustrates how to construct the Point3d for both:

// Assume that the data is already in the following variables:
float ax, ay, az;   // Accelerometer
float gx, gy, gz;   // Gyroscope
uint64_t timestamp; // Same for both (it must match the capture time)

Point3d accel;
accel.x = ax;
accel.y = ay;
accel.z = az;
accel.timestamp = timestamp;

Point3d gyro;
gyro.x = gx;
gyro.y = gy;
gyro.z = gz;
gyro.timestamp = timestamp;

After constructing the Point3d, it is time to construct the SensorPayload.

// Assume that the data comes from the snippet before
// Point3d accel;
// Point3d gyro;

SensorPayload reading;
reading.accel = accel;
reading.gyro = gyro;

Reading from an Existing ISensor Adapter

If the sensor is already wrapped as an ISensor, it is usually read from a thread separate from the video capture thread. The process of reading the sensor is the following:

// Creating a sensor and starting it (RB5 IMU for this example)
auto imu_client = rvs::ISensor::Build(rvs::Sensors::kRb5Imu, nullptr);

// Include the frequency, sample rate, sensor ID and orientation array mapping 
std::shared_ptr<rvs::SensorParams> conf = 
    std::make_shared<rvs::SensorParams>(rvs::SensorParams{kFrequency, kSampleRate, kSensorId, {'z', 'y', 'x'}});

imu_client->Start(conf);

// -- In another thread. Read --
std::shared_ptr<rvs::SensorPayload> payload = std::make_shared<rvs::SensorPayload>();
imu_client->Get(payload);

// Now, *payload has the sensor data

// Stop: when finishing the execution
imu_client->Stop();
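The "in another thread" read above typically becomes a polling loop on the worker thread. The sketch below shows the loop shape only; it takes a generic read callable instead of the rvs API, so the name PollSensor and its signature are illustrative, not library functions. With the library, read_once would wrap a call to imu_client->Get(payload):

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <vector>

// Illustrative polling loop: repeatedly invoke a read function until asked
// to stop, collecting each reading. Intended to run on a worker thread,
// with keep_running cleared from the capture/control thread.
template <typename Reading>
std::vector<Reading> PollSensor(const std::function<Reading()>& read_once,
                                std::atomic<bool>& keep_running) {
  std::vector<Reading> readings;
  while (keep_running.load()) {
    readings.push_back(read_once());
  }
  return readings;
}
```

Clearing keep_running from the controlling thread plays the role of imu_client->Stop() in the snippet above.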

You can have a look at the available examples in the Examples Guidebook.

Informing the Integrator About the Data

The integrator must be made aware of the available data through the IntegratorSettings. The following snippet fills in the details:

// Assume that the data comes from the snippet before
// SensorPayload reading;

auto settings = std::make_shared<IntegratorSettings>();
settings->enable_gyroscope = true;
settings->enable_accelerometer = true;
settings->enable_complementary = false;

The IntegratorSettings defines whether the integrator can use the gyroscope and accelerometer data. Gyroscope data is mandatory for the RidgeRun Video Stabilization Library to work, while accelerometer data is optional and improves gyroscope bias correction. The complementary option, in turn, allows complementary-filter-based algorithms to use the accelerometer data to correct the gyroscope bias.
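The rules above (gyroscope mandatory, complementary filtering dependent on accelerometer data) can be expressed as a small consistency check. The sketch below is purely illustrative: the struct mirrors the fields shown earlier, but ValidateSettings is not a library function, just an example of the constraints:

```cpp
#include <cassert>

// Mirrors the IntegratorSettings fields used above (illustrative copy).
struct IntegratorSettingsSketch {
  bool enable_gyroscope = true;
  bool enable_accelerometer = false;
  bool enable_complementary = false;
};

// Returns true when the settings are consistent with the rules in the text:
// gyroscope data is mandatory, and the complementary option needs the
// accelerometer to correct the gyroscope bias.
static bool ValidateSettings(const IntegratorSettingsSketch& s) {
  if (!s.enable_gyroscope) return false;  // gyroscope is mandatory
  if (s.enable_complementary && !s.enable_accelerometer) return false;
  return true;
}
```

A check like this can catch misconfigurations before starting the stabilization pipeline.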