Synchronizing Multiple Cameras
With the Jetson TX1, you can capture up to six video streams from six different cameras. For video analytics, it is often important that the camera sensors capture their frames at the same time. Depending on the frame rate and the analytics being performed, the tolerance for the difference in capture time can range from +/- one frame down to a few nanoseconds, measured from when the first pixel of the frame is captured.
Hardware Synchronization - I2C Mode
Thanks to Jürgen Stelbrink at Auvidea for much of the following hardware information, which has been paraphrased below.
Ideally, you start by choosing a sensor designed to support perfectly in-sync video capture. One sensor would be put in master mode, generating the critical timing signals, and the other sensors put in slave mode.
If your sensor of choice doesn't support a master/slave configuration, then you have to simulate such an arrangement. One camera acts as the master and supplies the clock to the other cameras: the clock oscillator on each slave camera is removed and the clock (signal and ground) is connected from the master to the slaves. The cameras then share a common, in-sync clock source. Next, you need to synchronize the start of exposure for each camera. If the sensor doesn't provide a start-of-exposure pin, the sensor must be controlled via registers, typically over the I2C bus. The simplest solution is to connect all the sensors to the same I2C bus using the same address. This violates the I2C specification, but in practice it should work, since the SCL clock and SDA data signals are open collector with pull-up resistors. When the same sensor model is used for all cameras, the I2C responses from all sensors should be identical. You may also want hardware support for addressing each sensor individually.
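As a rough illustration of the shared-address approach, the sketch below issues a single register write that reaches every sensor on the bus at once, using the standard Linux i2c-dev interface. The bus device, the 0x36 sensor address, and the register layout are hypothetical placeholders; substitute the values from your sensor's datasheet.

```c
/*
 * Minimal sketch: writing one start-of-streaming register that reaches
 * every sensor sharing the same I2C bus and (spec-violating) address.
 * Bus number, sensor address, and register values are hypothetical.
 */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);   /* I2C bus the sensors share */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, I2C_SLAVE, 0x36) < 0) {  /* hypothetical shared address */
        perror("ioctl");
        close(fd);
        return 1;
    }
    /* One transaction reaches all sensors simultaneously, so they all
     * start exposure together. The 16-bit register 0x0100 = 0x01
     * ("stream on") convention is an assumption; check the datasheet. */
    unsigned char cmd[3] = { 0x01, 0x00, 0x01 };
    if (write(fd, cmd, sizeof(cmd)) != sizeof(cmd)) {
        perror("write");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}
```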
Most important is that all cameras share the same clock. If they do not, the cameras may drift apart by the precision of their crystal oscillators, typically 50 ppm, which amounts to a drift of one frame every 20,000 frames at 1080p60.
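As a quick sanity check of that figure, the short program below works the arithmetic: a 50 ppm tolerance means the oscillators can disagree by one part in 20,000, so free-running cameras can be a full frame apart after 20,000 frames, roughly five and a half minutes at 60 fps.

```c
/* Worked example: frames until two free-running cameras with 50 ppm
 * crystals can be one full frame apart at 1080p60. */
#include <stdio.h>

int main(void)
{
    const double ppm = 50.0;   /* crystal oscillator tolerance */
    const double fps = 60.0;   /* 1080p60 frame rate */

    double frames = 1e6 / ppm;          /* 20,000 frames of drift budget */
    double seconds = frames / fps;      /* ~333 s, about 5.5 minutes */

    printf("1 frame of drift every %.0f frames (~%.0f s at %.0f fps)\n",
           frames, seconds, fps);
    return 0;
}
```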
Hardware Synchronization - PWM Trigger Mode
This mode is exposed by this camera from e-con Systems: https://www.e-consystems.com/multiple-csi-cameras-for-nvidia-jetson-tx2.asp. It requires a PWM signal that triggers the capture on all the cameras simultaneously.
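As a hedged sketch of driving such a trigger from Linux, the snippet below configures a hardware PWM channel through the standard sysfs PWM interface. The pwmchip0/pwm0 channel and the 33 ms period (about 30 trigger pulses per second) are assumptions; the actual wiring, pulse width, and timing depend on the camera and carrier board.

```c
/*
 * Minimal sketch: configuring a PWM trigger via the Linux sysfs PWM
 * interface. pwmchip0/pwm0 and the 30 fps timing are assumptions.
 */
#include <stdio.h>

static int write_str(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s", value);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Expose channel 0 of the first PWM controller. */
    write_str("/sys/class/pwm/pwmchip0/export", "0");
    /* 33,333,333 ns period ~= 30 trigger pulses per second. */
    write_str("/sys/class/pwm/pwmchip0/pwm0/period", "33333333");
    /* 1 ms active pulse; check the sensor's required pulse width. */
    write_str("/sys/class/pwm/pwmchip0/pwm0/duty_cycle", "1000000");
    /* Start generating the trigger signal for all cameras at once. */
    write_str("/sys/class/pwm/pwmchip0/pwm0/enable", "1");
    return 0;
}
```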
Software Synchronization
At the software level, each frame is given a timestamp by the Linux kernel V4L2 subsystem, so tracking each frame and matching frames taken at the same time by different cameras is easy. The timestamp associated with each frame is maintained by GStreamer all the way through the pipeline. GStreamer can be used either to invoke the video analytics or to combine video frames whose timestamps fall within a defined window. Another option is to keep the individual video streams separate on the device and analyze or combine them later in a non-realtime manner.
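To illustrate where that timestamp comes from, the sketch below dequeues one frame with V4L2 and reads the capture timestamp that GStreamer later carries through the pipeline. It assumes /dev/video0 is already configured and streaming with memory-mapped buffers queued; the full capture setup (VIDIOC_REQBUFS, mmap, VIDIOC_STREAMON, and so on) is omitted for brevity.

```c
/*
 * Minimal sketch: reading the per-frame capture timestamp assigned by
 * the V4L2 subsystem. Assumes the device is already streaming with
 * memory-mapped buffers queued.
 */
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    /* Dequeue the next filled buffer; the driver stamps buf.timestamp
     * at capture time, and this value is what lets frames from
     * different cameras be matched. */
    if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0) {
        printf("frame %u captured at %ld.%06ld\n", buf.sequence,
               (long)buf.timestamp.tv_sec, (long)buf.timestamp.tv_usec);
        ioctl(fd, VIDIOC_QBUF, &buf); /* return the buffer to the driver */
    } else {
        perror("VIDIOC_DQBUF");
    }

    close(fd);
    return 0;
}
```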
To maximize video frame synchronization, each sensor needs to initiate start-of-frame capture at the same time. In a software-only solution, the start-of-frame capture is typically controlled by an I2C register write, so that write needs to happen as close to the same time for each sensor as possible. A hardware design that allows each CMOS image sensor to be addressed over I2C either individually or all together is a technique that can improve video frame synchronization.
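When the sensors must be addressed one at a time, the residual skew is roughly the time spanned by the consecutive register writes. The sketch below, which assumes hypothetical per-sensor addresses on a single bus, issues the writes back-to-back and measures that span with CLOCK_MONOTONIC so you can judge whether it fits your tolerance.

```c
/*
 * Minimal sketch: starting streaming on individually addressed sensors
 * back-to-back and measuring the skew from first to last I2C write.
 * Bus number, addresses, and register layout are hypothetical.
 */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const int addrs[] = { 0x34, 0x36, 0x38 }; /* hypothetical addresses */
    const int count = sizeof(addrs) / sizeof(addrs[0]);
    unsigned char cmd[3] = { 0x01, 0x00, 0x01 }; /* assumed "stream on" */
    struct timespec start, end;

    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < count; i++) {
        ioctl(fd, I2C_SLAVE, addrs[i]); /* select the next sensor */
        write(fd, cmd, sizeof(cmd));    /* issue its start-of-frame write */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    long skew_us = (end.tv_sec - start.tv_sec) * 1000000L +
                   (end.tv_nsec - start.tv_nsec) / 1000L;
    printf("start-of-frame writes spread over %ld us\n", skew_us);

    close(fd);
    return 0;
}
```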