Synchronizing Multiple Cameras

With the NVIDIA Jetson Tegra X1 processor, you can capture up to 6 video streams from 6 different cameras. For video analytics, it is often important that the camera sensors capture the frames at the same time. The tolerance for the difference in capture time can range from +/- one frame down to a few nanoseconds from when the first pixel in the frame is captured, depending on the frame rate and the analytics being performed.

Hardware Synchronization

Thanks to Jürgen Stelbrink at Auvidea for much of the following hardware information, which has been paraphrased below.

Ideally, you start by choosing a sensor designed to support perfectly in-sync video capture. One sensor is put in master mode, generating the critical timing signals, and the other sensors are put in slave mode.
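How this mode selection is exposed is sensor- and driver-specific. As a minimal sketch only, assuming a hypothetical driver that exports the master/slave choice as a custom V4L2 control (the control ID, its values, and the /dev/video* node names below are placeholders, not a real API), the configuration could look like this:

<pre>
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical custom control: real drivers name and number this differently,
 * if they expose a synchronization mode at all. */
#define V4L2_CID_SYNC_MODE (V4L2_CID_USER_BASE + 0x1000)
enum { SYNC_MODE_MASTER = 1, SYNC_MODE_SLAVE = 2 };

static int set_sync_mode(const char *dev, int mode)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0) {
        perror(dev);
        return -1;
    }

    struct v4l2_control ctrl;
    memset(&ctrl, 0, sizeof(ctrl));
    ctrl.id = V4L2_CID_SYNC_MODE;
    ctrl.value = mode;

    int ret = ioctl(fd, VIDIOC_S_CTRL, &ctrl);
    if (ret < 0)
        perror("VIDIOC_S_CTRL");

    close(fd);
    return ret;
}

int main(void)
{
    /* Placeholder device nodes: one master, the rest slaves. */
    set_sync_mode("/dev/video0", SYNC_MODE_MASTER);
    set_sync_mode("/dev/video1", SYNC_MODE_SLAVE);
    set_sync_mode("/dev/video2", SYNC_MODE_SLAVE);
    return 0;
}
</pre>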

If your sensor of choice doesn't support a master/slave configuration, then you have to simulate such an arrangement. One camera acts as the master and supplies the clock to the other cameras: the clock oscillator on each slave camera is removed and the clock (signal and ground) is routed from the master to the slaves, so all cameras share a common in-sync clock source.

Next, you need to get the start of exposure for each camera synchronized. If the sensor doesn't provide a start-of-exposure pin, then the sensor needs to be controlled via registers, typically over the I2C bus. The simplest solution is to connect all the sensors to the same I2C bus using the same address. This violates the I2C specification, but in practice it should work because the SCL clock and SDA data signals are open collector with pull-up resistors. When the same sensor model is used for all cameras, the I2C response from every sensor should be identical. You may also want hardware support for addressing each sensor individually.
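The broadcast start-of-streaming write could look roughly like the sketch below. It assumes a hypothetical sensor register map (16-bit register address, 8-bit mode-select register), a placeholder I2C bus number, and a placeholder shared 7-bit address; check your sensor's datasheet for the real values.

<pre>
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical register map: many sensors use a 16-bit register address and
 * an 8-bit mode-select register that starts streaming when written to 1. */
#define REG_MODE_SELECT 0x0100
#define MODE_STREAMING  0x01

static int sensor_write_reg(int fd, uint16_t reg, uint8_t val)
{
    uint8_t buf[3] = { reg >> 8, reg & 0xff, val };
    return (write(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf)) ? 0 : -1;
}

int main(void)
{
    /* Bus number (i2c-6) and shared 7-bit address (0x10) are placeholders. */
    int fd = open("/dev/i2c-6", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* All sensors answer at the same address, so one transaction reaches them
     * all at once. I2C_SLAVE_FORCE may be needed if a kernel driver already
     * claims the address. */
    if (ioctl(fd, I2C_SLAVE_FORCE, 0x10) < 0) { perror("I2C_SLAVE_FORCE"); return 1; }

    /* Single start-of-streaming write, latched by every sensor on the bus. */
    if (sensor_write_reg(fd, REG_MODE_SELECT, MODE_STREAMING) != 0) {
        perror("write");
        return 1;
    }

    close(fd);
    return 0;
}
</pre>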

Most important is that all cameras share the same clock. Otherwise, the cameras will drift apart by the tolerance of their crystal oscillators, typically 50 ppm, which works out to roughly one frame of offset every 20,000 frames at 1080P60.
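As a quick sanity check of that figure (assuming a 60 fps frame period and the 50 ppm oscillator tolerance mentioned above):

<pre>
frame period at 60 fps      = 1/60 s ≈ 16.7 ms
worst-case relative drift   = 50 ppm = 50 x 10^-6
frames until one-frame skew = 1 / (50 x 10^-6) = 20,000 frames
time until one-frame skew   = 20,000 / 60 ≈ 333 s ≈ 5.5 minutes
</pre>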

Software Synchronization

At the software level, each frame is given a timestamp by the Linux kernel V4L2 subsystem, so tracking each frame and matching frames taken at the same time by different cameras is easy. The timestamp associated with each frame is maintained by GStreamer all the way through the pipeline. GStreamer can be used either to invoke the video analytics or to combine video frames whose timestamps fall within a defined window. Another option is to keep the individual video streams separate on the device and analyze or combine them later in a non-real-time manner.
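As an illustration of the matching-window idea, the sketch below pulls buffers from two capture branches through appsink elements and compares their presentation timestamps (PTS). The pipeline description (nvcamerasrc and its sensor-id property come from the Tegra X1 BSP), the element names cam0/cam1, and the 5 ms window are assumptions to adapt to your setup.

<pre>
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Assumed matching window: frames whose PTS differ by less than 5 ms are
 * treated as captured "at the same time". */
#define MATCH_WINDOW_NS (5 * GST_MSECOND)

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Two independent capture branches in one pipeline; adjust the source
     * element and caps to match your BSP. */
    GstElement *pipeline = gst_parse_launch(
        "nvcamerasrc sensor-id=0 ! video/x-raw(memory:NVMM) ! appsink name=cam0 "
        "nvcamerasrc sensor-id=1 ! video/x-raw(memory:NVMM) ! appsink name=cam1",
        NULL);
    if (!pipeline)
        return 1;

    GstElement *sink0 = gst_bin_get_by_name(GST_BIN(pipeline), "cam0");
    GstElement *sink1 = gst_bin_get_by_name(GST_BIN(pipeline), "cam1");

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    for (int i = 0; i < 100; i++) {
        GstSample *s0 = gst_app_sink_pull_sample(GST_APP_SINK(sink0));
        GstSample *s1 = gst_app_sink_pull_sample(GST_APP_SINK(sink1));
        if (!s0 || !s1)
            break;

        /* Compare the per-frame timestamps carried through the pipeline. */
        GstClockTime t0 = GST_BUFFER_PTS(gst_sample_get_buffer(s0));
        GstClockTime t1 = GST_BUFFER_PTS(gst_sample_get_buffer(s1));
        GstClockTimeDiff delta = GST_CLOCK_DIFF(t0, t1);

        g_print("frame %3d: delta = %" G_GINT64_FORMAT " ns -> %s\n", i, delta,
                (ABS(delta) <= MATCH_WINDOW_NS) ? "matched" : "out of window");

        gst_sample_unref(s0);
        gst_sample_unref(s1);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink0);
    gst_object_unref(sink1);
    gst_object_unref(pipeline);
    return 0;
}
</pre>

Build against the gstreamer-1.0 and gstreamer-app-1.0 packages (for example, via pkg-config --cflags --libs gstreamer-app-1.0).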