|description={{{description|With the Jetson TX1, you can capture up to 6 video streams from 6 different cameras. Learn about how synchronizing multiple cameras works now.}}}
}}
{|
|-
| {{NVIDIA Pref Partner logo and RR Contact}}
|}
With the Jetson TX1, you can capture up to 6 video streams from 6 different cameras. For video analytics, it is often important that the camera sensors capture the frames ''at the same time''. The tolerance for the difference in capture time can vary from +/- one frame down to a few nanoseconds from when the first pixel in the frame is captured, depending on the frame rate and the analytics being done.
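To put those tolerances into numbers, here is a short sketch in plain Python. The frame rate and resolution are illustrative values, and the per-pixel figure is a rough estimate that ignores blanking intervals:

```python
def frame_period_us(fps):
    """Time for one full frame, in microseconds."""
    return 1e6 / fps

def pixel_period_ns(fps, width, height):
    """Rough time per pixel, in nanoseconds (ignores blanking intervals)."""
    return 1e9 / (fps * width * height)

# At 30 fps, a +/- one frame tolerance means capture times may differ
# by roughly 33 ms; per-pixel timing at 1080p is on the order of 16 ns.
print(round(frame_period_us(30)), "us per frame")
print(round(pixel_period_ns(30, 1920, 1080), 1), "ns per pixel")
```

So the difference between the loosest and the tightest tolerance mentioned above spans roughly six orders of magnitude, which is why the required hardware effort varies so much.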
Thanks to Jürgen Stelbrink at Auvidea for much of the following hardware information, which has been paraphrased below.
Ideally, you start by choosing a sensor designed to support perfectly in-sync video capture. One sensor is put in master mode, generating the critical timing signals, and the other sensors are put in slave mode.
If your sensor of choice doesn't support a master / slave configuration, then you have to simulate such an arrangement. One camera is the master and supplies the clock to the other cameras. The clock oscillator on each slave camera is removed and the clock (signal and ground) is connected from the master to the slaves. The cameras now have a common in-sync clock source.

Next, you need to get the start-of-exposure for each camera synchronized. If the sensor doesn't support a start-of-exposure pin, then the sensor needs to be controlled via registers, typically using the I2C bus. The simplest solution is to have all the sensors connected to the same I2C bus using the same address. This is a violation of the I2C specification, but in practice it should work because the SCL clock and SDA data signals are open collector with pull-up resistors. When the same sensor is used for all cameras, the I2C response from all sensors should be the same. You may want hardware support for a way to also individually address each sensor.
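The shared-address trick can be sketched as a single broadcast write. Everything here is illustrative: the sensor address, register number, and trigger value are placeholders rather than registers of any real sensor, and the bus object is assumed to expose an SMBus-style <code>write_i2c_block_data</code> method (as the <code>smbus2</code> package provides on Linux):

```python
SENSOR_ADDR = 0x10    # hypothetical 7-bit I2C address shared by all sensors
REG_TRIGGER = 0x3000  # hypothetical 16-bit start-of-exposure register
TRIGGER_VAL = 0x01    # hypothetical value that starts an exposure

def trigger_all_sensors(bus):
    """Issue one write on the shared I2C bus.

    Because every sensor answers at SENSOR_ADDR, a single transaction
    reaches all of them, so they start their exposure together."""
    reg_hi, reg_lo = REG_TRIGGER >> 8, REG_TRIGGER & 0xFF
    bus.write_i2c_block_data(SENSOR_ADDR, reg_hi, [reg_lo, TRIGGER_VAL])
```

On a real system <code>bus</code> would be something like <code>smbus2.SMBus(1)</code>; the point of the shared address is that this one transaction replaces per-sensor writes, which would otherwise be separated by the bus transfer time of each write.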