V4L2 driver for camera sensor or capture chip
Camera Capture Drivers
RidgeRun has more than 12 years of experience creating custom Linux V4L2 drivers for embedded systems. The customer selects the hardware sensor or chip and RidgeRun creates the V4L2 driver for it. This wiki describes the services RidgeRun provides to create a V4L2 driver for your system, as well as some considerations related to time frame, documentation, hardware, etc. The Contact Us section provides information on how to reach the RidgeRun team.
V4L2 Driver
V4L2 is the official Linux kernel API to handle capture devices like camera sensors, video decoders, or FPGAs feeding video frames to the SoC. The video frames can come from component, composite, HDMI, or SDI sources, or from other video interface standards.
The V4L2 framework defines the API that a Linux camera driver must support in order to be V4L2 compliant. The Linux kernel uses the camera driver to initialize the hardware and produce video frames, and each of these operations has a specific implication for the camera sensor. Often the driver interacts with the camera sensor, receiver chip, or FPGA by reading and writing I2C or SPI registers.
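As a sketch of that register-level interaction, the helper below builds the payload of a typical sensor I2C write. It assumes a sensor with 16-bit register addresses and 8-bit values (a common but not universal layout); the register address used in the example is hypothetical, as the real map comes from the sensor datasheet:

```c
#include <stddef.h>
#include <stdint.h>

/* Many camera sensors expose 16-bit register addresses holding 8-bit
 * values; an I2C write then carries three bytes:
 * [addr high][addr low][value]. This layout is an assumption here; the
 * real one comes from the sensor datasheet. */
static size_t sensor_build_i2c_write(uint8_t *buf, uint16_t reg, uint8_t val)
{
    buf[0] = (uint8_t)(reg >> 8);   /* register address, high byte */
    buf[1] = (uint8_t)(reg & 0xFF); /* register address, low byte  */
    buf[2] = val;                   /* register value              */
    return 3;                       /* number of bytes to transfer */
}
```

In a kernel driver, a buffer like this is what ends up in an I2C message handed to the I2C core, or is hidden behind a regmap abstraction.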
Creating a Linux camera driver consists of four steps:
- Subdevice driver - camera sensor configuration via I2C, SPI, or other low-level communication to initialize the sensor and support different resolutions. RidgeRun custom drivers support one resolution; others can be added as needed.
- Device tree modification
- Capture subsystem configuration and video node creation (/dev/video):
- On NVIDIA Jetson, this involves the code needed to configure the Video Input (VI) unit to receive the video coming from the camera, plus support to capture with v4l2, libargus, and nvarguscamerasrc (YUV).
- On NXP i.MX8, this involves creating V4L2 subdevice drivers and configuring the IPU, depending on the i.MX8 variant.
- On UltraScale+, this involves adding the code to configure the VPSS to receive the video coming from the sensor; it might require some work on the PL.
- On NXP i.MX6, this is the IPU configuration.
- Application Support:
- Add support to one application, such as GStreamer or Yavta, to grab the frames available at the video node (/dev/video); sometimes this involves creating software patches to support custom color spaces.
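Once the video node exists, applications identify each available pixel format by its FourCC code. The helper below reproduces the packing used by the kernel's v4l2_fourcc() macro (the function name here is ours, chosen for illustration):

```c
#include <stdint.h>

/* A V4L2 pixel format is a FourCC: four ASCII characters packed into a
 * 32-bit little-endian value, the same layout the kernel's
 * v4l2_fourcc() macro produces. */
static uint32_t v4l2_fourcc_code(char a, char b, char c, char d)
{
    return (uint32_t)a | ((uint32_t)b << 8) |
           ((uint32_t)c << 16) | ((uint32_t)d << 24);
}
```

For example, v4l2_fourcc_code('Y', 'U', 'Y', 'V') yields 0x56595559, the code applications see when enumerating the packed YUYV 4:2:2 format.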
Camera sensor resolutions and controls
- V4L2 device drivers (for camera sensors, GMSL2, FPD-Link, etc) are developed on a time-and-materials (T&M) basis; the customer only needs to provide the driver requirements, and RidgeRun provides a quote of the estimated effort.
- RidgeRun also provides services to extend the driver to support additional controls, like auto white balance, contrast, and exposure time, if the sensor has these capabilities, as well as support for multiple sensors or chips.
- In the case of NVIDIA Jetson, RidgeRun will use the default ISP calibration. Please note that once the driver is in place, you might need to create a custom ISP calibration file for your sensor if you need to use the built-in ISP. NVIDIA gives access to the ISP calibration tools only to ODMs, so companies like D3 Engineering and Leopard Imaging can create this file for you if the default settings don't produce the expected image quality.
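Inside the driver, each such control typically clamps the requested value to the range the sensor register accepts before writing it over I2C. A minimal sketch, with hypothetical exposure limits (the real ones come from the sensor datasheet and depend on the current frame timing):

```c
#include <stdint.h>

/* Hypothetical exposure limits; the real ones come from the sensor
 * datasheet and depend on the current frame timing. */
#define SENSOR_EXPOSURE_MIN 1
#define SENSOR_EXPOSURE_MAX 4095

/* Clamp a requested control value (e.g. V4L2_CID_EXPOSURE) to the
 * range the sensor register actually accepts. */
static int32_t sensor_clamp_exposure(int32_t requested)
{
    if (requested < SENSOR_EXPOSURE_MIN)
        return SENSOR_EXPOSURE_MIN;
    if (requested > SENSOR_EXPOSURE_MAX)
        return SENSOR_EXPOSURE_MAX;
    return requested;
}
```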
Devices that need V4L2 drivers
Some of the devices that might need a V4L2 driver are:
- Camera sensors from different vendors like Sony, Aptina, Omnivision, etc
- SDI receivers like Gennum
- HDMI receivers
- GMSL or FPD Link chips to extend the physical connection of the camera to the SoC
- Composite or component decoders
- FPGA feeding video
Alternative Capture Drivers
There are a number of alternative video sources that do not expose a direct MIPI-CSIx interface; a very popular option relies on using an HDMI video source interfaced to an HDMI-to-CSIx bridge. RidgeRun has developed drivers for commonly used chips such as the following:
- Toshiba TC358743
- Toshiba TC358840
- Toshiba TC358746
- Lontium LT6911UXC
EDID Support on HDMI capture drivers
If your capture chip is an HDMI receiver, please ask RidgeRun about EDID (Extended Display Identification Data) support for your driver, because getting the chip working with your camera might require additional work due to EDID requirements. This section explains some of the work that may be required.
EDID Background
The EDID is a binary descriptor (a block of hexadecimal bytes) of the resolutions supported by the HDMI receiver, for instance the TC358840 chip, so the camera or video source knows which resolutions it can output. This EDID information is transferred over the DDC, which is an I2C-based channel [1].
The EDID descriptor has multiple revisions or versions [2], so not all cameras are able to parse every EDID version. This is why it is important to know which video source you will use. We have seen cases where our EDID works with multiple cameras but one camera rejects it, so we have to modify it until it works. Furthermore, in some cases, the camera manufacturer doesn't pay attention to the EDID and just outputs a default resolution.
One option, for instance, is to connect your cameras to a specific monitor. If all of them output 1080p60, then we could try copying the EDID from your monitor and putting it in our driver after some modifications, because your monitor's EDID will likely report many resolutions that are not supported by the TC358840.
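When hand-modifying an EDID as described above, keep the checksum consistent: every 128-byte EDID block must sum to zero modulo 256, with the last byte acting as the checksum. A minimal sketch of that rule:

```c
#include <stdint.h>

#define EDID_BLOCK_SIZE 128

/* Every 128-byte EDID block ends with a checksum byte chosen so that
 * all 128 bytes sum to 0 modulo 256. Returns 1 when the block is
 * valid. */
static int edid_block_valid(const uint8_t block[EDID_BLOCK_SIZE])
{
    uint8_t sum = 0;
    for (int i = 0; i < EDID_BLOCK_SIZE; i++)
        sum += block[i];
    return sum == 0;
}

/* Recompute the checksum (byte 127) after editing the first 127
 * bytes of the block. */
static uint8_t edid_checksum(const uint8_t block[EDID_BLOCK_SIZE])
{
    uint8_t sum = 0;
    for (int i = 0; i < EDID_BLOCK_SIZE - 1; i++)
        sum += block[i];
    return (uint8_t)(0x100 - sum);
}
```

After changing the supported resolutions in the descriptor, recompute byte 127 with a helper like edid_checksum() before placing the EDID in the driver; a source may reject a block whose checksum does not match.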
In Linux, after connecting the monitor you can read the EDID using these commands:
- Display EDID information for each display:
xrandr --props
- Display EDID information for a specific display:
cat /sys/class/drm/card0-HDMI-A-1/edid | hexdump
- Decode EDID information:
sudo apt-get install edid-decode
cat /sys/class/drm/card0-HDMI-A-1/edid | edid-decode
In Windows, there are tools to read and edit the EDID, but since there are multiple versions, not all tools will be able to decode and edit every EDID.
[1] https://en.wikipedia.org/wiki/HDMI
[2] https://www.extron.com/company/article.aspx?id=uedid
Linux V4L2 driver delivery
Once the V4L2 driver development is completed, RidgeRun provides the source code of the driver as well as a wiki page with instructions on how to compile and test the driver, normally using applications like GStreamer or Yavta, with performance measurements like ARM load and frames per second.
Documentation Required
- In order to complete the driver, RidgeRun needs access to the documentation that describes how to configure the sensor or receiver; this configuration normally happens through I2C or SPI registers, unless your driver is a V4L2 driver for an FPGA. For this reason, RidgeRun has NDAs in place with:
- Omnivision
- Maxim
- Sony
- Framos
- Aptina
- Toshiba
- Although it is not mandatory, it is useful to provide the schematics for your board to better understand how the video receivers are connected. Details like the I2C bus, MIPI CSI-2 port, parallel port, clock signals, etc., help RidgeRun engineers create your driver faster and, in some cases, detect hardware issues.
Hardware
- RidgeRun needs remote or physical access to the hardware to create and test the driver. RidgeRun assumes that there are no hardware issues that would delay the development process (and increase costs). In case of problems with your hardware, RidgeRun will bill up to 20 hours of engineering services for the time needed to determine and inform you of what is wrong.
- Once the driver work is done, the hardware is shipped back to the customer, if requested, at the customer's expense.
Time frame
- Creating a basic V4L2 driver from scratch requires 3 to 4 weeks. During this period, partial deliveries and progress updates are provided to the customer. Any situations blocking progress (like hardware issues) are also communicated to the customer.
See also
For direct inquiries, please refer to the contact information available on our Contact page. Alternatively, you may complete and submit the form provided at the same link. We will respond to your request at our earliest opportunity.