Sony IMX219 Linux driver
Problems running the pipelines shown on this page? Please see our GStreamer Debugging guide for help.
Sony IMX219 image sensor features
The Sony IMX219 is a CMOS image sensor with the following features:
- CSI-2 serial data output (selectable 4 or 2 lanes)
- Max. 30 frame/s in all-pixel scan mode
- 180 frame/s @ 720p with 2x2 analog (special) binning, 60 frame/s @ 1080p with V-crop
- Data rate: max. 722 Mbps/lane (4-lane mode), 912 Mbps/lane (2-lane mode)
- Max. resolution of 3280 (H) x 2464 (V), approx. 8.08 megapixels
RidgeRun has developed a driver for the Jetson TX1 platform with the following support:
- L4T 24.2 and Jetpack 2.3
- V4L2 media controller driver
- Tested resolution 3280 x 2464 @ 15 fps
- Tested resolution 1080p @ 47 fps
- Tested resolution 720p @ 78 fps
- Tested resolution 1640x1232 @ 30 fps
- Tested resolution 820x616 @ 30 fps
- Tested with J100 and J20 Auvidea boards.
- Capture with v4l2src and also with nvcamerasrc using the ISP.
Enabling the driver
To use this driver, you have to patch and compile the kernel source. There are two ways to do it:
Using RidgeRun SDK
Through the SDK you can easily patch the kernel and generate an image with the changes required to get the IMX219 sensor working. The Getting Started Guide for Tegra X1 Jetson wiki contains all the information required to build a Jetson TX1 SDK from scratch.
In order to add the IMX219 driver, follow these steps:
- Go to your SDK directory
- Go to the kernel directory
- Copy the following patches into the patches directory:
0001-add-imx219-subdevice-driver.patch
0002-add-imx219-dtb.patch
If you are going to use the driver with the J20 board, you will also need to copy the following patch:
0003-add-j20-board-driver.patch
- Modify the series file in the kernel directory, adding the three patches above, as shown in the sketch below.
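For reference, this is a sketch of what the series file would contain after this step, assuming a standard quilt-style series file that lists one patch name per line in application order:

0001-add-imx219-subdevice-driver.patch
0002-add-imx219-dtb.patch
0003-add-j20-board-driver.patch

Include the last line only if you copied the J20 patch.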
- Run make config and select the IMX219 in the Kernel Configuration like this:
-> Kernel Configuration -> Device Drivers -> Multimedia support -> Encoders, decoders, sensors and other helper chips -> <*> IMX219 camera sensor support
If you are going to use the driver with the J20 board, you also need to select the J20 support in the Kernel Configuration like this:
-> Kernel Configuration -> Device Drivers -> Multimedia support -> Encoders, decoders, sensors and other helper chips -> <*> Auvidea J20 Expansion board
- Then build the SDK and install it following the Getting Started Guide mentioned above.
Using Jetpack
- Follow the instructions in Downloading the code to get the kernel source code.
- Once you have the source code, apply the following two patches, if you haven't already, to fix kernel errors during compilation:
kernel_r7_asm.patch
logical_comparison.patch
- Apply the driver patches:
0001-add-imx219-subdevice-driver.patch
0002-add-imx219-dtb.patch
If you are going to use the driver with the J20 board, you will also need to apply the following patch:
0003-add-j20-board-driver.patch
You can apply the patches with the command:
quilt push -a
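For reference, a sketch of the full quilt flow, assuming the patches were copied into a patches directory at the root of the kernel source tree and listed in patches/series (the /path/to/ placeholders stand in for wherever you downloaded the patches; adjust to match your tree):

# From the kernel source directory: put the patches in place
mkdir -p patches
cp /path/to/0001-add-imx219-subdevice-driver.patch patches/
cp /path/to/0002-add-imx219-dtb.patch patches/
# List the patch names in patches/series, one per line, then apply them all
quilt push -a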
- Follow the instructions in Build Kernel for building the kernel, and then flash the image.
Make sure to enable IMX219 driver support:
make menuconfig
-> Device Drivers -> Multimedia support -> Encoders, decoders, sensors and other helper chips -> <*> IMX219 camera sensor support
If you are going to use the driver with the J20 board, also make sure to enable the J20 support:
make menuconfig
-> Device Drivers -> Multimedia support -> Encoders, decoders, sensors and other helper chips -> <*> Auvidea J20 Expansion board
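After flashing, you can quickly confirm that the driver probed correctly. This is a minimal check, assuming v4l-utils is installed on the board; the exact kernel log wording depends on your setup:

# Look for the sensor probe messages in the kernel log
dmesg | grep -i imx219
# List the video devices registered with V4L2
v4l2-ctl --list-devices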
Using the driver
GStreamer examples
The GStreamer version distributed with Jetpack supports only Bayer RAW8, not RAW10, so GStreamer needs to be patched in order to capture using v4l2src. Follow the steps in the following wiki page to add support for RAW10:
http://developer.ridgerun.com/wiki/index.php?title=Compile_gstreamer_on_tegra_X1
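Before patching, you can confirm that the driver actually exposes the 10-bit Bayer format with v4l2-ctl, from the v4l-utils package (the device node may differ on your system):

v4l2-ctl -d /dev/video0 --list-formats-ext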
Important note: when you are accessing the board through serial or SSH and you want to run a pipeline that displays with autovideosink, nveglglessink, xvimagesink or any other video sink, you have to run your pipeline with DISPLAY=:0 at the beginning of the description:
DISPLAY=:0 gst-launch-1.0 ...
Snapshots
In order to check the snapshots, you can use the following tool:
https://github.com/jdthomas/bayer2rgb
So, run the following commands to download the tool and compile it:
git clone https://github.com/jdthomas/bayer2rgb.git
cd bayer2rgb
make
sudo cp bayer2rgb /usr/bin/
bayer2rgb converts naked (headerless) Bayer grid data into RGB data. It offers several choices of interpolation (though they all look essentially the same), can output TIFF files, and can integrate with ImageMagick to output other formats.
- 3280x2464
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=3280, height=2464" ! filesink location=test_3280x2464.bayer
./bayer2rgb --input=test_3280x2464.bayer --output=data.tiff --width=3296 --height=2464 --bpp=16 --first=RGGB --method=BILINEAR --tiff
Note that the width passed to bayer2rgb is the padded line width, not the capture width: the capture hardware pads each line out to a 64-byte boundary, so 3280 becomes 3296 here (and, below, 1640 becomes 1664 and 820 becomes 832; 1920 and 1280 are already aligned).
Use ImageMagick to convert the TIFF to PNG:
convert data.tiff data.png
- 1920x1080
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=1920, height=1080" ! filesink location=test_1920x1080.bayer
Check the snapshot with:
./bayer2rgb --input=test_1920x1080.bayer --output=data.tiff --width=1920 --height=1080 --bpp=16 --first=RGGB --method=BILINEAR --tiff
Use ImageMagick to convert the TIFF to PNG:
convert data.tiff data.png
- 1280x720
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=1280, height=720" ! filesink location=test_1280x720.bayer
Check the snapshot with:
./bayer2rgb --input=test_1280x720.bayer --output=data.tiff --width=1280 --height=720 --bpp=16 --first=RGGB --method=BILINEAR --tiff
Use ImageMagick to convert the TIFF to PNG:
convert data.tiff data.png
- 1640x1232 (Binning x2)
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=1640, height=1232" ! filesink location=test_1640x1232.bayer
Check the snapshot with:
./bayer2rgb --input=test_1640x1232.bayer --output=data.tiff --width=1664 --height=1232 --bpp=16 --first=RGGB --method=BILINEAR --tiff
Use ImageMagick to convert the TIFF to PNG:
convert data.tiff data.png
- 820x616 (Binning x4)
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=820, height=616" ! filesink location=test_820x616.bayer
Check the snapshot with:
./bayer2rgb --input=test_820x616.bayer --output=data.tiff --width=832 --height=616 --bpp=16 --first=RGGB --method=BILINEAR --tiff
Use ImageMagick to convert the TIFF to PNG:
convert data.tiff data.png
Capture
V4l2src
You can use the raw2rgbpnm tool to check all the buffers:
https://github.com/martinezjavier/raw2rgbpnm
So, run the following commands to download the tool and compile it:
git clone https://github.com/martinezjavier/raw2rgbpnm.git
cd raw2rgbpnm
Open the file raw2rgbpnm.c and change line 489 to:
int c = getopt(argc, argv, "a:b:f:ghs:wn");
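If you prefer to script the edit, a one-liner like the following overwrites line 489 with the new getopt call (this assumes line 489 is still the getopt line in the revision you cloned; verify before running):

sed -i '489s/.*/    int c = getopt(argc, argv, "a:b:f:ghs:wn");/' raw2rgbpnm.c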
This is to enable the option to extract multiple frames from a file. Now, you can build the application:
make
This tool converts from GRBG10 to PNM. The IMX219 capture uses RGGB, so you will see that the colors in the output images are wrong.
In order to capture 10 buffers and save them in a file, you can run the following pipelines:
- 3280x2464
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=3280, height=2464" ! filesink location=test_3280x2464.bayer
Check the buffers with:
./raw2rgbpnm -f SGRBG10 -s 3296x2464 -b 5.0 -n test_3280x2464.bayer output_3280x2464
- 1920x1080
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=1920, height=1080" ! filesink location=test_1920x1080.bayer
Check the buffers with:
./raw2rgbpnm -f SGRBG10 -s 1920x1080 -b 5.0 -n test_1920x1080.bayer output_1920x1080
- 1280x720
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=1280, height=720" ! filesink location=test_1280x720.bayer
Check the buffers with:
./raw2rgbpnm -f SGRBG10 -s 1280x720 -b 5.0 -n test_1280x720.bayer output_1280x720
- 1640x1232 (Binning x2)
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=1640, height=1232" ! filesink location=test_1640x1232.bayer
Check the buffers with:
./raw2rgbpnm -f SGRBG10 -s 1664x1232 -b 5.0 -n test_1640x1232.bayer output_1640x1232
- 820x616 (Binning x4)
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=820, height=616" ! filesink location=test_820x616.bayer
Check the buffers with:
./raw2rgbpnm -f SGRBG10 -s 832x616 -b 5.0 -n test_820x616.bayer output_820x616
Nvcamerasrc
- 3280x2464
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="21 21" ! 'video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, format=(string)I420, framerate=(fraction)21/1' ! autovideosink
- 1920x1080
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! autovideosink
- 1640x1232
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)30/1' ! autovideosink
- 1280x720
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! autovideosink
- 820x616
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)820, height=(int)616, format=(string)I420, framerate=(fraction)30/1' ! autovideosink
Dual Capture
Using the following pipelines we can test the performance of the Jetson TX1 when doing dual capture:
V4l2src
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-bayer,format=rggb,width=3280,height=2464' ! fakesink v4l2src device=/dev/video3 ! 'video/x-bayer,format=rggb,width=3280,height=2464' ! fakesink
We noticed that when using two cameras at the maximum resolution of 3280x2464, the CPU load measured with tegrastats doesn't change considerably; it remains almost the same:
- Tegrastats in normal operation:
RAM 563/3994MB (lfb 748x4MB) cpu [51%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [50%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [50%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [51%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [49%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [53%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [52%,0%,0%,0%]@403 GR3D 0%@76 EDP limit 0
- Tegrastats with the above pipeline running:
RAM 608/3994MB (lfb 651x4MB) cpu [57%,3%,0%,0%]@825 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [52%,29%,0%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [56%,0%,1%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [60%,1%,1%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [57%,1%,0%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [58%,0%,1%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [58%,1%,0%,0%]@403 GR3D 0%@76 EDP limit 0
Nvcamerasrc
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="21 21" ! 'video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, format=(string)I420, framerate=(fraction)21/1' ! fakesink \
nvcamerasrc sensor-id=2 fpsRange="21 21" ! 'video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, format=(string)I420, framerate=(fraction)21/1' ! fakesink
These tests were done using the J20 board from Auvidea; see the Getting started guide for Auvidea J20 board.
Dual capture and dual display
DISPLAY=:0 gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)21/1' ! nvegltransform ! nveglglessink \
nvcamerasrc sensor-id=2 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)21/1' ! nvegltransform ! nveglglessink -e
Video Encoding Transport Stream 1640x1232@30fps
CAPS="video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)30/1"
gst-launch-1.0 nvcamerasrc sensor-id=1 fpsRange="30 30" num-buffers=500 ! capsfilter caps="$CAPS" ! omxh264enc ! mpegtsmux ! filesink location=test.ts
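To verify the recording, you can play the resulting transport stream back with playbin; run this on the board with a display attached, from the directory where the pipeline above was run:

DISPLAY=:0 gst-launch-1.0 playbin uri=file://$PWD/test.ts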
Snapshots
gst-launch-1.0 -v nvcamerasrc sensor-id=1 fpsRange="30 30" num-buffers=100 ! 'video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)30/1' ! multifilesink location=test_%d.yuv
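Each output file is a single raw I420 frame. As a quick way to inspect one, you can convert it to PNG, for example with ffmpeg if it is installed (a sketch assuming the frames are written without extra line padding; if the image shows diagonal tearing, the buffer stride differs from the width):

ffmpeg -f rawvideo -pix_fmt yuv420p -s 1640x1232 -i test_0.yuv test_0.png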
See also
For direct inquiries, please refer to the contact information available on our Contact page. Alternatively, you may complete and submit the form provided at the same link. We will respond to your request at our earliest opportunity.