OmniVision OS08A10 Linux driver

Problems running the pipelines shown on this page? Please see our GStreamer Debugging guide for help.


Driver List Information
Refer to RidgeRun Linux Camera Drivers for the complete list of available drivers.


OmniVision OS08A10 image sensor features

The OmniVision OS08A10 is an image sensor with the following features:

  • 2 µm x 2 µm pixel
  • Optical size of 1/1.8"
  • Programmable controls for:
    • Frame rate
    • Mirror and flip
    • Cropping
    • Windowing
  • Supports output formats:
    • 12-/10-bit RGB RAW
  • Supports image sizes:
    • 4K2K (3840x2160)
    • 2560 x 1440
    • 1080p (1920x1080)
    • 720p (1280x720)
  • Supports 2x2 binning
  • Standard serial SCCB interface
  • 12-bit ADC
  • Up to 4-lane MIPI/LVDS serial output interface (supports maximum speed up to 1500 Mbps/lane)
  • 2-exposure staggered HDR support
  • Programmable I/O drive capability
  • Light sensing mode (LSM)
  • PLL with SCC support
  • Support for FSIN

RidgeRun has developed a driver for the Jetson TX1 platform with the following support:

  • L4T 28.2.1 and JetPack 3.3
  • V4L2 Media Controller driver (a basic v4l2-ctl capture sketch is shown after this list)
  • Tested resolution: 3840 x 2160 @ 30 fps
  • Capture with nvcamerasrc using the ISP
  • Three cameras capturing at the same time
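
As a quick sanity check of the V4L2 Media Controller driver, a raw capture can be attempted directly with v4l2-ctl, independently of the ISP path used in the GStreamer examples below. This is a minimal sketch, assuming the sensor registers as /dev/video0; adjust the device node for your setup, and note that the bypass_mode control (commonly exposed by Jetson capture drivers) may not be needed on every configuration:

# Capture 100 RAW frames at 3840x2160 and store them in frame.raw
v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160 \
         --set-ctrl bypass_mode=0 \
         --stream-mmap --stream-count=100 --stream-to=frame.raw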

Enabling the OmniVision OS08A10 Linux driver

In order to use this driver, you have to patch and compile the kernel source using JetPack:

  • Once you have the source code, apply the following patch to add the changes required for the os08a10 cameras at the kernel and device tree (DTB) level.
3.2.1_os08a10.patch
  • Follow the instructions in Build Kernel to build the kernel, and then flash the image.

Make sure to enable os08a10 driver support:

make menuconfig
-> Device Drivers                                                                                                                        
  -> Multimedia support                                                                                           
    -> Encoders, decoders, sensors and other helper chips
       -> <*> OS08A10 camera sensor support
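
After flashing the board, you can verify that the driver probed correctly before running any pipeline. The following checks are a minimal sketch, assuming the sensor prints its name in the kernel log and registers a /dev/video node:

# Look for the sensor probe messages in the kernel log
dmesg | grep -i os08a10

# List the video devices registered on the system
v4l2-ctl --list-devices

# Inspect the formats exposed by the sensor node (adjust /dev/video0 as needed)
v4l2-ctl -d /dev/video0 --list-formats-ext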

Using the driver

Gstreamer examples

Important Note: When you access the board through serial or SSH and you want to run a pipeline that displays with autovideosink, nveglglessink, xvimagesink or any other video sink, you have to prepend DISPLAY=:0 to the pipeline description:

DISPLAY=:0 gst-launch-1.0 ...

Capture

Nvcamerasrc
  • 3840x2160
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! autovideosink

Dual Capture

Using the following pipeline, we can test the performance of the Jetson TX2 when doing dual capture:

Nvcamerasrc
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! \
fakesink nvcamerasrc sensor-id=1 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! \
 fakesink

We noticed that, with two cameras capturing at the maximum resolution of 3840x2160, the CPU load measured with tegrastats stays moderate, at roughly 26-33% per active core, while the cores remain at 345 MHz; compare the idle baseline with the load while the pipeline runs:

  • Tegrastats in normal operation:
RAM 2037/7847MB (lfb 1268x4MB) CPU [0%@345,off,off,0%@345,0%@345,0%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [3%@345,off,off,1%@345,2%@345,0%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [6%@345,off,off,0%@345,2%@345,1%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [0%@345,off,off,4%@345,0%@345,0%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [6%@345,off,off,3%@345,3%@345,2%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [2%@345,off,off,0%@345,1%@345,3%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [1%@345,off,off,0%@345,2%@345,1%@345]
  • Tegrastats with the above pipeline running:
RAM 1999/7847MB (lfb 1305x4MB) CPU [31%@345,off,off,29%@345,26%@345,26%@345] 
RAM 1999/7847MB (lfb 1305x4MB) CPU [28%@345,off,off,31%@345,26%@345,30%@345] 
RAM 1999/7847MB (lfb 1305x4MB) CPU [27%@345,off,off,28%@345,28%@345,33%@345] 
RAM 2000/7847MB (lfb 1305x4MB) CPU [26%@345,off,off,29%@345,26%@345,26%@345] 
RAM 2000/7847MB (lfb 1305x4MB) CPU [26%@345,off,off,29%@345,30%@345,28%@345] 
RAM 2000/7847MB (lfb 1305x4MB) CPU [30%@345,off,off,28%@345,28%@345,27%@345] 
RAM 2001/7847MB (lfb 1304x4MB) CPU [32%@345,off,off,22%@345,27%@345,28%@345] 
RAM 2001/7847MB (lfb 1304x4MB) CPU [33%@345,off,off,26%@345,26%@345,28%@345] 
RAM 2001/7847MB (lfb 1304x4MB) CPU [30%@345,off,off,26%@345,29%@345,22%@345] 
RAM 2001/7847MB (lfb 1304x4MB) CPU [28%@345,off,off,29%@345,27%@345,28%@345] 
RAM 2001/7847MB (lfb 1304x4MB) CPU [29%@345,off,off,33%@345,28%@345,26%@345]

Three cameras

Using the following pipeline, we can test the performance of the Jetson TX2 when capturing from three cameras:

Nvcamerasrc
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, \
framerate=(fraction)30/1' ! fakesink nvcamerasrc sensor-id=1 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, \
format=(string)I420, framerate=(fraction)30/1' ! fakesink nvcamerasrc sensor-id=2 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, \
height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! fakesink

We noticed that, with three cameras capturing at the maximum resolution of 3840x2160, the CPU load measured with tegrastats stays moderate, at roughly 36-45% per active core, while the cores remain at 345 MHz; compare the idle baseline with the load while the pipeline runs:

  • Tegrastats in normal operation:
RAM 2037/7847MB (lfb 1268x4MB) CPU [0%@345,off,off,0%@345,0%@345,0%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [3%@345,off,off,1%@345,2%@345,0%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [6%@345,off,off,0%@345,2%@345,1%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [0%@345,off,off,4%@345,0%@345,0%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [6%@345,off,off,3%@345,3%@345,2%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [2%@345,off,off,0%@345,1%@345,3%@345] 
RAM 2037/7847MB (lfb 1268x4MB) CPU [1%@345,off,off,0%@345,2%@345,1%@345]
  • Tegrastats with the above pipeline running:
RAM 2685/7855MB (lfb 1135x4MB) CPU [41%@345,off,off,38%@345,40%@345,43%@345]
RAM 2686/7855MB (lfb 1135x4MB) CPU [44%@345,off,off,43%@345,39%@345,40%@345]
RAM 2687/7855MB (lfb 1133x4MB) CPU [44%@345,off,off,41%@345,40%@345,41%@345]
RAM 2687/7855MB (lfb 1133x4MB) CPU [45%@345,off,off,43%@345,43%@345,36%@345]
RAM 2688/7855MB (lfb 1133x4MB) CPU [43%@345,off,off,39%@345,43%@345,42%@345]
RAM 2689/7855MB (lfb 1133x4MB) CPU [43%@345,off,off,41%@345,38%@345,41%@345]
RAM 2689/7855MB (lfb 1133x4MB) CPU [45%@345,off,off,40%@345,38%@345,41%@345]

Dual capture and dual display

DISPLAY=:0 gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, \
framerate=(fraction)21/1' ! nvegltransform ! nveglglessink nvcamerasrc sensor-id=2 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1640, \
height=(int)1232, format=(string)I420, framerate=(fraction)21/1' ! nvegltransform ! nveglglessink -e

Video Encoding Transport Stream 3840x2160@30fps

CAPS="video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1"

gst-launch-1.0 nvcamerasrc sensor-id=1 fpsRange="30 30" num-buffers=500 ! capsfilter caps="$CAPS" ! omxh264enc ! \
               mpegtsmux ! filesink location=test.ts
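
To verify the recording, the transport stream can be played back on the board. The following is a minimal sketch that decodes test.ts with the hardware decoder and displays it with nveglglessink (the same sink used in the examples above); it assumes an X display is available:

DISPLAY=:0 gst-launch-1.0 filesrc location=test.ts ! tsdemux ! h264parse ! omxh264dec ! \
               nvegltransform ! nveglglessink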

Snapshots

gst-launch-1.0 -v nvcamerasrc sensor-id=1 fpsRange="30 30" num-buffers=10 ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, \
framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! multifilesink location=test_%d.yuv
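
Each test_%d.yuv file contains one raw I420 frame. As an optional post-processing step, a snapshot can be converted to PNG with GStreamer; this is a minimal sketch that assumes a 3840x2160 I420 frame, so the blocksize is 3840 x 2160 x 1.5 = 12441600 bytes:

gst-launch-1.0 filesrc location=test_0.yuv blocksize=12441600 num-buffers=1 ! \
               'video/x-raw, format=(string)I420, width=(int)3840, height=(int)2160, framerate=(fraction)1/1' ! \
               videoconvert ! pngenc ! filesink location=test_0.png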

Framerate

Using the following pipeline, we measured the framerate for single capture with the perf element:

gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! perf  ! fakesink
Setting pipeline to PAUSED ...

Available Sensor modes : 
3840 x 2160 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=12 DynPixelBitDepth=12
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 0 WxH = 3840x2160 FrameRate = 30.000000 ...

GST-PERF INFO -->  Timestamp: 0:16:43.009877206; Bps: 0; fps: 0.0 
GST-PERF INFO -->  Timestamp: 0:16:44.011800118; Bps: 807; fps: 34.96 
GST-PERF INFO -->  Timestamp: 0:16:45.039982896; Bps: 785; fps: 34.4 
GST-PERF INFO -->  Timestamp: 0:16:46.068519581; Bps: 785; fps: 34.4 
GST-PERF INFO -->  Timestamp: 0:16:47.097418847; Bps: 785; fps: 34.4 
GST-PERF INFO -->  Timestamp: 0:16:48.125239768; Bps: 786; fps: 34.7 
GST-PERF INFO -->  Timestamp: 0:16:49.153094886; Bps: 786; fps: 34.7 
GST-PERF INFO -->  Timestamp: 0:16:50.181542408; Bps: 785; fps: 34.4 
GST-PERF INFO -->  Timestamp: 0:16:51.211657663; Bps: 784; fps: 33.98 
GST-PERF INFO -->  Timestamp: 0:16:52.236531858; Bps: 789; fps: 34.17 
GST-PERF INFO -->  Timestamp: 0:16:53.239527665; Bps: 806; fps: 33.93 
GST-PERF INFO -->  Timestamp: 0:16:54.266731117; Bps: 786; fps: 34.7 
GST-PERF INFO -->  Timestamp: 0:16:55.293416318; Bps: 787; fps: 34.11 
GST-PERF INFO -->  Timestamp: 0:16:56.321500133; Bps: 785; fps: 34.4 
GST-PERF INFO -->  Timestamp: 0:16:57.350354047; Bps: 785; fps: 34.4 
GST-PERF INFO -->  Timestamp: 0:16:58.377735892; Bps: 786; fps: 34.7 
GST-PERF INFO -->  Timestamp: 0:16:59.380256442; Bps: 806; fps: 33.93 
GST-PERF INFO -->  Timestamp: 0:17:00.406086717; Bps: 788; fps: 34.14 
GST-PERF INFO -->  Timestamp: 0:17:01.433547314; Bps: 786; fps: 34.7 
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:19.198055964
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

The results show a constant framerate of approximately 34 fps when capturing with nvcamerasrc, which passes the frames through the ISP to convert from Bayer to YUV.
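
If the perf element is not installed on your filesystem, fpsdisplaysink (from gst-plugins-bad) can give a comparable framerate measurement; this is an alternative sketch, not the pipeline used for the results above:

gst-launch-1.0 -v nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! \
               fpsdisplaysink text-overlay=false video-sink=fakesink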

Jetson OS08A10 glass-to-glass latency

The glass-to-glass latency for the OS08A10 camera is around 130 ms. The image below is one of the pictures taken while the latency was measured: the left chronometer is the image captured by the camera, and the right chronometer is the real-time one that the camera was capturing at that moment.

OS08A10 Glass to Glass latency.
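
As a reference, a minimal display pipeline of the kind typically used for a glass-to-glass test (camera pointed at an on-screen chronometer) would look like the following sketch; this is not necessarily the exact pipeline used for the measurement above:

DISPLAY=:0 gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! \
               nvegltransform ! nveglglessink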


For direct inquiries, please refer to the contact information available on our Contact page. Alternatively, you may complete and submit the form provided at the same link. We will respond to your request at our earliest opportunity.

