FPGA Image Signal Processor - Introduction - Overview








Overview

The FPGA Image Signal Processor (FPGA ISP) project is an extension of the V4L2 FPGA project, which lets you communicate with FPGA accelerators through standard V4L2 devices and take advantage of GStreamer capabilities. With FPGA ISP, you can build image processing pipelines tailored to your application, reducing the CPU load and freeing the CPU to perform other important tasks.

FPGA ISP offers video processing accelerators commonly found in image signal processors, such as demosaicing, histogram equalization, auto white balancing, and color space conversion. You can also couple your own accelerators to FPGA ISP. This allows you to connect FPGA ISP directly to your camera, preprocess the image, and send the final result to your CPU, reducing the transmission overhead and delivering an image that is ready to use.

Our core is Xilinx High-Level Synthesis (HLS), a powerful framework that lets us deliver complex image processing solutions faster than implementing them in RTL with Verilog or VHDL. It also makes it easier for you to adapt FPGA ISP to your needs, shortening time to market while exploiting the potential of FPGAs.

Application examples

Depending on your application and hardware setup, you can use FPGA ISP in frame grabber or filter mode. Let's have a look at the possibilities.

FPGA as an accelerator

The FPGA can work as an accelerator that receives data from the CPU/memory and sends it back already processed and ready to use. For instance, consider a sensor connected to our platform through MIPI that delivers data in RAW8, while your application needs a more useful format such as RGBA. With the debayer module you can implement the following architecture:

In this case, we assume GStreamer is doing the job. The RAW image is captured by the sensor and sent to the embedded system, which forwards the incoming RAW image to the accelerator through a V4L2-Sink (v4l2sink) element from the V4L2 plug-ins. This element receives the RAW data and transmits it to the FPGA device over PCIe, with the help of RidgeRun's V4L2-FPGA driver.

After receiving the RAW data, the FPGA performs the demosaicing of the picture and sends it back to the embedded system in RGBA format. Another GStreamer pipeline retrieves the received RGBA data, ready to use, through a V4L2-Source (v4l2src) element for other applications such as GstInference, Gst-QT-Overlay, and others.
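
As a rough illustration of how these two pipelines could be launched from application code (the device nodes, caps, and resolution below are placeholders that depend on your board and on how the V4L2-FPGA driver enumerates its devices), a minimal GStreamer C/C++ program might look like:

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Pipeline that feeds RAW8 Bayer frames from the capture device into
     * the V4L2-FPGA device exposed by the driver (paths are examples) */
    GstElement *to_fpga = gst_parse_launch(
        "v4l2src device=/dev/video0 ! "
        "video/x-bayer,format=rggb,width=1280,height=720 ! "
        "v4l2sink device=/dev/video1", NULL);

    /* Pipeline that reads back the demosaiced RGBA frames from the FPGA;
     * fakesink is only a placeholder for your consuming element */
    GstElement *from_fpga = gst_parse_launch(
        "v4l2src device=/dev/video2 ! "
        "video/x-raw,format=RGBA,width=1280,height=720 ! "
        "fakesink", NULL);

    gst_element_set_state(to_fpga, GST_STATE_PLAYING);
    gst_element_set_state(from_fpga, GST_STATE_PLAYING);

    /* A real application would use a GMainLoop with proper bus handling */
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}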

FPGA as preprocessor

You can also use FPGA ISP in your custom FPGA-based hardware. You could have your sensor interface connected directly to your FPGA through a hard-core and use FPGA ISP to pre-process the data before sending it to the processing unit. This saves the processing unit from receiving RAW data that still needs to be pre-processed before the information can be used.

The diagram above shows a simple application of the FPGA as a preprocessor. The camera is, in this case, connected directly to the FPGA, which is assumed to already contain a MIPI hard-core that can be accessed through a standard port. If the port or standard does not match the FPGA ISP interface, you can prepare an adapter in either HDL or HLS and then connect it to your image preprocessor, as sketched below.
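
As a sketch of such an adapter (the real RR_stream_port payload layout is defined in core/rr_types.hpp and is not reproduced here, so the 8-bit MIPI stream and the 32-bit packing below are assumptions for illustration only), an HLS adapter between a MIPI receiver core and the ISP input could look like:

#include <hls_stream.h>
#include <ap_int.h>
#include <ap_axi_sdata.h>

/* Hypothetical adapter: repacks the 8-bit AXI4-Stream produced by a MIPI
 * CSI-2 receiver core into 32-bit words (four RAW8 pixels per word).
 * Adjust the packing to match the actual FPGA ISP port definition. */
void mipi_to_isp_adapter(hls::stream<ap_axiu<8, 1, 1, 1> > &mipi_in,
                         hls::stream<ap_uint<32> > &isp_out,
                         int width, int height)
{
#pragma HLS INTERFACE axis port=mipi_in
#pragma HLS INTERFACE axis port=isp_out

    for (int i = 0; i < (width * height) / 4; i++) {
#pragma HLS PIPELINE II=4
        ap_uint<32> word = 0;
        for (int p = 0; p < 4; p++) {
            ap_axiu<8, 1, 1, 1> beat = mipi_in.read();
            word.range(8 * p + 7, 8 * p) = beat.data;
        }
        isp_out.write(word);
    }
}

Once the packing matches the port format expected by FPGA ISP, the output stream can be connected directly to the first module of your image preprocessor.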

Current modules

FPGA ISP is still under development. We currently have the following modules integrated into the project:

  • Debayer
  • Auto white balancer
  • RGBA <-> UYVY color space converter
  • Histogram Equalizer
  • Noise reduction (under development)
  • Bad pixel correction (under development)

RidgeRun also offers IP core development with Vivado HLS. As part of our services, you can hire us to develop a new module for your custom application. For more information, please contact us.

Building your own ISP

FPGA ISP modules can work either individually or in combination. You can prepare a pipeline by simply including the module library and calling the module as if it were a function. For example, to implement the following pipeline:

You can use code similar to:

#include "core/rr_types.hpp"
#include "fpga-isp/debayer.hpp"
#include "fpga-isp/awb.hpp"
#include "fpga-isp/csc.hpp"

void my_custom_accelerator(RR_stream_port input, RR_stream_port output, 
        RR_dim_counter_type width, RR_dim_counter_type height, RR_format_type format)
{
    RR_stream_port demosaiced, balanced;
#pragma HLS STREAM variable=demosaiced dim=1 depth=1
#pragma HLS STREAM variable=balanced dim=1 depth=1

#pragma HLS DATAFLOW

    debayer(input, demosaiced, width, height, format);
    awb(demosaiced, balanced, width, height);
    convertARGBtoUYVY(balanced, output, width, height);
}

In this way, your pipeline is prepared to deliver maximum performance: the DATAFLOW pragma lets the three stages process the image concurrently, making the most of pipelining and bus-width usage.
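
The same pattern applies when a module is used individually; for instance, a debayer-only accelerator (a sketch reusing the types and function signature shown above) reduces to:

#include "core/rr_types.hpp"
#include "fpga-isp/debayer.hpp"

/* Single-module accelerator: only demosaicing, so no intermediate
 * streams or DATAFLOW region are required */
void my_debayer_accelerator(RR_stream_port input, RR_stream_port output,
        RR_dim_counter_type width, RR_dim_counter_type height, RR_format_type format)
{
    debayer(input, output, width, height, format);
}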

Requirements

In order to build your own pipelines, you will need:

  • Vivado HLx (edition depending on your platform)
  • V4L2-FPGA
  • This product

