FPGA Image Signal Processor/GStreamer Pipelines: Difference between revisions

From RidgeRun Developer Wiki
<noinclude>
{{FPGA Image Signal Processor/Head|next=GStreamer_Pipelines/Debayer|previous=Examples/FFMPEG|metakeywords=GStreamer Pipelines}}
</noinclude>


-->


==Overview==


The [[V4L2 FPGA|V4L2-FPGA]] project makes it possible to use the FPGA-ISP modules with GStreamer. Depending on the accelerator, you can adjust the input/output format of the stream. The accelerators also support ranges of sizes, making them versatile for almost any resolution, including non-standard sizes.


With FPGA-ISP, you can accelerate your image processing application with GStreamer without sacrificing CPU resources, letting the FPGA do the job for you.


==Typical applications==


Let's suppose that our FPGA application is based on an ''[[FPGA_Image_Signal_Processor/Introduction/Overview#FPGA_as_an_accelerator|accelerator]]'' scheme. This implies that there will be two pipelines: one for transferring the data to the FPGA and another for receiving the processed data from it.


=== Sink (sender) GStreamer Pipeline ===


It makes the most of the ''v4l2sink'' element, which receives the frame and transfers it to the FPGA with the aid of the V4L2-FPGA driver. The typical caps to set in the pipeline are:

<pre>
width
height
format
</pre>


Those caps set the input image on the accelerator. In most cases, the accelerator will mirror these properties to its output, given that the accelerators do not currently support image scaling.
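Before hard-coding the caps, it can be useful to check which pixel formats and frame-size ranges the accelerator's device node actually advertises. The following is a minimal sketch, assuming the v4l2-utils package is installed and that ''/dev/video1'' is the accelerator's sink node, as in the examples below:

```shell
#!/bin/sh
# Device node used by the sink pipeline examples; adjust for your board.
DEV=/dev/video1

if command -v v4l2-ctl >/dev/null 2>&1 && [ -e "$DEV" ]; then
    # Lists every pixel format plus the frame-size range each one supports.
    v4l2-ctl --device="$DEV" --list-formats-ext
else
    echo "v4l2-ctl or $DEV not available; skipping capability check"
fi
```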


The pipeline, therefore, has the following structure:

<syntaxhighlight lang=bash>
# Properties
WIDTH=1920
HEIGHT=1080
FORMAT=ARGB

# GStreamer pipeline launcher
gst-launch-1.0 videotestsrc ! video/x-raw,width=$WIDTH,height=$HEIGHT,format=$FORMAT ! v4l2sink device=/dev/video1
</syntaxhighlight>
 
Alternatively, it is possible to rely on GStreamer caps auto-negotiation:


<syntaxhighlight lang=bash>
# GStreamer pipeline launcher
gst-launch-1.0 videotestsrc ! v4l2sink device=/dev/video1
</syntaxhighlight>


=== Source (receiver) GStreamer pipeline ===


The source receives the processed data from the FPGA. It uses the ''v4l2src'' element for this task. The process is quite similar to the sink pipeline, given that defining the caps is also required. Let's suppose that we are using the ''sink'' specified above; the corresponding source pipeline is therefore:

<syntaxhighlight lang=bash>
# Properties
WIDTH=1920
HEIGHT=1080
FORMAT=ARGB

# GStreamer pipeline launcher
gst-launch-1.0 v4l2src device=/dev/video2 ! video/x-raw,width=$WIDTH,height=$HEIGHT,format=$FORMAT ! videoconvert ! xvimagesink
</syntaxhighlight>


=== Execution order ===


By defining the caps, it is possible to execute the pipelines in either order. However, because of GStreamer caps negotiation, it is recommended to start the ''sink'' pipeline before the ''source'' pipeline.
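The recommended order can be wrapped in a small launcher script. The sketch below reuses the device nodes and caps from the examples above; the FPGA_ISP_RUN guard is a hypothetical addition so the script can be inspected without hardware, printing the commands instead of launching them unless FPGA_ISP_RUN=1 is set:

```shell
#!/bin/sh
# Shared caps: both pipelines must agree, since the accelerator
# mirrors its input properties to its output.
WIDTH=1920
HEIGHT=1080
FORMAT=ARGB
CAPS="video/x-raw,width=$WIDTH,height=$HEIGHT,format=$FORMAT"

SINK_PIPE="videotestsrc ! $CAPS ! v4l2sink device=/dev/video1"
SRC_PIPE="v4l2src device=/dev/video2 ! $CAPS ! videoconvert ! xvimagesink"

if [ "${FPGA_ISP_RUN:-0}" = "1" ]; then
    gst-launch-1.0 $SINK_PIPE &   # sink first, in the background
    SINK_PID=$!
    sleep 1                       # let the sink open the device
    gst-launch-1.0 $SRC_PIPE      # then the source
    kill "$SINK_PID"
else
    # Dry run: show the commands in the order they would be launched.
    echo "gst-launch-1.0 $SINK_PIPE"
    echo "gst-launch-1.0 $SRC_PIPE"
fi
```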


Each accelerator has its supported properties specified on its page. Please refer to [[FPGA_Image_Signal_Processor/Modules|FPGA ISP Accelerators]].


==FPGA-ISP modules test pipelines==


Please refer to each accelerator's page for testing pipelines:


* [[FPGA_Image_Signal_Processor/GStreamer_Pipelines/Debayer|Debayer]]
* [[FPGA_Image_Signal_Processor/GStreamer_Pipelines/AutoWhiteBalance|Auto White Balance]]
* [[FPGA Image Signal Processor/GStreamer_Pipelines/ColorSpaceConversion|Color Space Conversion]]
* [[FPGA Image Signal Processor/GStreamer_Pipelines/HistogramEqualizer|Histogram Equalizer]]
* [[FPGA Image Signal Processor/GStreamer_Pipelines/GeometricTransformationUnit|Geometric Transformation Unit]]


<noinclude>
{{FPGA Image Signal Processor/Foot|Examples/FFMPEG|GStreamer_Pipelines/Debayer}}
</noinclude>

Latest revision as of 04:05, 4 March 2023






