IMX6 GStreamer Pipelines - SDK Irazu
Introduction
The following examples show how to use GStreamer with the RidgeRun iMX6 SDK Irazu on all the supported iMX6 based boards.
The SDK currently ships the latest GStreamer 0.10 release together with the Freescale plugins 3.0.7, and on the capture side it can record from 2 cameras at the same time at 720p30. The following sections show pipeline examples demonstrating these capabilities.
Live Preview
Display videotest pattern
gst-launch -v videotestsrc ! mfw_v4lsink
tested: 20130530
Display only video
This pipeline supports all the containers supported by aiurdemux and the formats supported by vpudec.
Containers: ogg, matroska, webm, quicktime, m4a, 3gp, mpeg and flv.
Formats: H.264, H.263 and MPEG.
gst-launch -v filesrc location=<testvideo> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_v4lsink
tested: 20130530
Video loopback
Single Camera
gst-launch -v mfw_v4lsrc ! mfw_v4lsink
tested: 20130601
Dual Camera
gst-launch -v mfw_v4lsrc capture-mode=4 ! mfw_v4lsink mfw_v4lsrc capture-mode=4 device=/dev/video1 ! mfw_v4lsink device=/dev/video18
Note: For this pipeline you need a Sabre-SDP with 2 cameras and 2 monitors.
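If you are not sure which V4L2 nodes are present on your board, list them before launching. On this page /dev/video0 and /dev/video1 are assumed to be the capture devices and /dev/video16, /dev/video18 and /dev/video20 the display devices, but the numbering may differ on your board:
ls /dev/video*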
720p60 loopback
gst-launch mfw_v4lsrc device=/dev/video0 capture-mode=3 fps-n=60 queue-size=7 ! queue max-size-buffers=3 ! mfw_isink disp-width=1280 disp-height=720
Note: this pipeline was tested with the TC358743 HDMI to CSI2 bridge.
1080p60 loopback
gst-launch mfw_v4lsrc device=/dev/video0 capture-mode=5 fps-n=60 queue-size=7 ! queue max-size-buffers=3 ! mfw_isink disp-width=1920 disp-height=1080
Note: this pipeline was tested with the TC358743 HDMI to CSI2 bridge.
Video Decoding
Video + Audio decoding
gst-launch filesrc location=<testvideo> typefind=true ! aiurdemux name=demux demux. ! queue max-size-buffers=0 max-size-time=0 ! vpudec ! mfw_v4lsink demux. ! queue max-size-buffers=0 max-size-time=0 ! beepdec ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink
Video Encoding
Videotestsrc H.264 in Matroska container
gst-launch videotestsrc ! queue ! vpuenc codec=6 ! matroskamux ! filesink location=test.mkv
tested: 20130531
H.264 in Matroska container, 720p30
gst-launch mfw_v4lsrc capture-mode=4 fps-n=30 ! queue ! vpuenc codec=6 ! matroskamux ! filesink location=test.mkv
H.264 in Matroska container, 720p60
gst-launch mfw_v4lsrc capture-mode=4 fps-n=60 ! queue ! vpuenc codec=6 ! matroskamux ! filesink location=test.mkv
H.264 in Matroska container, 1080p15
gst-launch mfw_v4lsrc capture-mode=5 fps-n=15 ! queue ! vpuenc codec=6 ! matroskamux ! filesink location=test.mkv
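To verify any of the recordings above, play the file back with the decode pipeline from the Display only video section (a quick sanity check; adjust the file name as needed):
gst-launch filesrc location=test.mkv typefind=true ! aiurdemux ! queue ! vpudec ! mfw_v4lsink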
Video Transcoding
QuickTime to mkv
gst-launch filesrc location=input.mov typefind=true ! aiurdemux ! vpudec ! vpuenc ! matroskamux ! filesink location=output.mkv
tested: 20130601
Debug message log while running the above pipeline. Note that for this particular sample the pipeline fails to preroll: aiurdemux reports a general stream error.
/ # gst-launch filesrc location=sample_sorenson.mov typefind=true ! aiurdemux ! vpudec ! vpuenc ! matroskamux ! filesink location=output.mkv
Setting pipeline to PAUSED ...
[INFO] Product Info: i.MX6Q/D/S
        vpuenc versions :)
        plugin: 3.0.5
        wrapper: 1.0.28(VPUWRAPPER_ARM_LINUX Build on May 23 2013 16:08:16)
        vpulib: 5.4.10
        firmware: 2.1.8.34588
[INFO] Product Info: i.MX6Q/D/S
        vpudec versions :)
        plugin: 3.0.5
        wrapper: 1.0.28(VPUWRAPPER_ARM_LINUX Build on May 23 2013 16:08:16)
        vpulib: 5.4.10
        firmware: 2.1.8.34588
Pipeline is PREROLLING ...
Aiur: 3.0.5
Core: MPEG4PARSER_06.04.25 build on Dec 10 2012 16:29:48
mime: video/quicktime; audio/x-m4a; application/x-3gp
file: /usr/lib/imx-mm/parser/lib_mp4_parser_arm11_elinux.so.3.1
Content Info:
        URI: file:///sample_sorenson.mov
        Idx File: //.aiur/.sample_sorenson.mov.aidx
        Seekable : Yes
        Size(byte): 82395
ERROR: from element /GstPipeline:pipeline0/GstAiurDemux:aiurdemux0: GStreamer encountered a general stream error.
Additional debug info:
../../../../../src/src/parser/aiur/src/aiurdemux.c(4332): aiurdemux_pull_task (): /GstPipeline:pipeline0/GstAiurDemux:aiurdemux0:
streaming stopped, reason error, state 1
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
[--->FINALIZE aiurdemux
Multi-Display
Multiple video playback on multiple screens (one video per screen).
2x1080p video + capture
Video 1 to monitor 1
gst-launch filesrc location=<VIDEO1> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_v4lsink device=/dev/video16 &
Video 2 to monitor 2
gst-launch filesrc location=<VIDEO2> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_v4lsink device=/dev/video18 &
Video capture to monitor 3
gst-launch mfw_v4lsrc ! mfw_v4lsink device=/dev/video20 &
3x720p video
Video 1 to monitor 1
gst-launch filesrc location=<VIDEO1> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_v4lsink device=/dev/video16 &
Video 2 to monitor 2
gst-launch filesrc location=<VIDEO2> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_v4lsink device=/dev/video18 &
Video 3 to monitor 3
gst-launch filesrc location=<VIDEO3> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_v4lsink device=/dev/video20 &
Multi-Overlay
The plugin used to perform overlays on iMX6 based boards is mfw_isink, which uses the IPU capabilities to perform the overlay.
In iMX6 processors, the unit taking care of image processing is the IPU (Image Processing Unit). Its goal is to support the flow of data from an image sensor and/or to a display device. This support covers connectivity to cameras, displays, graphics accelerators, and TV encoders and decoders; image processing and manipulation; synchronization and control.
Each IPU has 2 display interfaces, and the iMX6Q comes with 2 IPUs, so up to 4 displays can be connected. Every display can be assigned a background (BG) and a foreground/overlay (FG) channel, but each IPU has only a single overlay channel. For that reason, only one of the displays attached to a given IPU gets both the BG and FG channels; the other one gets only a BG channel. The IPU automatically combines the BG and FG channels so you can have an overlay on your display.
When several displays are connected, the first display configured on a given IPU gets both the FG and BG channels and the other one only a BG channel. In our current kernel configuration for Boundary Devices boards, the HDMI and LDB displays are connected to the same IPU and the LCD to the other one, so if your configuration has both an HDMI and an LDB display, only one of them will have the overlay channel.
In the RidgeRun SDK, the attached displays are configured in the order HDMI, LDB, LCD. If, for example, all of them are attached to the board at the same time, the IPU channel assignment is as follows:
- HDMI: gets a BG and an FG channel, as it is the first one on IPU0.
- LDB: gets only a BG channel, as it is on the same IPU as the HDMI, which already took the FG channel.
- LCD: gets BG and FG channels, as it is the only display on IPU1 and its FG channel is still free.
The frame buffer assignment would be as follows:
- HDMI: /dev/fb0 (background) and /dev/fb1 (overlay)
- LDB: /dev/fb2 (background)
- LCD: /dev/fb3 (background) and /dev/fb4 (overlay)
As can be seen, the frame buffer assignment is straightforward: the first channel gets the first frame buffer, and so on.
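You can double-check this assignment at runtime by reading the frame buffer names from sysfs. On i.MX6 kernels the background and overlay channels typically report names like DISP3 BG and DISP3 FG (the exact strings depend on your kernel):
for fb in /sys/class/graphics/fb*; do echo "$fb: $(cat $fb/name)"; done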
Now, as mentioned before, the plugin working with the overlay buffers is mfw_isink, which uses a configuration file called vssconfig (located in /usr/share) to determine which frame buffer to use. Its structure is as follows:
[configuration name]
type = framebuffer
format = RGBP
fb_num = <overlay frame buffer number>
main_fb_num = <main frame buffer number (BG associated with the overlay buffer)>
vsmax = <maximum number of video surfaces>
So, for example, for the HDMI monitor (with the BG and FG channels on fb0 and fb1) the configuration would be:
[hdmi]
type = framebuffer
format = RGBP
fb_num = 1
main_fb_num = 0
vsmax = 8
Currently the maximum vsmax allowed is 8, meaning you can have at most 8 different overlays (say, 8 videos) on a display at the same time.
To use that configuration, an example pipeline would be:
gst-launch videotestsrc ! mfw_isink display=hdmi
which overlays the videotestsrc output on whatever is in fb0.
The RidgeRun SDK comes configured to use any combination of the supported displays; if you take a look at /usr/share/vssconfig you'll see 4 different configurations: mon1, mon2, mon2.2 and mon3. Together they cover the possible frame buffer configurations when 1, 2 or 3 displays are attached at the same time.
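For instance, following the frame buffer assignment above, a third-monitor (LCD) entry could pair overlay fb4 with background fb3. This is only a sketch of what such an entry might look like; check the actual entries in /usr/share/vssconfig on your board:
[mon3]
type = framebuffer
format = RGBP
fb_num = 4
main_fb_num = 3
vsmax = 8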
The following pipeline examples use this plugin for multi-overlay (several video playbacks on a single screen).
3 720p video overlay on HDMI monitor
To select the first monitor, use the following command:
DISP=mon1
Note: for the other monitors use:
- Second monitor: mon2
- Third monitor: mon3
Video 1 in the left corner
gst-launch filesrc location=<VIDEO1> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_isink display=$DISP axis-left=0 axis-top=0 disp-width=320 disp-height=240 &
Video 2 in the middle
gst-launch filesrc location=<VIDEO2> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_isink display=$DISP axis-left=340 axis-top=0 disp-width=320 disp-height=240 &
Video 3 in the right corner
gst-launch filesrc location=<VIDEO3> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_isink display=$DISP axis-left=680 axis-top=0 disp-width=320 disp-height=240 &
Multi-Display and Multi-Overlay
As explained in the previous section, the user can select the monitor to be used by specifying the appropriate configuration. Here are some examples of how to use the overlay channels to perform multi-overlay on multiple displays (several video playbacks on multiple screens).
The following pipelines play 3 videos on the first monitor, and 2 videos plus 1 capture on the second monitor, for a total of 6 video surfaces on 2 monitors.
First Monitor
Left
gst-launch filesrc location=<VIDEO1> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_isink display=mon1 axis-left=0 axis-top=0 disp-width=320 disp-height=240 &
Center
gst-launch filesrc location=<VIDEO2> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_isink display=mon1 axis-left=340 axis-top=0 disp-width=320 disp-height=240 &
Right
gst-launch filesrc location=<VIDEO3> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_isink display=mon1 axis-left=680 axis-top=0 disp-width=320 disp-height=240 &
Second Monitor
Left
gst-launch filesrc location=<VIDEO4> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_isink display=mon2 axis-left=0 axis-top=0 disp-width=320 disp-height=240 &
Capture
gst-launch mfw_v4lsrc ! mfw_isink display=mon2 axis-left=340 axis-top=0 disp-width=320 disp-height=240 &
Right
gst-launch filesrc location=<VIDEO5> typefind=true ! aiurdemux ! queue ! vpudec ! mfw_isink display=mon2 axis-left=680 axis-top=0 disp-width=320 disp-height=240 &
Audio Playback
Multi-Format Audio Playback
The beepdec decoder handles the formats listed below. The example pipeline is wired for MPEG audio via mpegaudioparse.
Formats: MPEG audio, AAC and Vorbis.
gst-launch filesrc location=<audiotest> ! mpegaudioparse ! beepdec ! audioconvert ! alsasink
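beepdec also covers the other listed formats when fed by the matching parser or demuxer. For example, a sketch for an Ogg/Vorbis file, assuming beepdec picks up the Vorbis stream as listed above:
gst-launch filesrc location=<audiotest>.ogg ! oggdemux ! beepdec ! audioconvert ! alsasink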
Audio Encoding
WAV to MP3
gst-launch filesrc location=test.wav ! wavparse ! mfw_mp3encoder ! filesink location=test.mp3
tested: 20130601
/ # gst-launch filesrc location=sample14.wav ! wavparse ! mfw_mp3encoder ! filesink location=test.mp3
BLN_MAD-MMCODECS_MP3E_ARM_02.02.00_ARM12 build on Mar 21 2012 17:13:02.
MFW_GST_MP3_ENCODER_PLUGIN 3.0.5 build on May 23 2013 16:08:41.
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: CODEC ERROR code 0x192, please consult Freescale codec team for more information
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 5921001 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Note: despite the CODEC ERROR message in the log, test.wav is encoded to test.mp3 successfully.
RTP Video Streaming
These instructions show how to stream video over the network: a video plays on the board and is viewed on the host. The pipelines use udpsink's default port (4951); if you want to change the port number, add the port property to udpsink (e.g. udpsink port=$PORT host=$CLIENT_IP).
Stream H.264 video test pattern over RTP
- Server: iMX6
CLIENT_IP=10.251.101.58
gst-launch videotestsrc ! vpuenc codec=6 ! queue ! h264parse ! rtph264pay ! udpsink host=$CLIENT_IP -v
tested: 20130531
Thanks to the -v option, this pipeline prints the capabilities negotiated on each element's pads. It should print something similar to this output:
. . .
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)NV12, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, width=(int)320, height=(int)240, framerate=(fraction)30/1
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstVpuEnc:vpuenc0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)NV12, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, width=(int)320, height=(int)240, framerate=(fraction)30/1
[INFO] chromaInterleave 1, mapType 0, linear2TiledEnable 0
/GstPipeline:pipeline0/GstVpuEnc:vpuenc0.GstPad:src: caps = video/x-h264, width=(int)320, height=(int)240, framerate=(fraction)30/1, framed=(boolean)true
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)320, height=(int)240, framerate=(fraction)30/1, framed=(boolean)true, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps = video/x-h264, width=(int)320, height=(int)240, framerate=(fraction)30/1, framed=(boolean)true
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = video/x-h264, width=(int)320, height=(int)240, framerate=(fraction)30/1, framed=(boolean)true, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:src: caps = video/x-h264, width=(int)320, height=(int)240, framerate=(fraction)30/1, framed=(boolean)true, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au
/GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0.GstPad:sink: caps = video/x-h264, width=(int)320, height=(int)240, framerate=(fraction)30/1, framed=(boolean)true, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au
/GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"Z0JAFKaBQfkA\,aM4wpIAA", payload=(int)96, ssrc=(uint)87645921, clock-base=(uint)1548379595, seqnum-base=(uint)847
/GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0: timestamp = 1548379595
/GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0: seqnum = 847
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"Z0JAFKaBQfkA\,aM4wpIAA", payload=(int)96, ssrc=(uint)87645921, clock-base=(uint)1548379595, seqnum-base=(uint)847
. . .
You need the udpsink sink-pad capabilities for the client pipeline.
- Client: Ubuntu PC
Copy the udpsink caps given by the server pipeline, then erase the spaces and the (uint) casts.
CAPS='application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,sprop-parameter-sets=(string)"Z0JAFKaBQfkA\,aM4wpIAA",payload=(int)96,ssrc=87645921,clock-base=1548379595,seqnum-base=847'
PORT=4951
gst-launch udpsrc port=$PORT ! $CAPS ! rtph264depay ! queue ! ffdec_h264 ! xvimagesink sync=false -v
tested: 20130531
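Instead of editing the string by hand, the cleanup can be scripted. A hypothetical helper (replace the placeholder with the udpsink caps line printed by your server) that strips the (uint) casts and the spaces:
CAPS=$(echo '<udpsink caps copied from the server output>' | sed -e 's/(uint)//g' -e 's/ //g')
The same cleanup applies to the MPEG4 and capture client pipelines below.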
Stream MPEG4 encoded video file over RTP
These pipelines take a video file and send it over the network; any MPEG4-encoded file can be used.
- Server: iMX6
CLIENT_IP=10.251.101.58
FILE=bbb_twomin_1080p.avi
gst-launch filesrc location=$FILE typefind=true ! aiurdemux ! queue ! mpeg4videoparse ! rtpmp4vpay ! udpsink host=$CLIENT_IP -v
tested: 20130531
As before, you need the udpsink:sink capabilities for the client pipeline.
- Client: Ubuntu PC
Copy the udpsink caps given by the server pipeline, then erase the spaces and the (uint) casts.
CAPS='application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)MP4V-ES,profile-level-id=(string)1,config=(string)000001b001000001b58913000001000000012000c48d8800c53c04871463000001b24c61766335312e34342e30,payload=(int)96,ssrc=1303818908,clock-base=1090442294,seqnum-base=63287'
PORT=4951
gst-launch udpsrc port=$PORT ! $CAPS ! rtpmp4vdepay ! queue ! ffdec_mpeg4 ! xvimagesink sync=false -v
tested: 20130531
Stream H.264 encoded video capture over RTP
These pipelines capture video from a camera and send it over the network.
- Server: iMX6
CLIENT_IP=10.251.101.58
gst-launch mfw_v4lsrc capture-mode=5 fps-n=15 ! vpuenc codec=6 ! queue ! rtph264pay ! udpsink host=$CLIENT_IP -v
As before, you need the udpsink:sink capabilities for the client pipeline.
- Client: Ubuntu PC
Copy the udpsink caps given by the server pipeline, then erase the spaces and the (uint) casts.
CAPS='application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,sprop-parameter-sets=(string)"Z0JAKKaAeAIn5UAA\,aM4wpIAA",payload=(int)96,ssrc=121555752,clock-base=1624542567,seqnum-base=15553'
PORT=4951
gst-launch udpsrc port=$PORT ! $CAPS ! rtph264depay ! queue ! ffdec_h264 ! xvimagesink sync=false -v