LeopardBoard 368 1080p support


Introduction

One of the great features of the DM368 processor is support for HD 1080P video resolution. RidgeRun offers a pre-built bootable SD card image you can use, or you can configure your RidgeRun evaluation or professional SDK to build the image yourself.

You need to use a DM368 with the DM368 DVSDK, as the DM365 doesn't support 1080p.

This wiki covers the following test scenarios, with configuration and GStreamer pipelines:

  • Capture, record, playback and stream at 1080p.
  • DM368 TFP410 1080p video decoding.
  • Capture from the camera at 1080p, MJPEG encoding, H264 encoding, live preview while saving to a file, and performance notes.
  • RidgeRun RTSP Server: rr-rtsp-server demo setup for 1080p streaming.

HOST PC Setup for Testing Streaming Pipelines

You need the GStreamer 0.10 packages installed on your Ubuntu desktop.

sudo apt-get install gstreamer0.10-alsa gstreamer0.10-ffmpeg gstreamer0.10-fluendo-mp3 gstreamer0.10-plugins-bad
sudo apt-get install gstreamer0.10-plugins-base gstreamer0.10-plugins-good gstreamer0.10-plugins-ugly
sudo apt-get install gstreamer0.10-tools gstreamer0.10-x
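
You can sanity-check the installation by confirming that the elements used on the host side of this wiki are available (gst-inspect-0.10 comes from gstreamer0.10-tools):

gst-inspect-0.10 ffdec_h264   # H.264 software decoder, from the ffmpeg plugin
gst-inspect-0.10 rtspsrc      # RTSP client source, from gst-plugins-good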

1. Capture, record, playback and stream at 1080p

Set the component output port of the LeopardBoard to 1080I-30 so you can view the output on a display monitor. After the boot process completes, run the following command at the board prompt:

echo 1080I-30 > /sys/class/davinci_display/ch0/mode
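
If the driver exposes the read side of this attribute, you can verify the mode took effect by reading it back:

cat /sys/class/davinci_display/ch0/mode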

Live preview

Video from the MT9P031 CMOS camera sensor is sent out the component output.

gst-launch -v --gst-debug-level=2 v4l2src input-src=camera chain-ipipe=true always-copy=false num-buffers=2000 ! \
           capsfilter caps='video/x-raw-yuv,format=(fourcc)NV12,width=1920,height=1088' ! dmaiaccel ! \
           dmaiperf print-arm-load=true ! TIDmaiVideoSink enable-last-buffer=false videoStd=1080I_30 

Record to a file with H264 encoding and QT muxer

Video from the MT9P031 CMOS camera sensor is H264 encoded, wrapped in a QuickTime (MP4) container by the QT muxer, and saved to a file.

H264ENC_PARMS="encodingpreset=2 ratecontrol=2 intraframeinterval=23 idrinterval=46 targetbitrate=3000000 \
               profile=100 level=50 entropy=1 t8x8inter=true t8x8intra=true single-nalu=true"
VIDEO_FILE=h264file.mp4

gst-launch -e --gst-debug-level=2 v4l2src always-copy=FALSE input-src=camera chain-ipipe=true num-buffers=1500 ! \
           capsfilter caps='video/x-raw-yuv,format=(fourcc)NV12,width=1920,height=1088' ! dmaiaccel ! \
           dmaienc_h264 $H264ENC_PARMS ! queue ! dmaiperf print-arm-load=true ! qtmux ! queue ! \
           filesink location=$VIDEO_FILE sync=false enable-last-buffer=false 

Play previously recorded file

Play a file to the component output that was recorded using the pipeline shown in the previous section.

VIDEO_FILE=h264file.mp4

gst-launch filesrc location=$VIDEO_FILE ! qtdemux ! queue ! dmaidec_h264 numOutputBufs=8 closedloop=1 ! queue ! \
           dmaiperf ! 'video/x-raw-yuv, format=(fourcc)NV12' ! \
           TIDmaiVideoSink videoOutput=component videoStd=1080I_30 enable-last-buffer=false

Live preview while recording

The use case is a device that captures video with a camera, shows a live preview on an HD monitor (HD 1080p TV), and simultaneously saves the video to a file. A simple gst-launch command is shown below, followed by a shell-based start/stop sketch. For a real product, you might want to consider using GStreamer Daemon so that you can start and stop the recording.

H264ENC_PARMS="encodingpreset=2 ratecontrol=2 intraframeinterval=23 idrinterval=46 targetbitrate=3000000 \
               profile=100 level=50 entropy=1 t8x8inter=true t8x8intra=true single-nalu=true"
VIDEO_FILE=h264file.mp4

gst-launch -e v4l2src input-src=camera chain-ipipe=true always-copy=false num-buffers=1500 ! \
           capsfilter caps='video/x-raw-yuv,format=(fourcc)NV12,width=1920,height=1088' ! dmaiaccel ! \
           tee name=t1 ! TIDmaiVideoSink enable-last-buffer=false videoStd=1080I_30 videoOutput=COMPONENT \
           t1. ! queue ! dmaienc_h264 $H264ENC_PARMS ! queue ! dmaiperf print-arm-load=true ! \
           qtmux ! queue ! filesink location=$VIDEO_FILE sync=false enable-last-buffer=false
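
If GStreamer Daemon is more than you need, you can get basic start/stop behavior from the shell: run the same pipeline (using the same H264ENC_PARMS and VIDEO_FILE as above, but without num-buffers) in the background and stop it with SIGINT, which 'gst-launch -e' turns into an EOS so qtmux can finalize a playable MP4. A minimal sketch; RECORD_SECONDS is our own illustrative variable:

RECORD_SECONDS=30

gst-launch -e v4l2src input-src=camera chain-ipipe=true always-copy=false ! \
           capsfilter caps='video/x-raw-yuv,format=(fourcc)NV12,width=1920,height=1088' ! dmaiaccel ! \
           tee name=t1 ! TIDmaiVideoSink enable-last-buffer=false videoStd=1080I_30 videoOutput=COMPONENT \
           t1. ! queue ! dmaienc_h264 $H264ENC_PARMS ! queue ! qtmux ! queue ! \
           filesink location=$VIDEO_FILE sync=false enable-last-buffer=false &
GST_PID=$!

sleep $RECORD_SECONDS

kill -INT $GST_PID   # -e converts SIGINT into EOS, so the MP4 is closed cleanly
wait $GST_PID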

1080P RTSP streaming

On the LeopardBoard 368, run the following pipeline:

H264ENC_PARMS="encodingpreset=2 ratecontrol=2 targetbitrate=2000000 intraframeinterval=30 idrinterval=90 \
               bytestream=true"

gst-launch v4l2src input-src=camera always-copy=false ! \
           capsfilter caps='video/x-raw-yuv, width=1920, height=1088, framerate=(fraction)30/1' ! dmaiaccel ! \
           dmaiperf ! dmaienc_h264 $H264ENC_PARMS ! queue ! mpegtsmux ! "video/mpegts, mapping=/stream" ! rtspsink

OR

Try the pipeline below if the pipeline above causes any issues because of 'mpegtsmux'. (This pipeline was tested using the VLC player on the host PC.)

H264ENC_PARMS="encodingpreset=2 ratecontrol=2 targetbitrate=2000000 intraframeinterval=30 idrinterval=90"
                                        

gst-launch v4l2src input-src=camera always-copy=false ! \
           capsfilter caps='video/x-raw-yuv, width=1920, height=1088, framerate=(fraction)30/1' ! dmaiaccel ! \
           dmaiperf ! dmaienc_h264 $H264ENC_PARMS ! "video/x-h264, mapping=/stream" ! rtspsink

You can test 720P video streaming by setting width=1280, height=720 and framerate=(fraction)23/1 in the above pipeline (the host-side pipeline remains the same).

On the host, you can use VLC to connect to the LeopardBoard 368.

LEO_IP_ADDR=192.168.0.1 # replace with the IP address of your target hardware

vlc rtsp://$LEO_IP_ADDR/stream

OR

mplayer rtsp://$LEO_IP_ADDR/stream

OR

openRTSP rtsp://$LEO_IP_ADDR/stream  

(You may have to run 'sudo apt-get install livemedia-utils' to get 'openRTSP'.)

If you'd rather, you can run GStreamer on your host PC. The pipeline below matches the mpegtsmux variant shown above:

gst-launch rtspsrc location="rtsp://$LEO_IP_ADDR/stream" ! rtpmp2tdepay ! mpegtsdemux ! queue ! ffdec_h264 ! \
           xvimagesink sync=false
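
If you used the second target pipeline (plain H.264 over RTP, without mpegtsmux), the host side needs an H.264 depayloader instead; a sketch:

gst-launch rtspsrc location="rtsp://$LEO_IP_ADDR/stream" ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false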

SDK Configuration

1. Select the MT9P031 driver to support your camera module

      Prompt: mt9p031 support
      Location:
        -> Kernel configuration
          -> Device Drivers
            -> Multimedia support (MEDIA_SUPPORT [=y])
              -> Video capture adapters (VIDEO_CAPTURE_DRIVERS [=y])

2. Configure your SDK to be able to capture at 1080p, as shown below:

     Architecture configurations
           Video Output (Component)  --->
           Component Standard (720P)  --->
           Maximum Video Output Buffer Size (1080I-30)  --->
           Maximum Video Input Buffer Size (1080P)  --->

3. Increase the reserved memory and the CMEM space to allocate the 1080p buffers.

    Proprietary Software
        [*] Reserve memory from the kernel for codec engine and friends automatically
        (0x4FCA000) Amount of reserved memory from the kernel
        [*] Automatically setup CMEM at the beginning of the reserved memory area
        (0x35CA000) Amount of reserved memory for cmemk
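
As a sanity check on these numbers (our own arithmetic, not from the SDK):

# 0x4FCA000 = 83,664,896 bytes ~= 79.8 MiB reserved from the kernel
# 0x35CA000 = 56,401,920 bytes ~= 53.8 MiB of that handed to cmemk
# One 1920x1088 NV12 frame takes 1920 * 1088 * 1.5 = 3,133,440 bytes (~3 MiB),
# so the CMEM pool leaves room for several capture/encode buffers plus the
# codec's own working memory.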

4. If your target format is H264 and you are using an EVAL version of the SDK, you will need to make the following modification to your MT9P031 driver:

Index: kernel/linux-2.6.32.17-psp03.01.01.39/drivers/media/video/mt9p031.c
===================================================================
--- kernel.orig/linux-2.6.32.17-psp03.01.01.39/drivers/media/video/mt9p031.c 2012-08-10 09:44:39.763840232 -0700
+++ kernel/linux-2.6.32.17-psp03.01.01.39/drivers/media/video/mt9p031.c 2012-08-10 09:45:20.195126497 -0700
@@ -204,7 +204,7 @@
    },
    {
    /* 1080P-HDTV */
-    .framesize = { 1920, 1080},
+    .framesize = { 1920, 1088},
    .frameintervals = {
        { .numerator = 1, .denominator = 30 },
        },

This modification allows using 1080p with the H264 encoder: the encoder works on 16x16 macroblocks, and since 1080 is not a multiple of 16, the capture height is padded to 1088.

5. Rebuild your SDK and install it on your board.

6. Set the component output port of the LeopardBoard to 1080I-30.

After the boot process completes, run the following command at the board prompt:

echo 1080I-30 > /sys/class/davinci_display/ch0/mode

7. We use the 'rtspsink' element for RTSP streaming in this setup, so check that 'rtspsink' is properly installed on the target board with the command 'gst-inspect rtspsink'.

2. DM368 TFP410 1080p Video decoding

This section contains test details for 1080p video decoding, with the decoded output displayed through the LI-DVI1 add-on card (TFP410 DVI transmitter). The input is a 1080p video file.

Disabling the framebuffer before running the pipelines

Run the following command at the board prompt to disable the framebuffer:

fbset -disable

Display video test pattern

gst-launch -v videotestsrc ! video/x-raw-yuv,width=1920,height=1080 ! TIDmaiVideoSink videoStd=1080P_25 videoOutput=DVI accelFrameCopy=FALSE sync=false

Decode and play movie

FILE=big_buck_bunny_1080p_h264.mov

gst-launch -e filesrc location=$FILE ! qtdemux ! queue ! dmaidec_h264 numOutputBufs=8 ! queue ! dmaiperf ! TIDmaiVideoSink accelFrameCopy=true videoOutput=DVI videoStd=1080P_25

You first need to store the movie file on the SD card, or use an NFS mount if you prefer. We tested using the Big Buck Bunny movie.
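
If you need the test file, the Big Buck Bunny release is hosted on Blender's download server; a sketch of fetching it on the host and copying it over (/media/sdcard is an illustrative mount point, adjust to yours):

wget http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_1080p_h264.mov
cp big_buck_bunny_1080p_h264.mov /media/sdcard/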

Decode and display a picture

FILE=my_picture.jpg

gst-launch filesrc location=$FILE ! dmaidec_jpeg ! freeze ! TIDmaiVideoSink videoOutput=DVI videoStd=1080P_25 

SDK Configurations for 1080p output through LI-DVI1

     Architecture configurations  --->
           Video Output (PRGB)  --->
           Maximum Video Output Buffer Size (1080I)  --->
           Maximum Video Input Buffer Size (1080P)  --->
     Proprietary Software  --->
           Amount of reserved memory from the kernel -> 0x4FCA000
           Amount of reserved memory for cmemk -> 0x35CA000
     Kernel configuration  --->
           Extra kernel arguments: dm365_generic_prgb_encoder.mode=1920x1080MR-16@60

3. Capture from the camera at 1080p, MJPEG Encoding, H264 Encoding, Live preview while saving to a file and Performance notes

MJPEG Encoding

The following pipeline captures a single 1080p frame, encodes it as JPEG, and saves it to a file.

 IMAGE_FILE=image.jpeg

 gst-launch -e v4l2src num-buffers=1 always-copy=false ! 'video/x-raw-yuv,format=(fourcc)NV12, width=1920,height=1080' ! queue ! dmaiaccel ! dmaienc_mjpeg ! filesink location=$IMAGE_FILE

H264 Encoding

The following pipeline captures 1080p video, encodes it in H264, and saves it to a file.

VIDEO_FILE=h264file.mp4

gst-launch -e v4l2src always-copy=false ! 'video/x-raw-yuv,format=(fourcc)NV12, width=1920,height=1088' ! queue ! dmaiaccel ! dmaienc_h264 encodingpreset=2 ratecontrol=2 ! qtmux ! filesink location=$VIDEO_FILE

Due to codec limitations it is necessary to use a 1920x1088 image size instead of 1920x1080: the H264 encoder works on 16x16 macroblocks, and 1080 is not a multiple of 16. In the EVAL version of the SDK, a specific driver modification must be made to support this (see the previous section).

Live preview while saving to a file

The use case is the same as in section 1: capture video with the camera, show a live preview on an HD monitor, and save the video to a file. Note that this example previews and records at 720P. A simple gst-launch command is shown below. For a real product, you might want to consider using GStreamer Daemon so that you can start and stop the recording.

 
H264ENC_PARMS="encodingpreset=2 ratecontrol=2 intraframeinterval=23 idrinterval=46 targetbitrate=3000000 profile=100 level=50 entropy=1 t8x8inter=true t8x8intra=true single-nalu=true"

gst-launch -e v4l2src input-src=camera chain-ipipe=true always-copy=false num-buffers=3000 ! \
           'video/x-raw-yuv,format=(fourcc)NV12,width=1280,height=720' ! dmaiaccel ! \
           tee name=t1 ! TIDmaiVideoSink enable-last-buffer=false videoStd=720P_60 videoOutput=COMPONENT \
           t1. ! queue ! dmaienc_h264 $H264ENC_PARMS ! queue ! dmaiperf print-arm-load=true ! qtmux ! queue ! \
           filesink location=test.mp4 sync=false enable-last-buffer=false

Performance notes

For this example, testing was done with a LeopardBoard 368 and Aptina MT9P031 sensor (LI-5M03). We followed the GStreamer pipeline tuning steps, changing one part of the pipeline at a time so you can see the results of each change. We also ran into some problems so we included some of the GStreamer Debugging steps we followed.

The first simple pipeline we tried (v4l2src -> fakesink):

CAPS='video/x-raw-yuv,format=(fourcc)NV12, width=1920,height=1080'
gst-launch -e v4l2src num-buffers=100 always-copy=false ! capsfilter caps="$CAPS" ! dmaiperf print-arm-load=true ! fakesink

The framerate is around 5.5 fps and the ARM CPU load is 100%. When you see the CPU load spike like this, it is due to an unwanted frame buffer copy. We avoid that using the dmaiaccel element:

gst-launch -e v4l2src  num-buffers=100 always-copy=false ! capsfilter caps="$CAPS" ! dmaiaccel ! dmaiperf print-arm-load=true ! fakesink

The framerate is around 15.5 fps and the ARM CPU load is around 8%. We fixed the CPU load, but the framerate is still too low. The problem is that a buffer is not available when V4L2 needs one, so we add some more buffers (this requires a change in the memory map and kernel command line parameters):

gst-launch -e v4l2src queue-size=5 num-buffers=100 always-copy=false ! capsfilter caps="$CAPS" ! dmaiaccel  ! dmaiperf print-arm-load=true ! fakesink

The framerate is 30 fps and the ARM CPU load is around 8%. Good, now we have video frames flowing properly so we can add more elements to the pipeline. Let's start with H.264 encoding:

gst-launch -e v4l2src queue-size=5 num-buffers=100 always-copy=false ! capsfilter caps="$CAPS" ! dmaiaccel ! dmaienc_h264 ! qtmux ! dmaiperf print-arm-load=true ! fakesink

Pipeline got stuck. I wonder why. Let's enable some debug:

gst-launch -e -v v4l2src queue-size=5 num-buffers=100 always-copy=false ! capsfilter caps="$CAPS" ! dmaiaccel ! dmaienc_h264 ! qtmux ! dmaiperf print-arm-load=true ! fakesink

With some of the output being:

 ../../src/src/gsttividenc1.c(343): gstti_videnc1_process (): /GstPipeline:pipeline0/dmaienc_h264:dmaienc_h2640: failed to encode video buffer

meaning the H.264 hardware encoder wasn't happy. Let's enable the DMAI element-specific debug output:

gst-launch --gst-debug-help # list what debug is available
gst-launch --gst-debug=tidmai*:5 v4l2src queue-size=5 num-buffers=100 always-copy=false ! capsfilter caps="$CAPS" ! dmaiaccel ! dmaienc_h264 ! qtmux ! dmaiperf print-arm-load=true ! fakesink

Now let's add some H.264 encoder parameters:

H264_PARMS="encodingpreset=2 ratecontrol=2 single-nalu=true targetbitrate=12000000 maxbitrate=12000000"

How about an MJPEG pipeline:

gst-launch -e v4l2src queue-size=5 num-buffers=100 always-copy=false ! capsfilter caps="$CAPS" ! dmaiaccel ! dmaienc_mjpeg ! dmaiperf print-arm-load=true ! fakesink

Notice the ARM load is low and the frame rate dropped. That means we can let the ARM work on more than one part of the pipeline at the same time. This is done using the GStreamer queue element:

gst-launch -e v4l2src queue-size=5 num-buffers=100 always-copy=false ! capsfilter caps="$CAPS" ! queue ! dmaiaccel ! dmaienc_mjpeg ! dmaiperf print-arm-load=true ! fakesink

Frame rate (fps) | ARM CPU load | Simplified pipeline (real pipelines shown above)
-----------------|--------------|----------------------------------------------------------------------
5.5              | 100%         | v4l2src -> fakesink
15.5             | 8%           | v4l2src -> dmaiaccel -> fakesink
30               | 8%           | v4l2src queue-size=5 -> dmaiaccel -> fakesink
21.5             | 8%           | v4l2src queue-size=5 -> dmaiaccel -> dmaienc_mjpeg -> fakesink
21.5             | 8%           | v4l2src queue-size=5 -> queue -> dmaiaccel -> dmaienc_mjpeg -> fakesink

SDK Configuration

1. Select the MT9P031 driver to support your camera module

      Prompt: mt9p031 support
      Location:                                                                                                 
       -> Kernel configuration                                                                                    
         -> Device Drivers                                                                                        
           -> Multimedia support (MEDIA_SUPPORT [=y])                                                             
             -> Video capture adapters (VIDEO_CAPTURE_DRIVERS [=y])

2. Configure your SDK to be able to capture at 1080p, as shown below:

     Architecture configurations
           Video Output (Component)  --->                                                   
           Component Standard (1080I-30)  --->                                          
           Maximum Video Output Buffer Size (720P)  --->                            
           Maximum Video Input Buffer Size (1080P)  --->

3. Increase the reserved memory and the CMEM space to allocate the 1080p buffers.

    Proprietary Software
        [*] Reserve memory from the kernel for codec engine and friends automatically          
        (0x2bf7000) Amount of reserved memory from the kernel                     
        [*] Automatically setup CMEM at the beginning of the reserved memory area     
        (0x17f7000) Amount of reserved memory for cmemk 

4. If your target format is H264 and you are using an EVAL version of the SDK, you will need to make the following modification to your MT9P031 driver:

Index: kernel/linux-2.6.32.17-psp03.01.01.39/drivers/media/video/mt9p031.c
===================================================================
--- kernel.orig/linux-2.6.32.17-psp03.01.01.39/drivers/media/video/mt9p031.c	2012-08-10 09:44:39.763840232 -0700
+++ kernel/linux-2.6.32.17-psp03.01.01.39/drivers/media/video/mt9p031.c	2012-08-10 09:45:20.195126497 -0700
@@ -204,7 +204,7 @@
     },
     {
     /* 1080P-HDTV */
-    .framesize = { 1920, 1080},
+    .framesize = { 1920, 1088},
     .frameintervals = {
         {  .numerator = 1, .denominator = 31 },
         },

This modification allows using 1080p with the H264 encoder (see the macroblock note in section 1).

5. Rebuild your SDK and install it on your board.

4. RidgeRun RTSP Server: rr-rtsp-server demo setup for 1080p streaming and GStreamer pipelines

Software needed

You need RidgeRun's DM368 SDK with TI DVSDK 4.02.00.06, which includes the H.264 Baseline/Main/High Profile Encoder for DM365/DM368 with support for resolutions up to 4096x4096.

In order to use the rr-rtsp-server demo, you need to properly configure the SDK and add the GStreamer pipeline used by rr-rtsp-server.

720P Video Only

In order to use the rr-rtsp-server demo, run 'make config' and select:

-> User Applications  -> [*] RTSP Server example

-> Architecture configurations
   -> Component Standard (720P-60)
   -> Maximum Video Output Buffer Size
      -> value: 720P
   -> Maximum Video Input Buffer Size
      -> value: 720P
-> Proprietary configurations
   (0x3b00000) Amount of reserved memory from the kernel
   (0x3200000) Amount of reserved memory for cmemk

Add the GStreamer pipeline shown below to the rr-rtsp-server.init file under $DEVDIR/myapps/rr-rtsp-server:

/usr/bin/rr_rtsp_server  "( v4l2src always-copy=false chain-ipipe=false ! video/x-raw-yuv, format=(fourcc)NV12, width=1280, height=720 !  dmaiaccel ! queue ! dmaienc_h264 encodingpreset=2 ratecontrol=4 ! queue ! rtph264pay pt=96 name=pay0 )" &
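
To view the stream on the host, point a player at the server. The port and mount point below are the stock gst-rtsp-server defaults, which we assume rr-rtsp-server follows; confirm them against your rr-rtsp-server.init script:

vlc rtsp://$LEO_IP_ADDR:8554/test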

720P Video + Audio

Use the same SDK configuration as for 720P video only.

Add the GStreamer pipeline shown below to the rr-rtsp-server.init file under $DEVDIR/myapps/rr-rtsp-server:

/usr/bin/rr_rtsp_server  "( v4l2src always-copy=false chain-ipipe=false ! video/x-raw-yuv, format=(fourcc)NV12, width=1280, height=720 ! dmaiaccel ! queue ! dmaienc_h264 encodingpreset=2 ratecontrol=4 ! queue ! rtph264pay pt=96 name=pay0 alsasrc buffer-time=800000  latency-time=30000 ! audio/x-raw-int, rate=22050 ! queue ! dmaienc_aac outputBufferSize=131072 bitrate=128000 ! queue ! rtpmp4gpay name=pay1 )" &

1080P Video Only

In order to use the rr-rtsp-server demo, run 'make config' and select:

-> User Applications  -> [*] RTSP Server example
-> Architecture configurations
   -> Component Standard (720P-60)
   -> Maximum Video Output Buffer Size
      -> value: 720P
   -> Maximum Video Input Buffer Size
      -> value: 1080P
-> Proprietary configurations
   (0x3200000) Amount of reserved memory from the kernel
   (0x1E00000) Amount of reserved memory for cmemk

Add the GStreamer pipeline shown below to the rr-rtsp-server.init file under $DEVDIR/myapps/rr-rtsp-server:

/usr/bin/rr_rtsp_server "( v4l2src always-copy=false chain-ipipe=false ! video/x-raw-yuv,format=(fourcc)NV12, width=1920, height=1088 ! dmaiaccel ! queue ! dmaienc_h264 encodingpreset=2 ratecontrol=4 ! queue ! rtph264pay pt=96 name=pay0 )" &

Viewing streaming video using a web browser

To test the above pipelines from a web browser (IE or Chrome), an embedded web server (lighttpd) should be configured on the LeopardBoard side, and the RTSP video streaming should be invoked from web pages served by it. (This requires an application framework to be developed.)
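
As a starting point, a minimal sketch of a lighttpd configuration serving a static page from the board (the paths are illustrative and your SDK's lighttpd defaults may differ):

# /etc/lighttpd.conf (minimal, illustrative)
server.document-root = "/www/pages"
server.port          = 80
index-file.names     = ( "index.html" )
mimetype.assign      = ( ".html" => "text/html" )

# A page under /www/pages could then link to rtsp://<board-ip>/stream
# for players that register the rtsp:// URL scheme.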