Why RidgeRun Loves GStreamer

Introduction

In a recent customer call, we created a simplified diagram of the GStreamer pipeline RidgeRun implemented for the customer. When I looked at the diagram, I couldn't imagine how the customer could have gotten similar functionality without using GStreamer. It would have been a nightmare to support so much functionality written from scratch or built on strange, SoC-specific video APIs.

This got me thinking about why RidgeRun loves GStreamer, and I thought I would share my thoughts with you.

Real customer pipeline

Here is the GStreamer pipeline being used by a customer. The only difference is that I also added RTSP streaming out of the device (the bottom-left GStreamer bin).

Customer GStreamer pipeline

A mux is used to select the video source, since the SoC the customer is using can only handle one video source at a time. (Some chips, like the i.MX6 and DM81xx, can handle several video streams at once.) There are several sources of video: the network, a file, a built-in camera, and an external device sending transport stream over UDP.
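
In GStreamer terms, that mux is typically the stock input-selector element. Here is a minimal sketch of the idea in GStreamer 1.x gst-launch syntax; the element chain, device paths, and caps are illustrative, not the customer's actual pipeline:

  # Three sources feeding one selector; only the active pad flows downstream
  gst-launch-1.0 input-selector name=sel ! videoconvert ! autovideosink \
      v4l2src device=/dev/video0 ! sel. \
      filesrc location=clip.ts ! tsdemux ! h264parse ! avdec_h264 ! sel. \
      udpsrc port=5000 caps="video/mpegts,systemstream=true,packetsize=188" ! \
          tsdemux ! h264parse ! avdec_h264 ! sel.

At runtime, the active source is switched by setting the selector's active-pad property.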

There are several destinations for the streaming video, all of which can be active at the same time. The video can be displayed on an LCD on the customer's hardware, saved as high resolution snapshots and/or video to a storage device on the customer's product, or streamed over the network using different protocols (and different resolutions if needed).
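
A single tee element fans the video out to all of those destinations at once. A trimmed-down sketch (element names are generic stand-ins, not the customer's exact pipeline):

  # -e forwards EOS on Ctrl-C so mp4mux can finalize the file
  gst-launch-1.0 -e v4l2src ! tee name=t \
      t. ! queue ! videoconvert ! autovideosink \
      t. ! queue ! x264enc ! mp4mux ! filesink location=/media/sd/video.mp4 \
      t. ! queue ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=192.168.0.10 port=5000

The queue after each tee branch gives that branch its own thread, so one slow sink doesn't stall the others.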

Separating control application from video streaming application

You can think of the above GStreamer pipeline as being implemented in a separate application, which we call the Streaming Media Server (sms). The customer's application simply drives sms with simple commands: start streaming, select the mux input, take a snapshot, and so on. There is no confusion or complexity from mixing the customer's proprietary application with the Streaming Media Server.

Another advantage of using a Streaming Media Server is testing. Since the Streaming Media Server is designed to be controlled by an external process, it is very easy to create automated acceptance tests and automated endurance tests.

You can think of the Streaming Media Server as providing a remote API (using one of the interprocess communication mechanisms, such as D-Bus or even a simple TCP port). Often RidgeRun will provide a small libsms library that is linked with the customer's application. libsms provides a real API to the customer's application and hides all the logic of how the interprocess communication call is made.
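
As a sketch of how thin that remote API can be, imagine sms listening on a local TCP port for one-line text commands. The port number and command names below are hypothetical, purely to show the shape of the interface:

  # Hypothetical one-line text protocol to an sms listening on TCP port 5555
  echo "video_input_select camera"        | nc localhost 5555
  echo "stream_start"                     | nc localhost 5555
  echo "snapshot /media/sd/snap001.jpg"   | nc localhost 5555

libsms would wrap each of these in an ordinary function call, so the customer's application never sees the socket.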

For the GStreamer pipeline above, you can guess some of the key remote APIs:

  • Start / stop stream
  • Video input select
  • Source video filename
  • Take a snapshot (providing a filename if desired)
  • Save video to file (again providing a filename if desired)
  • Set source RTSP video IP address

There can be many more, relating to setting frame rate, resolution, etc. This list is just to give you a feel for how the customer's application interacts with the Streaming Media Server.

If you want to see a real Streaming Media Server example, complete with source code, check out GStreamer Daemon (which is the Streaming Media Server) and gst-client (a command line program that, in this discussion, plays the role of either the customer's application or the automated acceptance test / endurance test application).
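
For a taste of what that looks like, here is roughly how a GStreamer Daemon session is driven from gst-client. The command names follow current GStreamer Daemon releases; treat the exact syntax as illustrative:

  # Create, run, and tear down a named pipeline from the command line
  gst-client pipeline_create testpipe videotestsrc ! autovideosink
  gst-client pipeline_play testpipe
  gst-client pipeline_stop testpipe
  gst-client pipeline_delete testpipe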

Getting the hardest part working first

One audio streaming product I worked on years ago used a proprietary streaming media framework linked directly into our custom application. Late in the development cycle we uncovered some key technical issues, because we couldn't test the audio until most of the custom application was written and debugged. The lesson I learned was to use a software design approach that lets you solve the hardest problem first, so you know how much trouble you are in as early as possible.

Generally RidgeRun starts by developing all the key GStreamer pipelines the customer will be using. Using the diagram above, for example, we would create a handful of pipelines, without the mux or tee, to verify we can perform all the key capture, decode, encode, and output steps needed. Then we would make the pipeline more complex (say, one video input, with a tee and all video outputs). Finally we would get the entire pipeline in place. Often this can be done without writing any code - we just use gst-launch and GStreamer Daemon.
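
The progression might look like this (element names are generic placeholders, not the customer's actual elements):

  # 1. Prove out capture and display alone
  gst-launch-1.0 v4l2src ! videoconvert ! autovideosink
  # 2. Prove out capture, encode, and file save
  gst-launch-1.0 -e v4l2src ! x264enc ! mp4mux ! filesink location=test.mp4
  # 3. Only then combine branches with a tee
  gst-launch-1.0 -e v4l2src ! tee name=t \
      t. ! queue ! videoconvert ! autovideosink \
      t. ! queue ! x264enc ! mp4mux ! filesink location=test.mp4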

System debugging

If you have the customer's control logic and the streaming media framework all in one application, debugging that process can be challenging. Instead, if you have a separate Streaming Media Server, you can validate and debug sms independently of the control application. There are many, many tools and techniques for debugging GStreamer applications.
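
The first tools we reach for are GStreamer's built-in debug facilities, which work on the Streaming Media Server without touching the control application (the videotestsrc pipeline here is just a stand-in):

  # Raise the log level globally, or per debug category
  GST_DEBUG=3 gst-launch-1.0 videotestsrc ! autovideosink
  GST_DEBUG=queue_dataflow:5,*:2 gst-launch-1.0 videotestsrc ! autovideosink
  # Dump .dot graphs of the pipeline at each state change for inspection
  GST_DEBUG_DUMP_DOT_DIR=/tmp gst-launch-1.0 videotestsrc ! autovideosink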

System tuning

Once the Streaming Media Server is working, we verify it works properly under load and tune the GStreamer pipeline. We often uncover three different classes of problems:

  • dropped audio,
  • slow video framerate and
  • excessive latency.

A common example, though not for this customer since the device doesn't support audio, is audio dropping under heavy customer application load. This is often caused by the processor not giving timely attention to the audio portion of the GStreamer pipeline. Basically, we need to increase the thread priority for the audio handling. We can use a GStreamer element that allows us to set the audio thread priority, resolving the contention for the processor.
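
The exact priority-setting element is product specific. As a rough stand-in, the same effect can be approximated at the process level; this sketch is purely illustrative and raises the priority of the whole streaming process rather than just the audio thread:

  # Run the pipeline with real-time round-robin scheduling (needs
  # CAP_SYS_NICE/root); the queue gives audio its own streaming thread
  chrt -r 50 gst-launch-1.0 alsasrc ! queue ! audioconvert ! alsasink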

Slow video framerate is almost always caused by video frames being dropped. For example, if you look at the differences in timestamps for the video frames, you might see 33 ms, 33 ms, 33 ms, 66 ms, 33 ms, 33 ms, 33 ms, 66 ms, and so on. Instead of the 30 fps you expect, maybe you get 24 fps. It is often assumed that if the framerate is reported as 24 fps, the difference in timestamps is similar for all frames and is simply too large. This is seldom the case. Most often, a video frame is ready, but there is no buffer available, so the frame is discarded. By identifying where the frame is being dropped, and adjusting the number of buffers circulating in that part of the pipeline, the slow framerate issue can be resolved.
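
Two gst-launch tricks help here: identity makes per-buffer timestamps visible, so the 33/66 ms gaps above show up directly, and queue properties let you experiment with buffer counts in a starved branch (the property values are illustrative):

  # -v prints identity's last-message, including each buffer's timestamp
  gst-launch-1.0 -v v4l2src ! identity silent=false ! fakesink
  # If frames are dropped for lack of buffers, widen the starved segment
  gst-launch-1.0 v4l2src ! queue max-size-buffers=8 ! videoconvert ! autovideosink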

Latency is often caused by buffering performed by the network streaming part of the system; even measuring latency can be tricky. To improve jitter handling, the video stream is buffered, but for any live video this buffering adds unacceptable latency. Improving latency often involves identifying where the excess buffering is occurring and reducing the number of buffers being used. Latency can also be affected by the I-frame rate, as I-frames are larger and affect the buffer usage on the receiver's end.
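
On the receiving side, much of this buffering is concentrated in the RTP jitter buffer, so that is usually the first knob to turn. The values below are illustrative:

  # Default jitter buffer depth is 200 ms; shrink it for live video
  gst-launch-1.0 udpsrc port=5000 \
      caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! \
      rtpjitterbuffer latency=50 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink
  # On the sender, a shorter I-frame interval keeps receiver buffers smaller
  gst-launch-1.0 v4l2src ! videoconvert ! x264enc tune=zerolatency key-int-max=30 ! \
      rtph264pay ! udpsink host=192.168.0.10 port=5000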

Adding functionality

Once a customer gets their product to market and their customers start providing feedback, it is common for RidgeRun to be asked to add audio / video functionality.

Often these requests are satisfied by adding an element to the GStreamer pipeline. RidgeRun has developed custom GStreamer elements, some of which use the SoC's hardware accelerators, to add the requested functionality.

SoC independence

We have a long time customer who has used around four different SoCs over the years as their product line grew. This customer uses one git repository, containing the RidgeRun professional SDK, plus arch support for the different SoCs and mach support for the different hardware designs. The user configures the RidgeRun SDK for the SoC/hardware they are working on and builds the code image. The customer's application can be used across their different SoCs/hardware. This is possible because the streaming media framework is GStreamer: they use different GStreamer pipelines depending on the SoC/hardware, but the overall control application stays the same. They can use any SoC that has GStreamer support.
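
In practice, only the pipeline description changes per SoC. A hypothetical sketch of per-SoC pipeline selection, with element names that are illustrative rather than exact:

  # Only the encoder element changes; the control application is untouched
  case "$SOC" in
    imx6)   ENC="imxvpuenc_h264" ;;   # element names are illustrative
    dm81xx) ENC="omx_h264enc" ;;
    *)      ENC="x264enc" ;;          # software fallback
  esac
  gst-launch-1.0 v4l2src ! $ENC ! rtph264pay ! udpsink host=$HOST port=5000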

Contrast this with another RidgeRun customer that chose a SoC that didn't support GStreamer. I discussed having RidgeRun develop a hardware accelerated GStreamer plug-in for the SoC, but this was felt not to be viable. Eighteen months later, the customer has their product working pretty well, but has uncovered some issues. They are finding the SoC vendor's support so expensive and unhelpful that they are looking to switch to another SoC. All of the custom application logic that used the SoC vendor's strange proprietary streaming media API will have to be rewritten. If that SoC had supported GStreamer, they would just need to modify the GStreamer pipelines for the SoC they are moving to. The good news for this customer is that the SoC they are looking at does indeed support GStreamer.

Additional information

The RidgeRun developer wiki contains many GStreamer articles and example pipelines for various SoCs.