GstShark

From RidgeRun Developer Connection

Introduction

GstShark is a benchmarking and profiling tool for GStreamer pipelines.

Available Traces

Currently, the GstShark plug-in contains nine different tracers. They were selected as the initial set of tracers based on RidgeRun's previous experience fine-tuning GStreamer pipelines, and are known to provide very valuable debugging/profiling information, data which may otherwise only be available by hacking the source code of the core and the elements involved. GstShark, however, is a very active project, and new tracers are continuously developed and released.

The following list provides a summary of the current tracers. Later, in the User Guide section, they are described in detail.

InterLatency
The inter latency tracer measures the latency at different points in the pipeline. Latency can be defined as the time a buffer takes to travel from its origin to the measured point. The pipeline's total latency is the latency reported by the sink (most downstream) element. If multiple sinks exist, the pipeline latency should be interpreted as the worst (highest) measurement reported. The in-between points reflect the contribution of each specific element to the overall latency.
ProcTime
The processing time tracer measures the time an element takes to produce an output for a given input. Note that this only makes sense for filter-like elements (elements with both input and output pads). The tracer supports the trivial 1:1 case, in which an input buffer produces an output buffer, as well as the more complex N:1 and 1:N cases, in which N input buffers combine to produce one output buffer or a single input buffer produces N output buffers, respectively. The tracer, however, only reports times for elements that process inputs and outputs in the same thread, since otherwise there is no way to determine the in/out correspondence without knowing the element's internal details.
Framerate
The framerate tracer measures the number of frames that go through a src pad every second. In a single-threaded pipeline these measurements will match at every point. In a multi-threaded pipeline, however, the framerate values may differ at different points in the pipeline. Typical scenarios in which pipelines are multi-threaded include: multiple source elements, queues, and elements that process asynchronously internally.
ScheduleTime
The schedule time tracer measures the time between two consecutive buffers arriving at a sink pad. Although related to processing time, this value provides different information. The tracer is especially useful for analyzing pipelines with live sources such as cameras or microphones, where frame periods and jitter are of significant importance. Buffer drops, inter-clock differences, and kernel-scheduling problems are some of the issues that can be detected using this tracer.
CPUUsage
The CPU Usage tracer measures CPU consumption every second. On multiprocessor systems, these measurements are presented per core. It is important to note that this measurement corresponds to the overall processor usage by all processes currently running on the system, not just the pipeline's process. Currently, this tracer reads its data from /proc/stat, hence it is only available on GNU/Linux-based systems.
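As a quick sanity check of the data source the CPU usage tracer relies on, the first lines of /proc/stat can be inspected directly (Linux only): the aggregate "cpu" line comes first, followed by one "cpuN" line per core.

```shell
# /proc/stat: the first "cpu" line aggregates all cores; the "cpu0",
# "cpu1", ... lines break the same jiffy counters down per core.
head -n 3 /proc/stat
```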
Graphic
The Graphic tracer records a graphical representation of the current pipeline. This representation is equivalent to GStreamer's dot bin description, but conveniently integrated into GstShark. On X11-enabled systems, if the graphic tracer is enabled, a standalone window will pop up displaying the pipeline. Additionally, the dot file will be rendered into the trace file, which can be viewed later using the Eclipse front-end.
Bitrate
The Bitrate tracer measures the current stream bitrate in bits per second. The bitrate measurement is printed regularly, once per second, for every source pad in the pipeline. This data is especially useful at the output of encoders.
Queue Level
The Queue Level tracer measures the amount of data queued in every queue element in the pipeline. This data is especially useful when debugging latency and performance bottlenecks. A new trace is printed every time a buffer enters a queue.
Identity
The Identity tracer prints information about every buffer that passes through every sink pad in the pipeline. This information includes the PTS and DTS, duration, size, flags, and even the refcount. This tracer is useful for verifying timestamping and general pipeline scheduling problems.

Getting the Software

RidgeRun is releasing GstShark as an open source project for public availability.

You can download the source from

git@github.com:RidgeRun/gst-shark.git

OR

https://github.com/RidgeRun/gst-shark/

Installation Guide

Dependencies

GStreamer 1.7.1

GstShark leverages GStreamer's tracing subsystem, which was added in version 1.7.1 of the project. Newer versions of Ubuntu should include gstreamer-1.7.1 (or newer) in the official package repositories. If that is not your case, refer to the [following guide] to manually install the latest GStreamer version alongside the system one.

To check your current GStreamer version run the following command:

~$ gst-launch-1.0 --version
gst-launch-1.0 version 1.7.1
GStreamer 1.7.1
Unknown package origin

If you do not have GStreamer installed at all, run the following commands to check the version available in your Ubuntu distro:

~$ apt-cache showpkg libgstreamer1.0-0
Package: libgstreamer1.0-0
Versions:
1.6.3-1ubuntu1 (/var/lib/apt/lists/cr.archive.ubuntu.com_ubuntu_dists_wily-proposed_main_binary-amd64_Packages) 
...

If the version listed is 1.7.1 or newer, install it by running:

~$ sudo apt-get install libgstreamer1.0-dev

Eclipse Mars.2

As a front-end, GstShark uses Eclipse. We recommend using the Mars.2 (4.5.2) version or above. The latest version of Eclipse can be downloaded from:

https://eclipse.org/downloads/

The necessary Eclipse plug-ins will be automatically installed by the GstShark plug-in.

Building GstShark Tracers

The GstShark project is packaged as a typical GNU open source project. To build it, follow these steps (they may vary according to your OS version):

./autogen.sh --libdir=/usr/lib/x86_64-linux-gnu/
make
sudo make install

Installing Eclipse PlugIn

The Eclipse plug-in may be installed in two ways:

  • RidgeRun's Eclipse repository: contains the latest stable version of the plug-in. It may not include all the newest features, but it is extensively tested.
  • Local project: may be an experimental version, but contains the latest features developed so far.

Installation from RidgeRun's Eclipse Repository

1. Open Eclipse

Eclipse1.png

2. Go to: Help -> Install New Software...

Eclipse2.png

3. In "Work with:", enter the RidgeRun Gst-Shark Eclipse site: "http://downloads.ridgerun.com/packages/eclipse-plugins/gst-shark/"

Eclipse3.png

4. The RidgeRun Eclipse plug-in should load after you enter the site. Select "GstsharkFeature" and click "Next"

Eclipse4.png

5. On the Install Details page, click "Next"

Eclipse5.png

6. If you agree, accept the License Agreement and click "Finish"

Eclipse6.png

7. A security warning may appear; click "OK" if you want to continue with the installation.

Eclipse7.png

8. Eclipse will prompt you to restart; click "Yes"

Eclipse8.png

9. When Eclipse restarts, go to the Workbench

Eclipse9.png

10. The "GstShark" menu should now be present.

Eclipse10.png

Installation from Local Project

1. Open Eclipse

Eclipse1.png

2. Go to: Help -> Install New Software...

Local1.png

3. In "Work with:", enter the path where you have the GstShark project, appending eclipse/GstSharkSite. E.g.: "file:///home/mgruner/Release/gst-shark/eclipse/GstSharkSite/"

Local2.png

4. Select GstSharkFeature from the drop-down list and click "Next"

Local3.png

5. Continue from step 6 of the Installation from RidgeRun's Eclipse Repository section

User Guide

GstShark was implemented with the purpose of being a helpful tool for profiling and debugging the performance of GStreamer pipelines. Over RidgeRun's years of experience developing multimedia applications with the GStreamer engine, countless issues have been faced and fixed; yet regardless of an issue's difficulty, finding a solution was always a burdensome process, which made the need for an application of this nature clear.

In order to find improvements to a given pipeline, there are several aspects to take into consideration depending on the design of the project. Sometimes a precise measurement of the pipeline's CPU load is needed, or the delay between one element and another, or an estimate of the synchronization between the source and the end of the pipeline, among the huge number of possible measurements a user may be interested in for a GStreamer pipeline. Besides that, there is the debugging process itself, since there is always the possibility of encountering an error or unwanted behavior: finding the cause of an error in a GStreamer pipeline, measuring the number of frames dropped, or determining the cause of incorrectly set timestamps is also very difficult and troublesome. GstShark is an alternative that simplifies all of these processes by giving the user accurate measurements without a heavy and confusing procedure, thereby reducing the time needed to fix a given issue, or providing enough information to devise a design improvement without so much effort.

GstShark can be divided into two main parts: the tracers, integrated into GStreamer so they can be enabled on any pipeline at launch time, and the Eclipse plug-in, in which the data obtained from the tracers can be processed and analyzed to obtain more understandable and user-friendly results. The following sections describe each part in detail, introducing the user to each of its components and how to use them to get better results.


GStreamer Tracers

Before explaining in detail every tracer available in GstShark and how to use each one, it is important to explain how to enable the debug output of GstShark once it is installed, how to select which tracers will be used for profiling/debugging the current pipeline, and several capabilities of GstShark that can be helpful when profiling/debugging the performance of a given pipeline, including the environment variables for enabling/disabling some of its features.


Enable GstShark on the Debugging Log

One of the possible outputs for GstShark's measurements is the GStreamer debugging log. Since GstShark is built and installed as a GStreamer plug-in, it can use the debugging log to print the results of every tracer on the terminal while the pipeline is still running, displaying the results instantaneously. No step is needed on the GstShark side to enable this option; it is only a matter of enabling the GStreamer debugging log at the corresponding level.

The GStreamer debugging levels are the following ones:

  • 0 (none): No debug information is output.
  • 1 (ERROR): Logs all fatal errors. These are errors that do not allow the core or elements to perform the requested action. The application can still recover if programmed to handle the conditions that triggered the error.
  • 2 (WARNING): Logs all warnings. Typically these are non-fatal, but user-visible, problems that are expected to happen.
  • 3 (FIXME): Logs all fixme messages. Fixme messages indicate that something in the executed code path is not fully implemented or handled yet. Their purpose is to make it easier to spot incomplete/unfinished pieces of code when reading the debug log.
  • 4 (INFO): Logs all informational messages. These are typically used for events in the system that only happen once, or are important and rare enough to be logged at this level.
  • 5 (DEBUG): Logs all debug messages. These are general debug messages for events that happen only a limited number of times during an object's lifetime; these include setup, teardown, change of parameters, ...
  • 6 (LOG): Logs all log messages. These are messages for events that happen repeatedly during an object's lifetime; these include streaming and steady-state conditions.
  • 7 (TRACE): Logs all trace messages. These are messages for events that happen repeatedly during an object's lifetime, such as ref/unref cycles.
  • 9 (MEMDUMP): Logs all memory dump messages. Memory dump messages are used to log (small) chunks of data as memory dumps in the log. They are displayed as hexdumps with ASCII characters.


Since GstShark is a tool for profiling/debugging the performance of a pipeline, it uses debug level 7 (TRACE) to print the results of every enabled tracer to the debug output. However, since this level includes all lower debug levels, the log can contain a lot of noise, making it very difficult to read and to find the desired tracer information. It is therefore strongly recommended to specify the debug level by category name; for GstShark, the relevant category is "GST_TRACER".

The following pipeline is an example of how to enable the GstShark tracer's output on the debugging log of GStreamer:

  • GStreamer 1.7.1
GST_DEBUG="GST_TRACER:7" GST_TRACER_PLUGINS="cpuusage" gst-launch-1.0 videotestsrc ! videorate max-rate=15 ! fakesink
  • GStreamer 1.8.1
GST_DEBUG="GST_TRACER:7" GST_TRACERS="cpuusage" gst-launch-1.0 videotestsrc ! videorate max-rate=15 ! fakesink


Select Tracer

GstShark provides several tracers for the user to choose from, depending on their needs. Even though it is possible to run all the tracers at the same time to obtain a complete profile of a pipeline, often only a specific kind of information is of interest, which may be provided by only one, or a few, of the tracers; the others would only add noise to the debugging log. For this reason, and to take advantage of GStreamer's tracer logic, the user can select the tracers to be run with the pipeline by listing them (separated by the ";" character) in the "GST_TRACER_PLUGINS" or "GST_TRACERS" option, depending on the GStreamer version, when launching the pipeline, in a way very similar to the "GST_DEBUG" option.

The following pipeline is an example of how to select various tracers for analyzing a given pipeline and obtaining the results on the debugging log of GStreamer:

  • GStreamer 1.7.1
GST_DEBUG="GST_TRACER:7" GST_TRACER_PLUGINS="cpuusage;proctime;framerate" gst-launch-1.0 videotestsrc ! videorate max-rate=15 ! fakesink
  • GStreamer 1.8.1
GST_DEBUG="GST_TRACER:7" GST_TRACERS="cpuusage;proctime;framerate" gst-launch-1.0 videotestsrc ! videorate max-rate=15 ! fakesink


Define Output Location

GstShark can generate several output files in which the log with all the results of every enabled tracer is stored, together with the graph of the executed pipeline (if the respective tracer was set). These files can later be used in Eclipse to obtain information about the performance of the pipeline in a more user-friendly way (explained later in the Eclipse Plugin section). The location of the output files can be set by the user through the "GST_SHARK_LOCATION" environment variable; when this option is set, the tool will place the output files in the corresponding folder, no matter which path the pipeline is run from.

It is important to mention that if the environment variable is not set when the pipeline is run, the default output is a new folder in the current path named "gstshark_YY-MM-DD_hh:mm:ss", where YY-MM-DD_hh:mm:ss corresponds to the timestamp at which the pipeline started running.
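As a sketch, the default folder name can be reproduced with date, assuming the gstshark_YY-MM-DD_hh:mm:ss format described above:

```shell
# Build the directory name GstShark would use by default when
# GST_SHARK_LOCATION is unset (format assumed from the documentation).
DIR="gstshark_$(date +%y-%m-%d_%H:%M:%S)"
echo "$DIR"
```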

The following command is an example of how to define the environment variable for setting the desired location of the GstShark output files:

export GST_SHARK_LOCATION=/tmp/

If the user wants to use the default location of the GstShark output files, it is only necessary to unset the environment variable with the following command:

unset GST_SHARK_LOCATION


Enable/Disable Output Files

One of the main features of GstShark is its ability to generate several output files which, as mentioned previously, can be used together with the Eclipse plug-in (explained later in the Eclipse Plugin section) to analyze the performance of the pipeline offline at any time, saving the user from having to run the pipeline again to get the same results, since they are already stored at a defined location. However, there may be scenarios in which no output files are needed or wanted; for this reason, GstShark can disable the generation of these files if the environment variable "GST_SHARK_CTF_DISABLE" is set before running the pipeline. The content of the variable does not affect the result, since the tool only checks that it is set, not its content; nevertheless, it is recommended by convention to set it to "TRUE" or "1".

The following command is an example of how to define the environment variable that will disable the generation of the output files on GstShark:

export GST_SHARK_CTF_DISABLE=TRUE

Note: If GST_SHARK_CTF_DISABLE is defined, with any content, it disables the generation of the output files, but not the saving of the DOT file if the graphic tracer is set.


If the user wants to enable the generation of the GstShark output files again, it is only necessary to unset the environment variable with the following command:

unset GST_SHARK_CTF_DISABLE
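The set-versus-content semantics described above can be sketched with plain shell tests; the ${VAR+x} expansion is non-empty whenever the variable is set, regardless of its value:

```shell
# GstShark only checks whether GST_SHARK_CTF_DISABLE exists,
# not what it contains (behavior as described in the documentation).
export GST_SHARK_CTF_DISABLE=TRUE
if [ -n "${GST_SHARK_CTF_DISABLE+x}" ]; then
  echo "output files disabled"
fi

unset GST_SHARK_CTF_DISABLE
if [ -z "${GST_SHARK_CTF_DISABLE+x}" ]; then
  echo "output files enabled"
fi
```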


InterLatency

As mentioned previously, the inter latency tracer measures the time a buffer takes to travel from one point to another inside the pipeline. The pipeline's total latency is what is known as the latency of the pipeline, and corresponds to the time a buffer needs to go from the source to the (most downstream) sink element. The inter latency tracer makes it possible to know which part of the pipeline is adding the most to the overall latency, by displaying the exact time a buffer took to reach each element in the pipeline.

It is important to mention that the inter latency tracer has the limitation that it only provides accurate data when the elements in the pipeline belong to the following categories: source elements, sink elements, or filter elements. Furthermore, since the operation of the tracer is based on tracking buffers through the pipeline, filter-like elements that receive one buffer and generate several buffers (as some decoders do), or vice versa (as some encoders do), are not supported; even though the tracer will not crash, the results provided will not be accurate.

The following example pipeline is composed of 4 elements, numbered from left to right. Element number 1 (videotestsrc) is the source element and the origin of the buffers; this point is the reference and is used as the zero for measuring the time elapsed from when a buffer leaves the source element until it arrives at the source pad of each subsequent element or, in the case of the sink element, at its sink pad. Every element is connected from its own source pad to the sink pad of the next element, as shown in the following graph. Along with the pipeline and its graph, the output log of the inter latency tracer is given below. For this scenario it displays 3 measurements: the time elapsed from the source pad of element 1 to the source pad of element 2, from the source pad of element 1 to the source pad of element 3, and from the source pad of element 1 to the sink pad of element 4; each measurement is reported for every buffer the source element generates.

Graph

Test interlatency.png

Pipeline

GST_DEBUG="GST_TRACER:7" GST_TRACERS="interlatency" gst-launch-1.0 videotestsrc ! queue ! videorate max-rate=15 ! fakesink sync=true

Output

0:00:00.026396503 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.000016902;
0:00:00.026888230 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.000016959;
0:00:00.026911857 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.000041987;
0:00:00.027252412 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.000381212;
0:00:00.027444179 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.000020869;
0:00:00.027469893 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.000048479;
0:00:00.093939023 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.066506945;
0:00:00.094008623 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.066091302;
0:00:00.094042642 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.066125918;
0:00:00.160586394 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.132659225;
0:00:00.160640717 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.132255290;
0:00:00.160661441 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.132276967;
0:00:00.227258610 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.198863362;
0:00:00.227318125 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.198406393;
0:00:00.227338294 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.198427410;
0:00:00.293916887 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.264994676;
0:00:00.293987937 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.264561313;
0:00:00.294007890 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.264591712;
0:00:00.360703568 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.331259826;
0:00:00.360895010 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.330989798;
0:00:00.361004197 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.331103260;
0:00:00.427356727 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.397441166;
0:00:00.427533841 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.397175319;
0:00:00.427610784 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.397256833;
0:00:00.493957625 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.463584364;
0:00:00.494125217 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.463312392;
0:00:00.494200025 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.463389796;
0:00:00.560660203 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.529831917;
0:00:00.560819125 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)queue0_src, time=(string)0:00:00.529558354;
0:00:00.560891808 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)videorate0_src, time=(string)0:00:00.529633703;
0:00:00.627362691 12413 0x18c8f20 TRACE GST_TRACER :0:: interlatency, from_pad=(string)videotestsrc0_src, to_pad=(string)fakesink0_sink, time=(string)0:00:00.596084955;

As mentioned before, the inter latency tracer provides the time a buffer took to get from the source pad of the source element to the source pad of each subsequent element. As shown in the output log, every time displayed has nanosecond precision and, as expected, the highest value corresponds to the time needed for the buffer to travel from the source element to the sink element at the end of the pipeline.

ProcTime

The processing time tracer was developed with the purpose of providing useful information about the time each element of a given pipeline takes to process each data buffer that goes through it, letting the user know how each element affects the performance of the pipeline. This tracer is one of the tracers oriented toward helping the user determine which element or process is increasing the overall latency of the pipeline; in other words, it is a tool to check which element is taking too long to complete its tasks, causing slow pipeline performance that can delay buffers and show up, for example, as incorrect synchronization between audio and video buffers.

The processing time tracer is designed to provide accurate data for the filter and filter-like elements of the pipeline. Since the concept of the tracer is to measure the time an element takes to generate an output buffer after receiving a buffer on its sink pad, source and sink elements are discarded, because the tracer requires the element to have both input and output pads. As mentioned previously, the tracer supports filter elements, in which an input buffer produces an output buffer (such as a video converter), and filter-like elements: more complex elements in which several input buffers combine to produce an output buffer (such as an encoder or a muxer), or a single input buffer produces several output buffers (such as a demuxer or a decoder), regardless of the number of source and sink pads the element has. However, the tracer has the limitation that it can only report times for elements that process inputs and outputs in the same thread, since the idea of the tracer is to display a general measurement of the time spent processing a buffer without knowing the operation of the element in detail.

Below is a pipeline used as an example to demonstrate how to use the processing time tracer and the results obtained with it. The pipeline consists of 9 elements, numbered from left to right, connected as shown in the following graph. Some elements in the pipeline have a delay set, with the purpose of increasing the time that element needs to finish processing each buffer; this forced delay should be reflected in the per-element measurements in the tracer's output log. Along with the graph of the pipeline, the test pipeline and the output log obtained are given below.

Graph

Test proctime.png

Pipeline

GST_DEBUG="GST_TRACER:7" GST_TRACERS="proctime" gst-launch-1.0 videotestsrc ! identity sleep-time=8500 ! tee name=t t. ! queue ! identity sleep-time=50000 ! fakesink t. ! queue ! identity sleep-time=30000 ! fakesink

Output

0:00:00.030766061 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008613006;
0:00:00.040429334 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008651178;
0:00:00.050182040 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008657576;
0:00:00.059830835 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008609986;
0:00:00.060992920 12926 0x24c68a0 TRACE GST_TRACER :0:: proctime, element=(string)identity2, time=(string)0:00:00.030099057;
0:00:00.069503703 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008655531;
0:00:00.079200604 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008655985;
0:00:00.081049528 12926 0x24c6a80 TRACE GST_TRACER :0:: proctime, element=(string)identity1, time=(string)0:00:00.050154409;
0:00:00.088951094 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008593832;
0:00:00.099182356 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008612803;
0:00:00.108961732 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008589039;
0:00:00.112001889 12926 0x24c68a0 TRACE GST_TRACER :0:: proctime, element=(string)identity2, time=(string)0:00:00.030088968;
0:00:00.117951345 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008585748;
0:00:00.126983880 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008579732;
0:00:00.132033587 12926 0x24c6a80 TRACE GST_TRACER :0:: proctime, element=(string)identity1, time=(string)0:00:00.050077050;
0:00:00.136588166 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008574922;
0:00:00.142125118 12926 0x24c68a0 TRACE GST_TRACER :0:: proctime, element=(string)identity2, time=(string)0:00:00.030071961;
0:00:00.145779940 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008578415;
0:00:00.155090678 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008583098;
0:00:00.164369513 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008598880;
0:00:00.172292595 12926 0x24c68a0 TRACE GST_TRACER :0:: proctime, element=(string)identity2, time=(string)0:00:00.030082726;
0:00:00.173588683 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008585821;
0:00:00.182214111 12926 0x24c6a80 TRACE GST_TRACER :0:: proctime, element=(string)identity1, time=(string)0:00:00.050107075;
0:00:00.182807057 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008586735;
0:00:00.191972095 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008602782;
0:00:00.201634649 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008626978;
0:00:00.202440357 12926 0x24c68a0 TRACE GST_TRACER :0:: proctime, element=(string)identity2, time=(string)0:00:00.030097713;
0:00:00.211297237 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008590217;
0:00:00.220964963 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008653206;
0:00:00.230604043 12926 0x248bf20 TRACE GST_TRACER :0:: proctime, element=(string)identity0, time=(string)0:00:00.008602488;

As shown in the output log, the reported results correspond only to the identity elements. As mentioned before, the processing time tracer only works for filter and filter-like elements, which is why there are no results for the videotestsrc, the tee, or the fakesink. The queue, on the other hand, is a filter element but is still not measured: it processes buffers in different threads, so the tracer cannot match inputs to outputs. Regarding the identity elements, it is straightforward to verify that the time each one takes to process an incoming buffer matches the delay configured on the pipeline, which exemplifies the operation of the tracer and what can be expected from it.
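To make reading such logs concrete, the following helper is a hypothetical sketch (not part of GstShark) that parses proctime entries and averages the reported processing time per element; only the log format is taken from the output above.

```python
import re

# Matches e.g.: proctime, element=(string)identity0, time=(string)0:00:00.008612803;
PROCTIME_RE = re.compile(
    r"proctime, element=\(string\)(\w+), "
    r"time=\(string\)(\d+):(\d+):(\d+\.\d+);"
)

def average_proctime(log_lines):
    """Return {element: mean processing time in seconds} from proctime log lines."""
    samples = {}
    for line in log_lines:
        m = PROCTIME_RE.search(line)
        if not m:
            continue
        # Convert the H:MM:SS.fraction timestamp to plain seconds.
        secs = int(m.group(2)) * 3600 + int(m.group(3)) * 60 + float(m.group(4))
        samples.setdefault(m.group(1), []).append(secs)
    return {el: sum(ts) / len(ts) for el, ts in samples.items()}
```

Feeding it the log above would show each identity element's average settling near its configured delay.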

Framerate

The frame rate of a pipeline is one of its most useful characteristics, especially for pipelines that include live video sources, since it is normally a design parameter and a requirement for declaring the performance of a pipeline successful. The frame rate is also commonly used to determine whether the output of a pipeline carrying both video and audio is properly synchronized. As its name says, the frame rate is a measurement of frame frequency: the number of frames that go through the source pad of a given element in a defined amount of time, normally expressed in frames per second (FPS). The framerate tracer therefore displays the number of frames that go through every source pad of every element in the pipeline, updating and printing the value on the output log every second.

In a single-threaded pipeline the measurements will frequently match for every element, but this cannot be taken for granted: the frame rates of different elements may differ without negatively affecting the outcome of the pipeline. In a multi-threaded pipeline, for example, the frame rate may be reported differently at different points and the pipeline may still perform well. Typical scenarios where pipelines are multi-threaded include multiple source elements, queues, and elements which internally process asynchronously.
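The measurement described above can be sketched in a few lines: count how many buffers cross a pad during each one-second window. This is an illustrative simplification, not GstShark's actual implementation (which hooks the pipeline internally); the buffer timestamps are assumed to be in seconds.

```python
def fps_per_window(buffer_times, window=1.0):
    """Given monotonically increasing buffer timestamps (seconds),
    return the number of buffers counted in each completed window."""
    counts = []
    window_end = window
    count = 0
    for t in buffer_times:
        # Close every window that ended before this buffer arrived.
        while t >= window_end:
            counts.append(count)
            count = 0
            window_end += window
        count += 1
    return counts
```

For a steady 30 fps stream, every completed one-second window reports 30 buffers, mirroring what the framerate tracer prints each second.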

To exemplify how to use this tracer and the results a user might get from it, the following pipeline was used. It has only three elements, one of which is a sink element; because frame rate is only measured on the source pads of elements, the frame rate of the sink is not displayed. The elements are connected as the following graph shows, followed by the pipeline and the output log obtained from it.

Graph

Test framerate.png

Pipeline

GST_DEBUG="GST_TRACER:7" GST_TRACERS="framerate" gst-launch-1.0 videotestsrc ! videorate max-rate=15 ! fakesink sync=true

Output

0:00:01.229360710 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)21;
0:00:01.229470965 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)20;
0:00:02.229603561 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:02.229704469 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:03.229813070 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:03.229931789 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:04.229094685 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:04.229177391 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:05.229269122 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:05.229311209 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:06.229392913 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:06.229459998 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:07.229506584 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:07.229543062 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:08.229586552 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:08.229645689 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:09.229695974 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:09.229738533 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:10.229795081 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:10.229884141 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:11.229029697 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:11.229109025 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:12.229201141 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:12.229264451 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:13.229362416 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:13.229399786 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:14.229519683 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:14.229598836 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;
0:00:15.229713012 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videotestsrc0_src, fps=(uint)15;
0:00:15.229755348 12840 0x18d5200 TRACE GST_TRACER :0:: framerate, pad=(string)videorate0_src, fps=(uint)15;

In this case the results are easy to verify. The pipeline has only one source element, which means it has a single processing thread, so every element in the pipeline should report the same frame rate. Since one of the elements is a filter that caps the frame rate of the pipeline (set to a value of 15), every frame rate measured on every source pad of the pipeline must equal the value set on that element; the slightly higher values in the first sample correspond to the pipeline starting up.
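That verification can be automated with a small hypothetical checker (not part of GstShark): after skipping the warm-up sample, every pad in the framerate log should report the same fps value.

```python
import re

# Matches e.g.: framerate, pad=(string)videorate0_src, fps=(uint)15;
FPS_RE = re.compile(r"framerate, pad=\(string\)(\w+), fps=\(uint\)(\d+);")

def steady_state_fps(log_lines, skip=1):
    """Return the common fps value across all pads after `skip`
    warm-up samples per pad, or None if the pads disagree."""
    series = {}
    for line in log_lines:
        m = FPS_RE.search(line)
        if m:
            series.setdefault(m.group(1), []).append(int(m.group(2)))
    values = {fps for s in series.values() for fps in s[skip:]}
    return values.pop() if len(values) == 1 else None
```

Run against the log above, the checker confirms that both source pads settle at 15 fps.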

ScheduleTime

The schedule time tracer is very similar to the processing time tracer, especially in its output logs. The processing time tracer measures the time an element takes to completely process a given buffer, which is why the measurement is taken on the source pad of the element: when the buffer reaches that point, the element has finished its processing and the buffer is ready to be pushed to the next element. The schedule time tracer, on the other hand, measures the time elapsed between the arrival of one buffer at the sink pad of an element and the arrival of the immediately following one, without taking into account the time the element needs to process the buffer.

Like the processing time tracer and the inter latency tracer, the schedule time tracer was developed to give the user precise data about the time the pipeline needs to completely process a given buffer. Unlike those tracers, however, it is not focused on raw performance; instead it is oriented toward helping the user detect when and where information is being lost, especially in pipelines with live audio and video sources whose output shows synchronization issues or frozen images. The schedule time tracer is particularly useful for detecting bottlenecks in the pipeline, buffer drops, and inter-clock differences, among other issues.
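The measurement itself reduces to a simple difference of consecutive arrival timestamps on the same sink pad; the sketch below is illustrative and assumes the arrival times are already captured in seconds.

```python
def schedule_times(arrival_times):
    """Return the inter-arrival deltas (schedule times) for a single
    sink pad, given its buffer arrival timestamps in seconds."""
    return [b - a for a, b in zip(arrival_times, arrival_times[1:])]
```

For a stable 10 fps stream the deltas hover around 0.1 s, which is exactly the pattern visible in the log below; a buffer drop would show up as an outlier near 0.2 s.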

To demonstrate the operation of the schedule time tracer, the following pipeline was used for testing. It has six elements, connected as the following graph shows. The tracer has no limitation on the category of the element being measured; the only requirement is that the element have a sink pad, which is why source elements are not analyzed by this tracer. Below the graph of the pipeline are the test pipeline itself and the output log obtained.

Graph

Test scheduletime.png

Pipeline

GST_DEBUG="GST_TRACER:7" GST_TRACERS="scheduletime" gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! 'video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, framerate=(fraction)10/1' ! videoconvert ! queue ! avenc_h263p ! fakesink sync=true

Output

0:00:01.100036113 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)capsfilter0_sink, time=(string)0:00:00.099975209;
0:00:01.100077742 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)videoconvert0_sink, time=(string)0:00:00.100006848;
0:00:01.110098321 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)queue0_sink, time=(string)0:00:00.095063097;
0:00:01.110171350 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)avenc_h263p0_sink, time=(string)0:00:00.095097492;
0:00:01.144021899 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)fakesink0_sink, time=(string)0:00:00.106929909;
0:00:01.196037609 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)capsfilter0_sink, time=(string)0:00:00.096002535;
0:00:01.196077429 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)videoconvert0_sink, time=(string)0:00:00.096000107;
0:00:01.205692317 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)queue0_sink, time=(string)0:00:00.095595742;
0:00:01.205761485 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)avenc_h263p0_sink, time=(string)0:00:00.095590914;
0:00:01.230904816 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)fakesink0_sink, time=(string)0:00:00.086886102;
0:00:01.296035071 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)capsfilter0_sink, time=(string)0:00:00.099998787;
0:00:01.296076145 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)videoconvert0_sink, time=(string)0:00:00.099998340;
0:00:01.305749996 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)queue0_sink, time=(string)0:00:00.100057851;
0:00:01.305831402 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)avenc_h263p0_sink, time=(string)0:00:00.100059523;
0:00:01.338188493 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)fakesink0_sink, time=(string)0:00:00.107283618;
0:00:01.396034758 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)capsfilter0_sink, time=(string)0:00:00.099999355;
0:00:01.396074586 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)videoconvert0_sink, time=(string)0:00:00.099998790;
0:00:01.406017644 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)queue0_sink, time=(string)0:00:00.100268044;
0:00:01.406085333 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)avenc_h263p0_sink, time=(string)0:00:00.100264111;
0:00:01.422747303 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)fakesink0_sink, time=(string)0:00:00.084559137;
0:00:01.496031029 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)capsfilter0_sink, time=(string)0:00:00.099996185;
0:00:01.496071205 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)videoconvert0_sink, time=(string)0:00:00.099996202;
0:00:01.505720264 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)queue0_sink, time=(string)0:00:00.099703471;
0:00:01.505797802 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)avenc_h263p0_sink, time=(string)0:00:00.099702877;
0:00:01.518303910 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)fakesink0_sink, time=(string)0:00:00.095557221;
0:00:01.596034015 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)capsfilter0_sink, time=(string)0:00:00.100002760;
0:00:01.596073322 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)videoconvert0_sink, time=(string)0:00:00.100002398;
0:00:01.606089673 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)queue0_sink, time=(string)0:00:00.100368610;
0:00:01.606159476 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)avenc_h263p0_sink, time=(string)0:00:00.100371629;
0:00:01.622523839 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)fakesink0_sink, time=(string)0:00:00.104218242;
0:00:01.696032908 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)capsfilter0_sink, time=(string)0:00:00.099998663;
0:00:01.696074070 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)videoconvert0_sink, time=(string)0:00:00.100000780;
0:00:01.706234597 13338 0x16532d0 TRACE GST_TRACER :0:: scheduletime, pad=(string)queue0_sink, time=(string)0:00:00.100140205;
0:00:01.706300516 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)avenc_h263p0_sink, time=(string)0:00:00.100140252;
0:00:01.711455532 13338 0x1653400 TRACE GST_TRACER :0:: scheduletime, pad=(string)fakesink0_sink, time=(string)0:00:00.088935196;

The measurements made by the schedule time tracer correspond to the difference between the arrival of one data buffer at the sink pad of an element and the arrival of the following buffer. When the pipeline behaves as expected, as in the output log above, these values tend to remain constant for a given element once the pipeline is stable. The value is related to the frame rate and the processing time of the element, and by observing it, as in the previous example, the user can verify that there are no buffer drops or other issues causing a larger-than-expected delay between two consecutive buffers.

CPUUsage

The CPU usage tracer was designed to measure the load on the CPU while a given pipeline is running, letting the user know immediately whether the host system is capable of running the pipeline without overloading the CPU cores. It is important to mention that the measurements displayed every second by the tracer correspond to the total load of the CPU, not to the load of the pipeline alone. Even so, the measurements printed on the output log give a good idea of how the CPU load behaves while the pipeline runs, making it possible to detect an unexpected and undesired increase in CPU load that could cause a system failure or incorrect operation of the pipeline, regardless of whether the pipeline is correctly designed.

The CPU usage tracer measures every core of the system: on a multiprocessor system it reports the load of each CPU core on the output log, giving an idea of the effect the pipeline has on the total CPU consumption of the system.

Currently this tracer is only available for GNU/Linux based systems, since the method used to determine the load of each core is reading the /proc/stat file, which exists only on GNU/Linux. The CPU usage tracer therefore cannot be used on other systems such as a Macintosh or Microsoft Windows box. GstShark already includes a check of the host operating system and will disable the CPU usage tracer when needed, sparing the user unnecessary errors.
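The /proc/stat technique can be sketched as follows: take two snapshots of a cpuN line and compute the busy fraction of the elapsed interval. The field layout follows proc(5) (user, nice, system, idle, iowait, irq, softirq, ...); this is an illustrative sketch, not the tracer's actual code.

```python
def cpu_load(sample_a, sample_b):
    """Each sample is the list of jiffy counters from one 'cpuN' line of
    /proc/stat, taken at two different moments. Returns the load
    percentage over the interval between the two samples."""
    idle_a = sample_a[3] + sample_a[4]   # idle + iowait count as not busy
    idle_b = sample_b[3] + sample_b[4]
    total = sum(sample_b) - sum(sample_a)
    busy = total - (idle_b - idle_a)
    return 100.0 * busy / total if total else 0.0
```

Sampling once per second and applying this delta per core reproduces the kind of percentages the tracer prints below.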

To provide an example of the output log the CPU usage tracer generates, the following pipeline was tested. It contains nine elements connected as the following graph shows. Note that the CPU usage tracer has no limitation on element category, which means, as this pipeline exemplifies, that the tracer reports the CPU consumption live while the pipeline runs, regardless of the elements used; this example includes elements of all categories to demonstrate the point. Besides the pipeline and its graph, the output log obtained with the CPU usage tracer is shown below; in this case the test ran on a multiprocessor system with a total of 8 cores available.

Graph

Test cpuusage.png

Pipeline

 GST_DEBUG="GST_TRACER:7" GST_TRACERS="cpuusage" gst-launch-1.0 videotestsrc ! 'video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! videorate max-rate=30 ! videoconvert ! queue ! avenc_h263p ! queue ! avimux ! fakesink sync=true

Output

0:00:00.021681078 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)0, load=(double)5,965772;
0:00:00.021712923 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)1, load=(double)3,932041;
0:00:00.021720127 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)2, load=(double)6,802571;
0:00:00.021725434 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)3, load=(double)4,453254;
0:00:00.021730069 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)4, load=(double)5,946335;
0:00:00.021734686 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)5, load=(double)2,949401;
0:00:00.021739030 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)6, load=(double)6,299569;
0:00:00.021743425 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)7, load=(double)3,650192;
0:00:01.022001083 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)0, load=(double)5,050505;
0:00:01.022046715 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)1, load=(double)10,891088;
0:00:01.022060643 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)2, load=(double)8,000000;
0:00:01.022070346 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)3, load=(double)19,387754;
0:00:01.022079678 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)4, load=(double)23,762377;
0:00:01.022088843 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)5, load=(double)2,020202;
0:00:01.022097063 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)6, load=(double)9,900990;
0:00:01.022105207 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)7, load=(double)4,040404;
0:00:02.022299915 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)0, load=(double)4,901961;
0:00:02.022336454 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)1, load=(double)3,030303;
0:00:02.022347269 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)2, load=(double)7,070707;
0:00:02.022356243 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)3, load=(double)10,101010;
0:00:02.022366425 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)4, load=(double)7,843138;
0:00:02.022377344 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)5, load=(double)14,141414;
0:00:02.022390904 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)6, load=(double)23,000000;
0:00:02.022405063 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)7, load=(double)0,000000;
0:00:03.022832110 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)0, load=(double)4,950495;
0:00:03.022934822 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)1, load=(double)3,030303;
0:00:03.022973647 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)2, load=(double)5,940594;
0:00:03.023005315 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)3, load=(double)10,101010;
0:00:03.023039830 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)4, load=(double)7,920792;
0:00:03.023075592 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)5, load=(double)15,000001;
0:00:03.023110909 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)6, load=(double)10,784314;
0:00:03.023144687 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)7, load=(double)2,000000;
0:00:04.023416231 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)0, load=(double)6,862745;
0:00:04.023451754 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)1, load=(double)3,960396;
0:00:04.023461598 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)2, load=(double)20,202021;
0:00:04.023468868 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)3, load=(double)18,181818;
0:00:04.023475339 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)4, load=(double)8,823529;
0:00:04.023481643 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)5, load=(double)2,020202;
0:00:04.023487167 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)6, load=(double)5,102041;
0:00:04.023492490 13940 0xce2c50 TRACE GST_TRACER :0:: cpuusage, number=(uint)7, load=(double)1,010101;

The output log of the CPU usage tracer presents the ID of each core being measured and its load as a percentage, from a minimum of 0% to a maximum of 100%; the measurements are displayed with several decimal places to increase accuracy. From the results above it is not possible to isolate the load consumed by the pipeline itself, but it is possible to conclude that the pipeline does not represent a heavy task for the system and that the system can run the pipeline, along with other processes, without affecting the performance of either the system or the pipeline.
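For post-processing, a hypothetical parser (not part of GstShark) can collect the per-core loads from such a log. Note that this particular log was produced under a locale that prints a comma as the decimal separator, so the comma is normalized before conversion.

```python
import re

# Matches e.g.: cpuusage, number=(uint)0, load=(double)5,965772;
CPU_RE = re.compile(r"cpuusage, number=\(uint\)(\d+), load=\(double\)([\d,]+);")

def core_loads(log_lines):
    """Return {core_id: [load_percent, ...]} from cpuusage log lines."""
    loads = {}
    for line in log_lines:
        m = CPU_RE.search(line)
        if m:
            # Normalize the locale-dependent decimal comma to a dot.
            loads.setdefault(int(m.group(1)), []).append(
                float(m.group(2).replace(",", ".")))
    return loads
```

From the per-core series it is then trivial to compute averages or peaks per core over the run.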

Graphic

The graphic tracer is an add-on to GstShark: it does not generate an output log with performance information, nor does it directly help the user debug unexpected pipeline behavior. It is nevertheless very useful, since it shows the pipeline graphically so the user can easily inspect it visually. The pipelines used in the previous tracer examples were small and simple, and therefore easy to inspect, but that is rarely the case for applications developed for the actual market: a complex pipeline may interconnect many elements, with several source elements, tees, muxers, encoders, decoders, and more. In those cases it is very difficult to inspect and review every section of a pipeline manually, and this is where the graphic tracer becomes handy. The tracer provides the graphical representation of the pipeline without requiring any additional environment variable, shows the graph as soon as the pipeline starts to run, and also stores the graph together with the output log generated by the other tracers, so it can be opened with the GstShark Eclipse plugin at any time after the pipeline has stopped.

All the pipeline graphs shown for the other tracers above were produced with the graphic tracer, as examples of the results that can be obtained with it. However, as mentioned previously, those pipelines are straightforward to inspect; below is a more complex pipeline with the corresponding graph obtained with the help of the GstShark graphic tracer.

It is important to mention that the generation of the file in which the graph is stored cannot be disabled by defining the environment variable "GST_SHARK_CTF_DISABLE", unlike the output log of the tracers above. If the graphic tracer is selected when a pipeline is run, its graph will be generated and stored in a new folder named "graphic". The location of this folder depends on whether the output log of the other tracers is enabled and whether another tracer is running along with the graphic tracer: if the graphic tracer runs alone, the graph is stored in a new "graphic" folder in the current path; the same happens if other tracers run along with the graphic tracer but the environment variable "GST_SHARK_CTF_DISABLE" is set. If the graphic tracer runs along with other tracers and the output log is enabled, the graph is stored in a "graphic" folder inside the folder created to store the output log (by default "gstshark_YY-MM-DD_hh:mm:ss").
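The folder rules above can be summarized in a small decision helper; the function itself is hypothetical, only the folder names and the default timestamp pattern come from the text.

```python
import os
from datetime import datetime

def graphic_folder(other_tracers_active, ctf_disabled, now=None):
    """Return the directory where the 'graphic' tracer stores the diagram,
    following the rules described above."""
    if not other_tracers_active or ctf_disabled:
        return "graphic"                        # ./graphic in the current path
    now = now or datetime.now()
    log_dir = now.strftime("gstshark_%y-%m-%d_%H:%M:%S")
    return os.path.join(log_dir, "graphic")     # nested under the CTF log dir
```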

Pipeline

GST_DEBUG="GST_TRACER:7" GST_TRACERS="graphic" gst-launch-1.0 videotestsrc is-live=true ! tee name=t ! valve drop=TRUE ! queue max-size-buffers=0 ! avenc_ljpeg ! multifilesink next-file=0 enable-last-sample=FALSE async=FALSE sync=FALSE location=./example_snapshot_%d.jpg t. ! queue max-size-buffers=0 ! 'video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, framerate=(fraction)10/1' ! videoconvert ! queue max-size-buffers=0 ! avenc_h263p ! identity ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=9000000000 ! mux. audiotestsrc ! volume ! avenc_mp2 bitrate=128 ! identity ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=9000000000 ! matroskamux name=mux ! multifilesink next-file=1 location=example_50mb_%s.mkv enable-last-sample=FALSE sync=TRUE async=FALSE

Graph

Test graphic.png

Eclipse Plugin

As mentioned in previous sections, the main idea behind developing an Eclipse plugin was to facilitate the task of analyzing the data obtained through the GstShark tracers explained above. The Eclipse plugin processes the data and provides a more understandable and intuitive representation of it, easier to interpret than the console output log or the output file written in CTF format.

An important prerequisite for using the Eclipse plugin is having the output log saved in CTF format. This is done by the tracers themselves by default, whichever tracer is run with the pipeline, as explained and detailed in the previous section together with the environment variables that let you choose the path of the output folder and whether or not to generate the output log files in CTF format (GStreamer Tracers).

The GstShark Eclipse plugin simplifies reviewing the results by means of an interface friendlier than the console output log, including the possibility of generating plots of the measurements, as explained later on this page. The best way to explain how the plugin works and how to get the most out of it is through an example, since a demonstration of its capabilities is clearer than a description alone (the assumption here is that the GstShark Eclipse plugin is already installed and running according to the section Installation from RidgeRun's Eclipse Repository). The pipeline used for this purpose is the following:

GST_DEBUG="GST_TRACER:7" GST_TRACERS="graphic;cpuusage;framerate" gst-launch-1.0 videotestsrc num-buffers=500 ! 'video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! videorate max-rate=30 ! videoconvert ! queue ! avenc_h263p ! queue ! avimux ! fakesink sync=true

Note: The GstShark Eclipse plugin currently has the limitation that it can only plot the results of the CPU usage and frame rate tracers; the team is working on adding support for the remaining tracers, but so far plotting is limited to these two. However, the GstShark package includes Octave scripts for generating visual representations of the results obtained by any tracer; the instructions and details for using these scripts are in GstShark Examples.

After running the previous pipeline and saving the output files in a known path, the next step is opening the Eclipse application. If the installation of the GstShark plugin finished as expected, the following image shows the interface that should appear when Eclipse opens.

GstShark-Eclipse-Tutorial1.png

The next step is opening the output file and loading the results obtained when the pipeline was run. To do this, the user navigates through the menu bar to the option "GstShark->Open Trace", as shown in the following image.

GstShark-Eclipse-Tutorial2.png

A new window will appear, giving the user the option of browsing the file system of the host PC to find the path where the output files are stored. When that folder is accessed, the user should see what is presented in the following image and should select the "datastream" file so the plugin can successfully load the gathered results.

GstShark-Eclipse-Tutorial3.png

Once the "datastream" file is successfully loaded, the user should see something similar to the following image, in which the results of every tracer run with the pipeline are shown by category. For the current example, the tracers present in the image are limited to "cpuusage" and "framerate", since those were the tracers selected when running the demonstration pipeline above.

GstShark-Eclipse-Tutorial4.png

One of the features of the GstShark Eclipse plugin is the capability of showing the graphical diagram of the executed pipeline. As explained in a previous section, adding the "graphic" tracer to the execution of the pipeline exports its diagram, in DOT format, to an additional folder inside the selected path where the output files are saved. Through the menu "GstShark->Show Pipeline Diagram" in the Eclipse plugin window, it is possible to check the output of this tracer without any additional effort to convert the diagram to a regular image file type; the following image shows the menu used for this purpose.

GstShark-Eclipse-Tutorial5.png

After accessing the previous menu, a new window will appear giving the user the option of browsing the file system of the host PC to find the path where the DOT file is stored; the following image shows what the user sees when that folder is accessed.

GstShark-Eclipse-Tutorial6.png

When the file "pipeline.dot" (the default name for the pipeline diagram) is opened through the process described above, the GstShark Eclipse plugin converts the file to a regular image format (e.g. PNG or JPEG) and shows the resulting diagram in a new window. It is important to mention that the GstShark plugin has a known issue with this feature, specifically related to the size of the window: if the pipeline diagram is too big, the window cuts it off at the edges, showing only what fits. Enlarging the window lets the user see more of the diagram, but there is no way to shrink the diagram (zoom in and zoom out are not supported yet), so a large pipeline diagram will always appear cut, as the following image shows.

GstShark-Eclipse-Tutorial7.png
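Since the diagram window cannot zoom, one workaround is to convert the exported DOT file manually with Graphviz and open the resulting PNG in any image viewer that can scale it. This is a sketch: the input path is an assumption about where the "graphic" tracer stored its output, while "pipeline.dot" is the default name stated above.

```shell
#!/bin/sh
# Sketch of a manual alternative to the plugin's built-in conversion:
# turn the exported pipeline.dot into a PNG with Graphviz so the full
# diagram can be viewed in any image viewer. The input path is an
# assumption.
DOT_FILE="/tmp/gstshark_output/graphic/pipeline.dot"
OUT_PNG="pipeline.png"
CMD="dot -Tpng $DOT_FILE -o $OUT_PNG"
echo "$CMD"
if command -v dot >/dev/null 2>&1 && [ -f "$DOT_FILE" ]; then
    $CMD   # requires the Graphviz package to be installed
fi
```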

Another feature available through the GstShark Eclipse plugin is the ability to search and filter the results by the tracer that generated them. This is useful when the user is interested in the behavior of a specific parameter but, due to time or resource constraints, the pipeline was run with several tracers at once. This feature is accessed by typing the search term at the top of the "Event type" column, as shown in the following image.

GstShark-Eclipse-Tutorial13.png

As a demonstration of the search capability of the GstShark Eclipse plugin, the following image shows the output obtained by searching for the value "cpuusage" following the instructions above.

GstShark-Eclipse-Tutorial8.png

Note: To clear the search results, simply erase the search term; this undoes the search.

Filtering the output results is also a feature available in the GstShark Eclipse plugin, through a process very similar to the one used to search for a specific "Event type". However, to access the filtered results the user first needs to change the mode of operation by toggling the icon marked in the following image. The icon has no label but is easy to distinguish: when it looks like a flashlight with the light on, the plugin is in search mode; when it shows three arrows pointing to the right, the plugin is in filter mode.

GstShark-Eclipse-Tutorial14.png

The filter term is entered the same way as the search term: simply type the value to filter by into the "Event type" field, as marked in the following image.

GstShark-Eclipse-Tutorial15.png

To demonstrate the filtering feature of the GstShark Eclipse plugin, the following image shows the output obtained by filtering on the value "cpuusage" following the instructions above.

GstShark-Eclipse-Tutorial16.png

Note: To clear the filter results, right-click and select the option "Clear filters"; otherwise the data will remain filtered even after the filter term has been removed.

The main feature, and the reason the GstShark Eclipse plugin was developed, is to provide an alternative way to interpret the results obtained while running the pipeline: plotting the gathered information, grouped by the tracer the data belongs to, against the time elapsed during the execution of the pipeline. The information used to make these plots is filtered automatically by the GstShark Eclipse plugin, so the only step left to the user is to open a new window in which the plot is shown. To open the plots corresponding to the tracers run with the pipeline, open a new view in the GstShark Eclipse plugin through the menu "Window->Show view->Other...", as shown in the following image.

GstShark-Eclipse-Tutorial9.png

After accessing the menu "Window->Show view->Other...", a small list should appear showing the types of views available in the GstShark Eclipse plugin. For plotting the results obtained with the GstShark tracers, the views of interest are categorized under the name "Other", as shown in the following image, where the two available kinds of plots appear at the top of the list.

GstShark-Eclipse-Tutorial10.png

The pipeline mentioned previously in this section was run to exemplify the features of the GstShark Eclipse plugin, and the plots of the results obtained are no exception. Following the steps described above and selecting either "Window->Show view->Other...->GstSharkCpuUsage" or "Window->Show view->Other...->GstSharkFrameRate" (for the CPU usage and frame rate plots, respectively), the user can access the plots of the results obtained with the tracers supported by the GstShark Eclipse plugin. Below are the plots of the CPU usage and frame rate tracer results for the demonstration pipeline detailed above.

  • CPU Usage plot:
GstShark-Eclipse-Tutorial11.png
  • Frame rate plot:
GstShark-Eclipse-Tutorial12.png

Note: In some cases the plots of the results obtained may appear to be empty. This is usually just an issue with the time scale of the X axis; zooming out will eventually make the plot appear.

Releases


How to reach RidgeRun

Please reach us at support@ridgerun.com for technical questions.