GStreamer Debugging

From RidgeRun Developer Wiki

GStreamer debugging approaches

Some of these approaches are only useful when you are running a pipeline and audio and/or video stops at an unexpected place in the data stream.

Use standard GStreamer debug output with filter

To see which debug categories can be enabled, add --gst-debug-help to your GStreamer application arguments, such as:

gst-launch --gst-debug-help videotestsrc num-buffers=3 ! fakesink

Then you can pick the debug category you are interested in, such as reference counting, and enable all of its debug output (level 5):

gst-launch videotestsrc num-buffers=3 ! fakesink --gst-debug=GST_REFCOUNTING:5 --gst-debug-no-color=1  2>&1 | grep "\->0" > log.txt

This gets useful data, but typically slows pipeline performance to the point of being unusable.

Another way to generate the same output is to set the GST_DEBUG shell variable:

GST_DEBUG=GST_REFCOUNTING:5 gst-launch videotestsrc num-buffers=3 ! fakesink
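
Several categories can be combined at different levels, and the output can be redirected away from the console. A sketch (the exact category names and the availability of GST_DEBUG_FILE depend on your GStreamer version):

# a bare number sets the default level for all other categories (2 = WARNING)
GST_DEBUG=2,GST_REFCOUNTING:5,GST_STATES:4 gst-launch videotestsrc num-buffers=3 ! fakesink

# send the debug log to a file instead of stderr
GST_DEBUG=2,GST_STATES:4 GST_DEBUG_FILE=/tmp/gst.log gst-launch videotestsrc num-buffers=3 ! fakesink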

Use gst-tracelib library to log key pipeline behavior

The gst-tracelib library hooks into key GStreamer functions and logs their behavior. When the application exits, it displays some general statistics. Further analysis can be done based on the data written to the log file. A detailed usage description is in the source code README file.

gst-tracelib logs

  • dataflow, messages, queries and events.
  • caps set|get
  • pipeline topology changes
  • resource usage

Example usage:

export GSTTL_HIDE="caps;chk;topo"
export GSTTL_LOG_SIZE=1048576
AV_FILE=/SD/content/MoMen-dm365.mov

LD_PRELOAD=/usr/lib/gst-tracelib/libgsttracelib.so gst-launch filesrc location=$AV_FILE ! qtdemux name=demux ! queue ! dmaidec_h264 numOutputBufs=12 ! \
priority nice=-10 ! queue ! priority nice=-10 ! dmaiperf ! TIDmaiVideoSink accelFrameCopy=true \
videoOutput=DVI videoStd=720P_60 demux.audio_00 ! queue ! priority nice=-5 ! dmaidec_aac ! alsasink

By default, the log file is named /tmp/gsttl.log.

Watch system interrupts

Run the telnet daemon on the target - likely:

/etc/init.d/inetd start

# or

inetd

Telnet into the target hardware and watch the interrupt count:

while sleep 1 ; do cat /proc/interrupts ; done

If the pipeline is supposed to be running, the changes in the interrupt counts may provide clues as to what is going on.
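
If the watch utility is available on your target (an assumption; some BusyBox builds omit it or its -d option), it can highlight exactly which interrupt counters are changing:

# refresh every second and highlight the counters that changed since the last refresh
watch -n 1 -d cat /proc/interrupts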

Capturing a core dump

If your GStreamer application is crashing with a seg fault or similar condition, enable saving a core dump before running the application.

ulimit -c 10000
mkdir -m 777 /root/dumps
echo "/root/dumps/%e.core" > /proc/sys/kernel/core_pattern
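
To remove the size limit entirely, ulimit -c unlimited can be used instead of a fixed size. You can confirm the pattern took effect with:

cat /proc/sys/kernel/core_pattern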

Once you have a /root/dumps/*.core file, copy it to your host machine and inspect it with:

ddd  -debugger arm-linux-gnueabi-gdb $GSTREAMER_APPLICATION

Then in gdb,

target core <core file>
bt

and see what function caused the core dump.
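
The same inspection can also be done with the cross gdb alone, without ddd. A sketch, assuming the core file has been copied into the current directory (its name follows the %e pattern set above):

arm-linux-gnueabi-gdb $GSTREAMER_APPLICATION <core file>

# then inside gdb:
bt full                # backtrace of the crashing thread, including local variables
thread apply all bt    # backtraces for every thread in the process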

Use gdb to attach to GStreamer application and examine all the threads

The SDK Debugging Guide provides detailed instructions; other gdbserver documentation is helpful as well.

Build your application with symbols (-g) and no optimization (-O0). Use GStreamer libraries that are built with symbols.

Attach to your running GStreamer application using gdbserver

ps
PID=4512 # set the right value based on your application's PID
gdbserver :2345 --attach $PID
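
If the problem is reproducible from startup, gdbserver can also launch the application itself instead of attaching to a running process. A sketch (replace the arguments with your own):

gdbserver :2345 $GSTREAMER_APP <application arguments>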

Then start the cross-compiled version of the GNU debugger, like:

ddd -debugger arm-linux-gnueabi-gdb $GSTREAMER_APP

and connect gdb to the gdbserver running on the target:

set solib-absolute-prefix <path to devdir>/fs/fs
file <path to file>
target remote <ip address>:2345

List the threads and do a backtrace on each one:

info threads
bt
thread 2
bt
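
A single command can also dump backtraces for every thread at once, which is useful when you do not yet know which thread is stuck:

thread apply all bt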

Exit locked GStreamer application and see what works stand alone

If you have a GStreamer application that locks up and doesn't run correctly even after you exit the program (possibly with ctrl-C) and restart it, then it is possible some kernel-provided resource is the culprit. For example, if you are using a defective ALSA audio output driver, you might find the GStreamer pipeline locks up in the middle. If you exit the GStreamer application and try a simple audio application, like aplay, you might be able to identify the source of your problem.
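
In the ALSA case, a quick stand-alone check might look like this (the WAV file path is just an example; use any file present on your target):

# list the ALSA playback devices the driver exposes
aplay -l

# try playing a known-good file outside of GStreamer
aplay /usr/share/sounds/alsa/Front_Center.wav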

Examine a history of what transpired just prior to lockup

If you have a GStreamer application that locks up, you could change the pipeline to include a (at this point mythical) recent-activity history logger. Such a logger element could be put anywhere in the pipeline. It would use circular buffers to keep track of all potentially interesting recent history, such as pad activity, bus activity, and any other relevant information. The circular buffer entries would all be timestamped. When some event occurs (a file exists, a message/signal is received, etc.), the element would dump the history, then continue capturing new data.

The idea is that after the pipeline locks up, you could cause the history logger to dump its data, and then get an idea of what is supposed to be happening but isn't occurring.

Look for memory leaks

If you have a GStreamer application that runs for a while, or maybe a long time, and then fails, the problem could be due to a memory leak. Use Valgrind to verify all allocated memory is freed properly.

valgrind --tool=memcheck --trace-children=yes $GSTREAMER_APP
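
A more complete invocation (a sketch; adjust paths for your setup) also asks GLib to bypass its slice allocator so valgrind can track allocations accurately, and writes the report to a file:

G_SLICE=always-malloc G_DEBUG=gc-friendly \
valgrind --tool=memcheck --trace-children=yes --leak-check=full \
         --num-callers=20 --log-file=/tmp/valgrind.log $GSTREAMER_APP

# optionally add --suppressions=<path to gst.supp> to hide known GLib/GStreamer false positives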

Use remote syslog

If your application logs warnings and errors, use remote syslog (see the How to Configure Remote Syslog Logging guide) to make the information more easily available.

If your application generates output, you can redirect it to syslog as follows:

MY_APP=gst-render

$MY_APP | logger -t "$MY_APP:"
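
Note that GStreamer warnings and debug output go to stderr, so redirect stderr into the pipe as well if you want it captured:

$MY_APP 2>&1 | logger -t "$MY_APP:"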