Using UDP Multicast with GStreamer

This document describes how to create a network connection using UDP multicast in order to transmit audio and/or video streams.

Audio Multicast Streaming

This section shows how to build a GStreamer pipeline that transmits audio over a multicast network.

The pipelines used are the following:

Server:

gst-launch-0.10 filesrc location=<file.mp3> ! mad ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! \
udpsink host=<multicast IP address> auto-multicast=true port=<port number>
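
For instance, with the placeholders filled in (the file name, group address, and port below are example values only):

gst-launch-0.10 filesrc location=sample.mp3 ! mad ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! \
udpsink host=224.1.1.1 auto-multicast=true port=5004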

Client:

gst-launch-0.10 udpsrc multicast-group=<multicast IP address> auto-multicast=true port=<port number> \
caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU, payload=(int)0, ssrc=(guint)1350777638, clock-base=(guint)2942119800, seqnum-base=(guint)47141" ! \
rtppcmudepay ! mulawdec ! pulsesink
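
Note that the ssrc, clock-base, and seqnum-base values above change on every run of the server, so you should copy the caps for your own stream rather than reuse these. A common way to get them is to start the server with the -v (verbose) flag and read the application/x-rtp caps that gst-launch prints for the udpsink:

gst-launch-0.10 -v filesrc location=<file.mp3> ! mad ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! \
udpsink host=<multicast IP address> auto-multicast=true port=<port number>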

On the server side we first use a filesrc element to set the audio file to play (this pipeline works only for MP3 files). The file content is then passed through the mad decoder to obtain the audio in raw format. Once the audio has been decoded, we pass it through audioconvert and audioresample, which convert the raw audio to a sample rate of 8 kHz, the rate required to encode the audio to mu-law with the mulawenc element.
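
If you prefer to make the 8 kHz, mono requirement explicit instead of leaving it to caps negotiation, a capsfilter can be placed before the encoder; a minimal sketch, using the 0.10-style raw audio caps:

gst-launch-0.10 filesrc location=<file.mp3> ! mad ! audioconvert ! audioresample ! \
"audio/x-raw-int, rate=8000, channels=1" ! mulawenc ! rtppcmupay ! \
udpsink host=<multicast IP address> auto-multicast=true port=<port number>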

Before the audio is sent over the network, it must be packetized into RTP packets with the correct payload type; this is done with the rtppcmupay element.
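
You can examine the payloader's properties and the RTP caps it produces with the standard inspection tool:

gst-inspect-0.10 rtppcmupay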

Finally, the audio packets are sent to the network using the udpsink element. To configure the connection as multicast, it is necessary to enable the udpsink's multicast support (auto-multicast=true) and set a multicast IP address (in the range 224.0.0.0 to 239.255.255.255) and a port (from 1024 to 65535).
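
To check the multicast link without a media file, you can stream a test tone instead; a minimal sketch using the audiotestsrc element and example address/port values:

gst-launch-0.10 audiotestsrc ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! \
udpsink host=224.1.1.1 auto-multicast=true port=5004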

Once the server starts sending the audio packets, clients can access the stream by joining the multicast IP address to which it is sent. This is done with the udpsrc element configured to work in multicast mode with the IP address and port number set above. The RTP packets are extracted from the received data using the rtppcmudepay element, then the mu-law audio is decoded with mulawdec and sent to the speakers through pulsesink (if your system does not support PulseAudio, you can use alsasink instead).
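
For example, here is the same client with ALSA output; as a sketch it uses a trimmed caps string with only the fields the depayloader typically needs, so if playback fails, substitute the full caps printed by the server's verbose output:

gst-launch-0.10 udpsrc multicast-group=<multicast IP address> auto-multicast=true port=<port number> \
caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU, payload=(int)0" ! \
rtppcmudepay ! mulawdec ! alsasink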

Video Multicast Streaming

This section shows how to build a group of pipelines that run a multicast stream of a video file. The characteristics of the video are the following (a way to verify them is sketched after the list):

  • Container format: MP4
  • Video format: H.264
  • Audio format: AAC
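
If your GStreamer installation provides the discoverer tool from gst-plugins-base, you can confirm the container and codec details of a file before building the pipelines:

gst-discoverer-0.10 Park_iPod.mp4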

The following is the pipeline for the server, which generates the stream:

gst-launch-0.10 filesrc location=Park_iPod.mp4 ! queue ! qtdemux name=dem \
dem. ! queue ! rtph264pay ! udpsink host=224.1.1.1 port=5000 auto-multicast=true \
dem. ! queue ! rtpmp4apay ! udpsink host=224.1.1.1 port=3000 auto-multicast=true
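
As in the audio case, the caps the clients need (including the H.264 sprop-parameter-sets) are specific to each stream; one way to capture them is to run the server verbosely and keep only the RTP caps that the udpsink elements report:

gst-launch-0.10 -v filesrc location=Park_iPod.mp4 ! queue ! qtdemux name=dem \
dem. ! queue ! rtph264pay ! udpsink host=224.1.1.1 port=5000 auto-multicast=true \
dem. ! queue ! rtpmp4apay ! udpsink host=224.1.1.1 port=3000 auto-multicast=true \
| grep "application/x-rtp"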

The file is loaded by the filesrc element and passed through a queue before being demuxed by the qtdemux element. The video data, payloaded with the rtph264pay element, is sent to the network through a udpsink in multicast mode. The audio, in AAC format, is sent to the same multicast IP group but on a different port.

Since the video and audio are sent as separate multicast streams, you must run two different pipelines on the client machine (one for each stream). You can do this by running the following pipelines in different terminal windows, or by implementing a C program that uses threads; a shell sketch that launches both from a single terminal appears after the two client pipelines.

Client video pipeline

gst-launch-0.10 udpsrc multicast-group=224.1.1.1 auto-multicast=true port=5000 \
caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, profile-level-id=(string)429f0d, sprop-parameter-sets=(string)\"Z0KADZZWCg/YC0QAAA+gAAMNQ4EAA4QAABJPj8Y4O0KFXA\\=\\=\\,aMpDUg\\=\\=\", payload=(int)96, ssrc=(guint)927923371, clock-base=(guint)1212391300, seqnum-base=(guint)12207' ! \
rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink


Client audio pipeline

gst-launch-0.10 udpsrc multicast-group=224.1.1.1 auto-multicast=true port=3000 \
caps='application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)MP4A-LATM, cpresent=(string)0, config=(string)40002320, payload=(int)96, ssrc=(guint)91944086, clock-base=(guint)692337112, seqnum-base=(guint)62914' ! \
rtpmp4adepay ! faad ! queue ! audioconvert ! pulsesink
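
As mentioned above, here is a minimal shell sketch that launches both receivers from a single terminal by backgrounding the video client. The trimmed caps strings are assumptions; substitute the full ones captured from your server's verbose output if playback fails:

#!/bin/sh
# Launch the video client in the background, then the audio client in the
# foreground; Ctrl-C stops the audio client, and wait reaps the video one.
gst-launch-0.10 udpsrc multicast-group=224.1.1.1 auto-multicast=true port=5000 \
    caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! \
    rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink &

gst-launch-0.10 udpsrc multicast-group=224.1.1.1 auto-multicast=true port=3000 \
    caps='application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)MP4A-LATM, cpresent=(string)0, config=(string)40002320' ! \
    rtpmp4adepay ! faad ! queue ! audioconvert ! pulsesink

wait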

In both cases the data stream is received by the udpsrc element, configured in multicast mode and set to listen on the correct port number. The data is filtered by the corresponding caps and decoded with the H.264 or AAC decoder (ffdec_h264 and faad, respectively). Finally, the raw data is sent to the desired output.
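
If a client receives nothing, a quick way to confirm that multicast packets are actually arriving is a packet capture on the receiving machine (the interface name eth0 is an assumption; adjust it to your system):

sudo tcpdump -n -i eth0 host 224.1.1.1 and udp port 5000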