Using UDP Multicast with GStreamer

This document describes how to create a network connection using UDP multicast in order to transmit an audio and/or video stream with GStreamer.

Audio Multicast Streaming

This section shows how to build a GStreamer pipeline that transmits audio over a multicast network.

The pipelines used are the following:

Server (Ubuntu 10.04):

AUDIO_FILE=/opt/media/audio/JB_007.mp3
MULTICAST_IP_ADDR=224.1.1.1
AUDIO_UDP_PORT=3000

gst-launch-0.10 filesrc location="$AUDIO_FILE" ! mad ! audioconvert ! audioresample ! mulawenc ! \
rtppcmupay ! udpsink host=$MULTICAST_IP_ADDR auto-multicast=true port=$AUDIO_UDP_PORT

Client (Ubuntu 10.04):

MULTICAST_IP_ADDR=224.1.1.1
AUDIO_UDP_PORT=3000

gst-launch-0.10 udpsrc multicast-group=$MULTICAST_IP_ADDR auto-multicast=true port=$AUDIO_UDP_PORT \
caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU, \
payload=(int)0, ssrc=(guint)1350777638, clock-base=(guint)2942119800, seqnum-base=(guint)47141" ! \
rtppcmudepay ! mulawdec ! pulsesink

On the server side we first use a filesrc element to set the media audio file to play (this pipeline only handles MP3 audio files); the file content is then passed through the mad decoder to obtain the audio in raw format. Once the audio has been decoded, we pass it through audioconvert and audioresample, which convert it to raw audio with a sample rate of 8 kHz, the sample rate required to encode the audio to mu-law with the mulawenc element.
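
If you want to make the 8 kHz mono requirement explicit rather than leaving it to caps negotiation, you can insert a caps filter between audioresample and mulawenc. This is a sketch, assuming GStreamer 0.10 raw-audio caps and the variables defined above:

# Server pipeline with an explicit 8 kHz, mono caps filter before mulawenc
gst-launch-0.10 filesrc location="$AUDIO_FILE" ! mad ! audioconvert ! audioresample ! \
audio/x-raw-int,rate=8000,channels=1 ! mulawenc ! \
rtppcmupay ! udpsink host=$MULTICAST_IP_ADDR auto-multicast=true port=$AUDIO_UDP_PORT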

Before the audio is sent over the network it must be packed into RTP packets with the correct payload type; this is done by the rtppcmupay element.

Finally the audio packets are sent to the network by the udpsink element. To configure the connection as multicast it is necessary to enable udpsink's auto-multicast property and set a multicast IP address (in the 224.x.x.x to 239.x.x.x range) and a port (from 1024 to 65535).
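
The multicast configuration is independent of the media. As a minimal sketch, the following pipeline streams a generated test tone to the same (assumed) multicast address and port, so it can be tried without any media file:

# Minimal multicast sender: audiotestsrc generates a test tone, no file needed
gst-launch-0.10 audiotestsrc ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! \
udpsink host=224.1.1.1 auto-multicast=true port=3000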

Once the server starts sending the audio packets, the clients can access the stream by joining the multicast IP address to which it is sent. This is done with the udpsrc element configured to work in multicast mode with the IP address and port number set before. From the received data the RTP packets are extracted using the rtppcmudepay element, then the mu-law audio is decoded and sent to the speakers through pulsesink (if your system does not support PulseAudio, you can use alsasink instead).
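
To verify that multicast packets are actually arriving before debugging the decoding part, you can dump the incoming buffers with fakesink. This is a sketch, reusing the variables defined above:

# Hex-dump incoming RTP buffers instead of playing them
gst-launch-0.10 udpsrc multicast-group=$MULTICAST_IP_ADDR auto-multicast=true port=$AUDIO_UDP_PORT ! \
fakesink dump=true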

Video Multicast Streaming

This section shows how to build a group of pipelines that runs a multicast stream of a video file. The characteristics of the video are the following:

  • Container format: MP4
  • Video format: H.264
  • Audio format: AAC

In the following links you will find the video files tested. The pipelines shown are configured to play a 320x240 video resolution; if you want to play another resolution, you must get a new caps string (the note at the end of this section shows how to obtain one):

  • Video test 1: http://www.elecard.com/assets/files/other/clips/Park_iPod.mp4
  • Video test 2: http://www.elecard.com/assets/files/other/clips/Belukha_iPod.mp4

The following is the server pipeline, which generates the stream.

Server (Ubuntu 10.04):

AV_FILE=/opt/media/audio/Park_iPod.mp4
MULTICAST_IP_ADDR=224.1.1.1
AUDIO_UDP_PORT=3000
VIDEO_UDP_PORT=5000

gst-launch-0.10 filesrc location=$AV_FILE ! queue ! qtdemux name=dem dem. ! queue ! \
rtph264pay ! udpsink host=$MULTICAST_IP_ADDR port=$VIDEO_UDP_PORT auto-multicast=true dem. ! queue ! \
rtpmp4apay ! udpsink host=$MULTICAST_IP_ADDR port=$AUDIO_UDP_PORT auto-multicast=true

The file is loaded by the filesrc element and passed to a queue before being unpacked by the qtdemux element. The video data is payloaded as RTP by the rtph264pay element and sent to the network through the udpsink element in multicast mode. The audio in AAC format is sent over the network in the same way, to the same multicast IP group but with a different port number.
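
If you only need the video track, the audio branch can be discarded into a fakesink. This is a sketch derived from the server pipeline above, using the same variables:

# Video-only variant of the server pipeline; audio is demuxed but discarded
gst-launch-0.10 filesrc location=$AV_FILE ! queue ! qtdemux name=dem dem. ! queue ! \
rtph264pay ! udpsink host=$MULTICAST_IP_ADDR port=$VIDEO_UDP_PORT auto-multicast=true \
dem. ! queue ! fakesink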

Since the video and audio data are sent in different multicast streams, it is necessary to run two different pipelines on your client machine (one for each data stream). You can do this by running the following pipelines in different terminal windows or by implementing a C application using threads; in the block below they are simply started in the background with &.

Client pipelines (Ubuntu 10.04):

MULTICAST_IP_ADDR=224.1.1.1
AUDIO_UDP_PORT=3000
VIDEO_UDP_PORT=5000

# Client video pipeline

gst-launch-0.10 udpsrc multicast-group=$MULTICAST_IP_ADDR auto-multicast=true port=$VIDEO_UDP_PORT \
caps = 'application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, \
profile-level-id=(string)429f0d, \
sprop-parameter-sets=(string)\"Z0KADZZWCg/YC0QAAA+gAAMNQ4EAA4QAABJPj8Y4O0KFXA\\=\\=\\,aMpDUg\\=\\=\", \
payload=(int)96, ssrc=(guint)927923371, clock-base=(guint)1212391300, seqnum-base=(guint)12207' ! \
rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink &

# Client audio pipeline

gst-launch-0.10 udpsrc multicast-group=$MULTICAST_IP_ADDR auto-multicast=true port=$AUDIO_UDP_PORT \
caps = 'application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)MP4A-LATM, \
cpresent=(string)0, config=(string)40002320, payload=(int)96, ssrc=(guint)91944086, \
clock-base=(guint)692337112, seqnum-base=(guint)62914' ! rtpmp4adepay ! faad ! queue ! audioconvert ! \
pulsesink &

In both cases the data stream is received by the udpsrc element configured in multicast mode and set to listen on the correct port number. The data is filtered by the corresponding caps and decoded with the H.264 or AAC decoder (ffdec_h264 and faad, respectively). Finally the raw data is sent to the desired output.
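
The caps strings used in the client pipelines are specific to the file being streamed. If you stream a different file, a simple way to obtain the new caps is to run the server pipeline with gst-launch's verbose flag and copy the application/x-rtp caps it prints for each udpsink. This is a sketch, using the variables defined above:

# Print the negotiated RTP caps for each udpsink, then copy them into the clients
gst-launch-0.10 -v filesrc location=$AV_FILE ! queue ! qtdemux name=dem dem. ! queue ! \
rtph264pay ! udpsink host=$MULTICAST_IP_ADDR port=$VIDEO_UDP_PORT auto-multicast=true dem. ! queue ! \
rtpmp4apay ! udpsink host=$MULTICAST_IP_ADDR port=$AUDIO_UDP_PORT auto-multicast=true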