GstRtspSink - Basic usage
This page describes the basics of GstRtspSink required to construct streaming pipelines. Each branch in a pipeline describes a different media stream to serve; branches can be arranged to combine multiple streams under the same mapping, or each can be served independently.
Requesting pads
The GstRtspSink pads are requested using regular GStreamer notation. For example, using GStreamer textual pipeline notation, a pipeline that attaches three different H.264 encoded video branches to the sink looks like the following:
gst-launch-1.0 rtspsink name=sink \
< branch 1 > ! sink. \
< branch 2 > ! sink. \
< branch 3 > ! sink.
Programmatically, the pads would be requested as follows:
rtspsinkpad1 = gst_element_get_request_pad (rtspsink, "sink_%d");
gst_pad_link (branch1pad, rtspsinkpad1);

rtspsinkpad2 = gst_element_get_request_pad (rtspsink, "sink_%d");
gst_pad_link (branch2pad, rtspsinkpad2);

rtspsinkpad3 = gst_element_get_request_pad (rtspsink, "sink_%d");
gst_pad_link (branch3pad, rtspsinkpad3);
Graphically, the resulting pipeline is shown in Figure 1.
Since we didn't specify a mapping, which is described below, all three branches would be mapped to /test.
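For reference, a complete program that builds a single H.264 branch and attaches it to rtspsink using the pad-request calls above might look as follows. This is a minimal sketch: the element choices (videotestsrc, x264enc), the variable names, and the error handling are illustrative and not mandated by GstRtspSink; additional branches are attached by repeating the request-and-link step.

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *src, *enc, *rtspsink;
  GstPad *branchpad, *rtspsinkpad;
  GMainLoop *loop;

  gst_init (&argc, &argv);

  pipeline = gst_pipeline_new ("rtsp-pipeline");
  src = gst_element_factory_make ("videotestsrc", "src");
  enc = gst_element_factory_make ("x264enc", "enc");
  rtspsink = gst_element_factory_make ("rtspsink", "sink");

  if (!pipeline || !src || !enc || !rtspsink) {
    g_printerr ("Not all elements could be created\n");
    return -1;
  }

  gst_bin_add_many (GST_BIN (pipeline), src, enc, rtspsink, NULL);
  gst_element_link (src, enc);

  /* Request a sink pad from rtspsink and link the encoded branch to it.
   * Each additional branch repeats these two calls. Since no mapping is
   * set on the caps, this stream is served at the default /test. */
  branchpad = gst_element_get_static_pad (enc, "src");
  rtspsinkpad = gst_element_get_request_pad (rtspsink, "sink_%d");
  gst_pad_link (branchpad, rtspsinkpad);
  gst_object_unref (branchpad);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);

  return 0;
}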
Combining and Naming Streams
Each stream needs to be mapped to a different URL so that clients can access each of them independently. This is called the mapping of the stream. The GstRtspSink element will try to read the mapping from the caps it negotiates with the upstream element. This means that assigning a mapping to a stream is as simple as setting it in the caps, using a mapping=<value> field. The only rule is that the mapping must always start with a leading slash "/". If no mapping is explicitly provided, it defaults to /test.
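Programmatically, the mapping can be assigned the same way, by having the branch negotiate caps that carry the mapping field, typically through a capsfilter. The helper below is a minimal sketch; the function name make_mapped_capsfilter is illustrative and not part of any API.

#include <gst/gst.h>

/* Sketch: create a capsfilter whose caps carry a mapping field so that the
 * branch linked through it is served at rtsp://<host>:<service><mapping> */
static GstElement *
make_mapped_capsfilter (const gchar * media_type, const gchar * mapping)
{
  GstElement *capsfilter = gst_element_factory_make ("capsfilter", NULL);
  GstCaps *caps = gst_caps_new_simple (media_type,
      "mapping", G_TYPE_STRING, mapping, NULL);

  g_object_set (capsfilter, "caps", caps, NULL);
  gst_caps_unref (caps);

  return capsfilter;
}

A video branch would then be linked as encoder ! make_mapped_capsfilter ("video/x-h264", "/video") ! rtspsink, where the capsfilter simply forces the mapping into the negotiated caps.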
Each mapping will be treated as an individual stream. This means that combining an audio and a video branch into a single A/V stream is as simple as giving both the same mapping name. The following pipeline will produce three independent streams: a video-only stream mapped to "/video", an audio-only stream mapped to "/audio", and a combined audio+video stream mapped to "/audiovideo".
gst-launch-1.0 rtspsink name=sink \
< branch 1 > ! capsfilter caps="video/x-h264, mapping=/video" ! sink. \
< branch 2 > ! capsfilter caps="audio/mpeg, mapping=/audio" ! sink. \
< branch 3v > ! capsfilter caps="video/x-h264, mapping=/audiovideo" ! sink. \
< branch 3a > ! capsfilter caps="audio/mpeg, mapping=/audiovideo" ! sink.
This will combine branches 3v and 3a into a single stream under the name "/audiovideo", as shown graphically in Figure 2.
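The same combined pipeline can be built from C with gst_parse_launch(), which accepts the textual notation used above. The program below is a minimal sketch; the test sources and encoders (videotestsrc, x264enc, audiotestsrc, audioconvert, avenc_aac) are illustrative stand-ins for branches 3v and 3a.

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GError *error = NULL;
  GMainLoop *loop;

  gst_init (&argc, &argv);

  /* Both branches share the /audiovideo mapping, so rtspsink serves them
   * as a single audio+video stream */
  pipeline = gst_parse_launch (
      "rtspsink name=sink "
      "videotestsrc ! x264enc ! "
      "capsfilter caps=\"video/x-h264, mapping=/audiovideo\" ! sink. "
      "audiotestsrc ! audioconvert ! avenc_aac ! "
      "capsfilter caps=\"audio/mpeg, mapping=/audiovideo\" ! sink. ",
      &error);

  if (error != NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    g_error_free (error);
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);

  return 0;
}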
Supported formats
The following table summarizes the supported encoding formats and their respective mimetypes, to be used in pipelines of the form:
gst-launch-1.0 ... ! mimetype, mapping="/test" ! rtspsink
Format | Mimetype |
---|---|
H264 video | video/x-h264 |
H265 video | video/x-h265 |
VP8 video | video/x-vp8 |
VP9 video | video/x-vp9 |
MPEG4 video | video/mpeg |
MPEG TS | video/mpegts |
MJPEG video | image/jpeg, video/x-jpeg |
AAC audio | audio/mpeg |
AC3 audio | audio/x-ac3, audio/ac3 |
PCMU audio | audio/x-mulaw |
PCMA audio | audio/x-alaw |
OPUS audio | audio/x-opus |
KLV metadata | meta/x-klv |
Service configuration
RTSP is a high-level protocol that negotiates the streaming internals with every client that intends to connect to a mapping. In order to request a stream, the client must connect to the server on the specific port where the RTSP server is up and waiting for connections. This port, on which the server listens and to which the clients must talk, is called the service, and it is typically set to TCP port 554. GstRtspSink allows configuring the service by means of a GStreamer property, as follows:
gst-launch-1.0 audiotestsrc ! avenc_aac ! capsfilter caps="audio/mpeg, mapping=/mystream" ! rtspsink service=3000
If no service is specified, it defaults to 554. Since all ports below 1024 require root privileges to use, port 554 could cause the pipeline to fail when run as a regular user. Specify a port number of 1024 or above, e.g. 3000, to run the pipeline as a normal user. Other methods exist to avoid the need for root privileges.
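The service can also be configured programmatically before the pipeline is set to PLAYING. The sketch below uses gst_util_set_object_arg(), which converts the string to whatever type the property actually has, so no assumption is made about the exact type of the service property; the function name configure_service is illustrative.

#include <gst/gst.h>

/* Sketch: set the RTSP service (listening port) on an rtspsink instance */
static void
configure_service (GstElement * rtspsink, const gchar * service)
{
  /* Converts the string (e.g. "3000") to the actual type of the
   * "service" property */
  gst_util_set_object_arg (G_OBJECT (rtspsink), "service", service);
}

Calling, for instance, configure_service (rtspsink, "3000") is equivalent to the service=3000 setting in the pipeline above.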