GstWebRTC - GstWebRTC Basics
This page describes the basic features of RidgeRun's GstWebRTC GStreamer plug-in.
What is GstWebRTC?
GstWebRTC is a GStreamer plug-in that turns pipelines into WebRTC compliant endpoints. The plug-in is equipped with three elements:
- GstWebRTCSrc
- GstWebRTCSink
- GstWebRTCBin
These elements allow audio and/or video streaming using the WebRTC protocol.
Why GstWebRTC?
Other WebRTC solutions automatically detect the video and audio sources, as well as the decoders/encoders and other elements used to build the pipeline. This is convenient for many applications, but limiting for several other use cases, such as:
- Extending an existing pipeline to support WebRTC streaming
- Using non-standard pipeline configurations
- High-performance pipeline tuning for resource-critical systems
- Dynamic stream handling in a running pipeline
- Fine-grained pipeline control
- Quick gst-launch prototyping
GstWebRTC was developed with these criteria in mind. As such, the plug-in is ideal for:
- Embedded platforms
- Existing media servers/applications
- Advanced multimedia solutions
Architecture
Direction
Unidirectional Elements
RidgeRun's GstWebRTCSink and GstWebRTCSrc are used as standard GStreamer sink and source elements, respectively. If a pipeline uses only the sink element, it becomes a send-only endpoint, as shown in Figure 1.
Similarly, if a pipeline uses only the source element, it becomes a receive-only endpoint, as shown in Figure 2.
Finally, by using both unidirectional elements, the pipeline behaves as a bidirectional endpoint. Figure 3 shows this scenario.
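As an illustrative sketch, the send-only and receive-only cases can be prototyped with gst-launch. Note that the element names (`webrtcsink`, `webrtcsrc`), the encoder chain, and any signaling configuration below are placeholders and not confirmed by this page; run `gst-inspect-1.0` on a system with the plug-in installed to obtain the actual element names and properties.

```bash
# Hypothetical sketch of a send-only endpoint (Figure 1).
# "webrtcsink" is a placeholder for RidgeRun's GstWebRTCSink; the real
# factory name and signaling properties depend on the installed release.
gst-launch-1.0 \
  videotestsrc is-live=true ! videoconvert ! vp8enc ! rtpvp8pay ! \
  webrtcsink

# Hypothetical sketch of a receive-only endpoint (Figure 2): the source
# delivers RTP, which is depayloaded, decoded, and displayed.
gst-launch-1.0 \
  webrtcsrc ! rtpvp8depay ! vp8dec ! videoconvert ! autovideosink
```

A bidirectional endpoint would simply combine both branches in a single pipeline, as described above for Figure 3.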
Bidirectional Element
RidgeRun's GstWebRTCBin can act as both a sink and a source element. It can be used as a send-only endpoint, a receive-only endpoint, or a send-receive endpoint.
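A send-receive endpoint built around the bidirectional element might be sketched as follows. The factory name `webrtcbin` and the implicit pad linking are placeholders standing in for RidgeRun's GstWebRTCBin; the actual names are not given on this page and should be checked with `gst-inspect-1.0`.

```bash
# Hypothetical sketch: one branch feeds media into the named bin
# (send direction) while another consumes media from it (receive
# direction). Element, pad, and property names are placeholders.
gst-launch-1.0 \
  webrtcbin name=web \
  videotestsrc is-live=true ! vp8enc ! rtpvp8pay ! web. \
  web. ! rtpvp8depay ! vp8dec ! videoconvert ! autovideosink
```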
Media Type
Both the sink and the source elements may send/receive audio, video, or both simultaneously. The supported capabilities are determined at runtime based on the pads that were requested from the element. Simply put, if a GstWebRTCSink element was created with a single audio pad, it will only be capable of sending audio. Similarly, if a GstWebRTCSrc was created with both video and audio pads, it will be capable of receiving both media types. Table 1 summarizes the complete set of possible configurations. At the moment, only one pad of each media type can be created.
Table 1. Supported media configurations.

| Element | Configuration |
|---|---|
| GstWebRTCSink | Audio Only |
| GstWebRTCSink | Video Only |
| GstWebRTCSink | Audio+Video |
| GstWebRTCSrc | Audio Only |
| GstWebRTCSrc | Video Only |
| GstWebRTCSrc | Audio+Video |
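The pad-driven behavior described above can be sketched in gst-launch form: linking two branches into a named sink element requests one pad per branch, so the endpoint below would send both media types (the Audio+Video row for the sink). As before, the element name `webrtcsink` and the encoder chains are placeholders, not this plug-in's confirmed API.

```bash
# Hypothetical sketch: each branch linked into "web." requests one pad
# of the matching media type, so this endpoint sends audio and video.
# Element and pad names are placeholders; verify with gst-inspect-1.0.
gst-launch-1.0 \
  webrtcsink name=web \
  videotestsrc is-live=true ! vp8enc ! rtpvp8pay ! web. \
  audiotestsrc is-live=true ! opusenc ! rtpopuspay ! web.
```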