GstWebRTC - GstWebRTC Basics
Revision as of 16:58, 7 July 2017
This page describes the basic features of RidgeRun's GstWebRTC GStreamer plug-in.
What is GstWebRTC?
GstWebRTC is a GStreamer plug-in that turns pipelines into WebRTC compliant endpoints. The plug-in is equipped with three elements:
- GstWebRTCSrc
- GstWebRTCSink
- GstWebRTCBin
These elements allow audio and/or video streaming using the WebRTC protocol.
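Once the plug-in is installed, each element's capabilities, pads, and properties can be inspected with the standard GStreamer tooling. The factory names below are assumptions for illustration (the registered names may differ); confirm them against your installation:

```
# Hypothetical factory names -- confirm the actual ones with your
# GstWebRTC installation, e.g. via: gst-inspect-1.0 | grep -i webrtc
gst-inspect-1.0 webrtcsink
gst-inspect-1.0 webrtcsrc
gst-inspect-1.0 webrtcbin
```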
Why GstWebRTC?
Other WebRTC solutions automatically detect the video and audio sources, as well as the decoders, encoders, and other elements used to build the pipeline. This is convenient for many applications, but proves limiting for several other use cases, such as:
- Extend existing pipeline to support WebRTC streaming
- Use non-standard pipeline configurations
- High performance pipeline tuning for resource critical systems
- Dynamic stream handling in a running pipeline
- Fine grained pipeline control
- Quick gst-launch prototyping
GstWebRTC was developed with these criteria in mind. As such, the plug-in is ideal for:
- Embedded platforms
- Existing media servers/applications
- Advanced multimedia solutions
Architecture
Direction
Unidirectional elements
RidgeRun's GstWebRTCSink and GstWebRTCSrc are used as standard GStreamer sink and source elements respectively. If a pipeline only uses the sink element, it becomes a send-only endpoint.
Similarly, if a pipeline only uses the source it becomes a receive-only endpoint.
Finally, by using both elements, the pipeline behaves as a bidirectional endpoint. Figures 1, 2 and 3 show the scenarios described above, respectively.
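The three endpoint modes can be sketched as gst-launch pipelines. These are illustrative sketches only: the element factory names, the encoder/decoder choices, and the linking details are assumptions, not the plug-in's documented interface.

```
# Send-only endpoint: only the sink element is used (hypothetical names).
gst-launch-1.0 videotestsrc ! x264enc ! webrtcsink

# Receive-only endpoint: only the source element is used.
gst-launch-1.0 webrtcsrc ! avdec_h264 ! autovideosink

# Bidirectional endpoint: both elements in the same pipeline.
gst-launch-1.0 videotestsrc ! x264enc ! webrtcsink \
               webrtcsrc ! avdec_h264 ! autovideosink
```

In a real deployment the elements would additionally be configured with signaling parameters (server URL, session identifiers, and so on), which are omitted here because their property names are plug-in specific.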
Media Type
Both the sink and source elements may send or receive audio, video, or both simultaneously. The supported capabilities are determined at runtime based on the pads that were requested from the elements. Simply put, if a GstWebRTCSink element was created with a single audio pad, it will only be capable of sending audio. Similarly, if a GstWebRTCSrc was created with both video and audio pads, it will be capable of receiving both media types. Table 1 summarizes the complete set of possible configurations. At the time being, only one pad of each media type can be created.
Table 1. Supported media configurations.

| Element | Configuration |
|---|---|
| GstWebRTCSink | Audio only |
| GstWebRTCSink | Video only |
| GstWebRTCSink | Audio + Video |
| GstWebRTCSrc | Audio only |
| GstWebRTCSrc | Video only |
| GstWebRTCSrc | Audio + Video |
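The pad-based capability selection described above could look roughly as follows in gst-launch syntax, using named-element pad linking. The pad names (audio, video) and the factory name are assumptions for illustration; check the element's pad templates with gst-inspect-1.0:

```
# Hypothetical sketch: a GstWebRTCSink with both an audio and a video
# pad requested, making the endpoint capable of sending both media types.
gst-launch-1.0 \
  webrtcsink name=webrtc \
  videotestsrc ! x264enc ! webrtc.video \
  audiotestsrc ! opusenc ! webrtc.audio
```

Because the pads are requested explicitly, creating only the video branch (and omitting the audio one) would yield a video-only sender, matching the configurations in Table 1.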