GstWebRTC - GstWebRTCSrc
Revision as of 17:41, 7 July 2017
This page gives an overview of the GstWebRTCSrc element.
Architecture
RidgeRun's GstWebRTCSrc is used as a standard GStreamer source element. A pipeline that uses the GstWebRTCSrc element becomes a receive-only endpoint, as shown in Figure 1.
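As an illustration of the receive-only role described above, a pipeline could take the element's source pads and route them to local audio and video sinks. The following is a minimal sketch only: the element name (webrtcsrc), the pad-linking syntax, and the absence of any signaling properties are assumptions, not the confirmed API; consult RidgeRun's GstWebRTC documentation for the actual element name and required properties.

```shell
# Hypothetical receive-only pipeline (element and pad names are assumptions):
# the webrtcsrc element would negotiate the WebRTC session and push the
# decoded-side streams out of its source pads into local sinks.
gst-launch-1.0 webrtcsrc name=web \
    web. ! queue ! videoconvert ! autovideosink \
    web. ! queue ! audioconvert ! audioresample ! autoaudiosink
```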
Media Type
The GstWebRTCSrc element may receive audio, video, or both simultaneously. The supported capabilities are determined at runtime, based on the pads requested from the element. Simply put, if a GstWebRTCSrc was created with both video and audio pads, it will be capable of receiving both media types. Table 1 summarizes the complete set of possible configurations. For the time being, only one pad of each media type can be created.
Audio Only
Video Only
Audio+Video

Table 1. GstWebRTCSrc Supported Capabilities
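The configurations listed in Table 1 would differ only in which pads are requested from the element. The sketches below illustrate the audio-only and video-only cases; as before, the element name (webrtcsrc) and the pad-linking syntax are assumptions for illustration, not the confirmed API.

```shell
# Audio-only endpoint (hypothetical element name): only an audio pad is
# used, so the endpoint can receive audio but not video.
gst-launch-1.0 webrtcsrc name=web \
    web. ! queue ! audioconvert ! audioresample ! autoaudiosink

# Video-only endpoint: only a video pad is used.
gst-launch-1.0 webrtcsrc name=web \
    web. ! queue ! videoconvert ! autovideosink
```

Since only one pad of each media type can be created, the Audio+Video case is simply the combination of the two branches above on a single webrtcsrc instance.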