Remote Video Monitoring and Streaming

What is Remote Video Monitoring and Streaming?
Remote video monitoring is the ability to view live video from cameras over a network—whether it's on-site or from a distant location. The video captured by a camera can be streamed in real time to other devices like computers, smartphones, or cloud servers. This allows people to watch and record video feeds without needing to be physically near the cameras.
Real Use Case Scenario
A construction company wants to keep an eye on multiple job sites from their main office. Each site has cameras that record what's happening throughout the day. Instead of visiting each location, the video is streamed over the internet to the headquarters, where managers can watch live feeds from a central dashboard.
At the same time, the video is also saved to a secure server in the cloud for future review. If something happens on-site—like an accident or equipment issue—the team can quickly check the live or recorded video to respond properly, helping improve safety and accountability.

How can RDS help you build your Remote Video Monitoring and Streaming system?
The RidgeRun Development Suite (RDS) enables seamless remote monitoring by integrating key components such as dynamic pipeline switching with GstInterpipe, powered by the GStreamer Daemon (GSTD).
This functionality is powered by a set of integrated RidgeRun plugins, including:
- GstRtspSink - for RTSP streaming.
- GStreamer Daemon - for building your own media server.
- GstInterpipe - for dynamically switching between pipelines (see the sketch below).
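As a first taste of how GstInterpipe works, here is a minimal sketch that runs two independent pipeline branches in a single gst-launch-1.0 process: a source branch ending in an interpipesink, and a display branch whose interpipesrc listens to it by name. This assumes gst-interpipe is installed, and videotestsrc stands in for a real camera:

gst-launch-1.0 \
  videotestsrc is-live=true ! interpipesink name=cam0 sync=true \
  interpipesrc listen-to=cam0 is-live=true ! queue ! autovideosink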
See RDS in action for Remote Video Monitoring and Streaming
The easiest way to experience our products in action is by running the included demo applications. The Gstd Dynamic Source Switcher and the Dynamic Source Switcher are designed to demonstrate how RDS can assist you in building a remote video monitoring and streaming solution.
Both demos showcase how the input source changes periodically. To run the demo applications, follow these steps:
0. Start GSTD (only required for the Gstd Dynamic Source Switcher)
gstd &
1. Start the RR-Media demo application
rr-media
2. Select Gstd Dynamic Source Switcher or Dynamic Source Switcher from the application menu
For the Gstd Dynamic Source Switcher:

Available Plugins
...
2. Gstd Dynamic Source Switcher
...
Select plugin [0/1/2/3/4/5/6/7]: 2

For the Dynamic Source Switcher:

Available Plugins
...
3. Dynamic Source Switcher
...
Select plugin [0/1/2/3/4/5/6/7]: 3
3. Start the demo by selecting Run
▶ Gstd Dynamic Source Switcher
┌──────┬──────────────────────────────┐
│ 1    │ Performance monitoring (OFF) │
│ 2    │ Run                          │
│ 3    │ Back                         │
│ 4    │ Exit                         │
└──────┴──────────────────────────────┘
A window showing something like this should appear.
Build your own Remote Video Monitoring and Streaming System
1. Start with the RR-Media API
Now that you have seen RDS in action, it's time to build your own application. We recommend starting with the RR-Media API, which lets you quickly build a proof of concept (POC) with an easy-to-use Python API {ADD LINK TO RR-MEDIA API DOCUMENTATION}.
For this, we will need the following RR-Media modules:
- gst.source.file: used to read videos from a file.
- gst.source.interpipe & gst.sink.interpipe: used to interconnect pipelines.
- jetson.sink.video: allows you to display your video on screen.
We will use the ModuleGraph module to build the following graphs:

  input0 (gst.source.file) --> intersink0 (gst.sink.interpipe)
  input1 (gst.source.file) --> intersink1 (gst.sink.interpipe)
  intersrc0 (gst.source.interpipe, listening to intersink0) --> video_sink (jetson.sink.video)

Your Python script should look like this:
from pygstc.gstc import GstcError, GstdClient, GstdError
from rrmedia.media.core.factory import ModuleFactory
from rrmedia.media.core.graph import ModuleGraph

# Create the GSTD client
gstc = GstdClient()

# Directory containing your videos
video_dir = "path/to/videos/samples"

# Create the graphs required for this example
graph0 = ModuleGraph()
graph1 = ModuleGraph()
graph2 = ModuleGraph()

## Design of graph 0
# Create source for graph 0
graph0.add(ModuleFactory.create(
    "gst.source.file",
    location=f"{video_dir}/sample0.mp4",
    name="input0"
))
# Create the interpipesink for graph 0
graph0.add(ModuleFactory.create(
    "gst.sink.interpipe",
    sync=True,
    name="intersink0"
))
# Connect the modules for graph 0
graph0.connect("input0", "intersink0")

## Design of graph 1
# Create source for graph 1
graph1.add(ModuleFactory.create(
    "gst.source.file",
    location=f"{video_dir}/sample1.mp4",
    name="input1"
))
# Create the interpipesink for graph 1
graph1.add(ModuleFactory.create(
    "gst.sink.interpipe",
    sync=True,
    name="intersink1"
))
# Connect the modules for graph 1
graph1.connect("input1", "intersink1")

## Design of graph 2
# Create the interpipesrc for graph 2
graph2.add(ModuleFactory.create(
    "gst.source.interpipe",
    listen_to="intersink0",
    name="intersrc0"
))
# Video sink for graph 2
graph2.add(ModuleFactory.create(
    "jetson.sink.video",
    name="video_sink",
    extra_latency=110000000
))
# Connect the modules for graph 2
graph2.connect("intersrc0", "video_sink")

# Create the pipelines
gstc.pipeline_create("input_1", graph0.dump_launch())
gstc.pipeline_create("input_2", graph1.dump_launch())
gstc.pipeline_create("output_1", graph2.dump_launch())

# Print the resulting pipelines
print("Graph pipeline 1: %s" % graph0.dump_launch())
print("Graph pipeline 2: %s" % graph1.dump_launch())
print("Graph pipeline 3: %s" % graph2.dump_launch())

# Run the pipelines
gstc.pipeline_play("input_1")
gstc.pipeline_play("input_2")
gstc.pipeline_play("output_1")

# Switch the output to listen to the second input
gstc.element_set("output_1", "intersrc0", "listen-to", "intersink1")
When you run this script, you should see your video playing as in the demo, and the pipelines being used will be printed to the console.
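Because the pipelines live inside GSTD, you can also toggle the active source from a shell while the script's pipelines keep running. Here is a minimal sketch using gstd-client, with pipeline and element names taken from the script above:

while true; do
  gstd-client element_set output_1 intersrc0 listen-to intersink1
  sleep 5
  gstd-client element_set output_1 intersrc0 listen-to intersink0
  sleep 5
done

This reproduces the periodic source switching you saw in the demo.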
2. Build or Customize your own pipeline
RR-Media is designed for easier prototyping and testing; however, in certain situations more control is needed and you have to go deeper into the application. In that scenario, you have two options:
1. Extend RR-Media to fulfill your needs
2. Build your own GStreamer pipeline.
In this section, we will cover option (2). If you want to know how to extend RR-Media, go to {ADD LINK}.
A good starting point is the GStreamer pipeline obtained while running the RR-Media application. You can use it as your base and start customizing according to your needs.
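If GSTD is running, you can confirm what RR-Media created by listing the loaded pipelines by name, then use their descriptions as your base:

gstd-client list_pipelines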
1. Select your input
When working with GStreamer, it's important to define the type of input you're using, whether it's an image, video file, or camera. Here are some examples.
For an MP4 video file <MP4_FILE>:
INPUT="filesrc location=<MP4 file> ! qtdemux ! h264parse ! decodebin ! queue "
For a camera using NVArgus with a specific sensor ID <Camera ID>:
INPUT="nvarguscamera sensor-id=<Camera ID> ! nvvidconv ! queue "
2. Interpipesink Setup
After defining the inputs, you will need to define an interpipesink element for each one:
INTERPIPESINK_1="interpipesink sync=true name=interpipesink_1"
INTERPIPESINK_2="interpipesink sync=true name=interpipesink_2"
3. Interpipesrc
Next, create the interpipesrc for the output.
INTERPIPESRC="interpipesrc listen-to=interpipesink_0 name=interpipesrc1 ! queue !"
4. Output Options
You can choose how you want the output to be handled—whether you want to stream, display, or save the video.
To stream using RTSP on the desired <PORT>:
OUTPUT="nvv4l2h264enc ! h264parse ! video/x-h264, stream-format=avc, mapping=/stream1 ! rtspsink service=<PORT> async-handling=true"
To display the output locally:
OUTPUT="DISP="nvvidconv ! queue leaky=downstream ! nveglglessink"
5. Final Pipeline
Finally, you can connect all components using gst-launch or GStreamer Daemon (GSTD), which also lets you control which source you want to see.
gstd & gstd-client pipeline_create p1 $INPUT_1 ! $INTERPIPESINK_1 gstd-client pipeline_create p1 $INPUT_2 ! $INTERPIPESINK_2 gstd-client pipeline_create o1 $INTERPIPESRC ! $OUTPUT
Then select the active source with GSTD as follows:
gstd-client "element_set o1 interpipesrc1 listen-to interpipesink_2"