Surround View & Driver Assistance

What is Surround View & Driver Assistance?
By taking video from multiple cameras placed around a vehicle, the system creates a top-down, bird’s-eye view (BEV) of the surroundings. This stitched view helps drivers see blind spots, nearby obstacles, or pedestrians that may not be visible through mirrors or windows—making it easier to park, navigate tight spaces, or avoid collisions.
Real Use Case Scenario
A delivery company equips its vans with a surround view system to help drivers navigate busy city streets and tight loading areas. Each van has four cameras—on the front, rear, and both sides. Instead of viewing each camera separately, the system stitches all views together into a single bird’s-eye view shown on the dashboard.
When the driver pulls into a narrow alley to unload packages, they can see everything around the vehicle at once—cars, walls, curbs, and pedestrians. This makes parking and maneuvering much safer and faster, reducing the risk of accidents or delays during deliveries. The system also alerts the driver if someone or something moves too close to the van.

How can RDS help you build your Surround View and Driver Assistance system?
The RidgeRun Development Suite (RDS) provides an easy way to create a real-time Birds Eye View (BEV) using video from multiple cameras. By combining and projecting these streams into a top-down perspective, RDS enables full visibility around a vehicle—helping to simulate or prototype surround view systems.
This functionality is powered by the following integrated RidgeRun plugin:
- Birds Eye View – for real-time image generation of an aerial (Birds Eye) view.
See RDS in action for Surround View and Driver Assistance
The easiest way to see our products in action is by running the included demo applications. The Birds Eye View demo application is designed to show you how RDS can help you build a Surround View and Driver Assistance system. To run the demo application, follow these steps:
1. Start rr-media demo application
rr-media
2. Select Birds Eye View from the application menu
Available Plugins
1. Birds Eye View Plugin
Select plugin [0/1/2/3/4/5/6/7]: 1
3. Start the demo by selecting Run
▶ Birds Eye View Plugin
┌──────┬──────────────────────────────┐
│ 1    │ Performance monitoring (OFF) │
│ 2    │ Run                          │
│ 3    │ Back                         │
│ 4    │ Exit                         │
└──────┴──────────────────────────────┘
A window showing something like this should appear.

Build your own Surround View and Driver Assistance
1. Start with rr-media API
Now that you have seen RDS in action, it's time to build your own application. We recommend starting with the RR-Media API, which allows you to quickly build a proof of concept (POC) with an easy-to-use Python API.
For this, we will need the following RR-Media modules:
- gst.source.file: used to read videos from a file.
- gst.miso.bev: used to combine and project an aerial (Birds Eye) view.
- jetson.sink.video: allows you to display your video on screen.
We will use the ModuleGraph class to build the following graph:

Your Python script should look like this:
from rrmedia.media.core.factory import ModuleFactory
from rrmedia.media.core.graph import ModuleGraph

# Create graph
graph = ModuleGraph()

# Directory containing your videos
video_dir = "path/to/videos/bev"

# Add source files (update with your own videos).
graph.add(ModuleFactory.create(
    "gst.source.file", location=f"{video_dir}/cam0.mp4", name="cam0"))
graph.add(ModuleFactory.create(
    "gst.source.file", location=f"{video_dir}/cam1.mp4", name="cam1"))
graph.add(ModuleFactory.create(
    "gst.source.file", location=f"{video_dir}/cam2.mp4", name="cam2"))
graph.add(ModuleFactory.create(
    "gst.source.file", location=f"{video_dir}/cam3.mp4", name="cam3"))

# Birds Eye View MISO (update with your own calibration file).
graph.add(ModuleFactory.create(
    "gst.miso.bev",
    calibration_file=f"{video_dir}/calibration.json",
    name="bev0"))

# Video sink
graph.add(ModuleFactory.create(
    "jetson.sink.video", name="video_sink", extra_latency=110000000))

# Connect modules
graph.connect("cam0", "bev0", 0)
graph.connect("cam1", "bev0", 1)
graph.connect("cam2", "bev0", 2)
graph.connect("cam3", "bev0", 3)
graph.connect("bev0", "video_sink")

# Print pipeline
print(f"Graph pipeline: {graph.dump_launch()}")

# Start playback
graph.play()

# Start loop (this is a blocking function)
graph.loop()
When you run this script, you should see the video output as in the demo, and the underlying GStreamer pipeline will be printed to the console.
2. Build or Customize your own pipeline
RR-Media is designed for rapid prototyping and testing; however, in certain situations more control is needed and you have to go deeper into the application. In that scenario you have two options:
1. Extend RR-Media to fulfill your needs
2. Build your own GStreamer pipeline.
In this section, we will cover (2). If you want to know how to extend RR-Media, go to RR-MEDIA API.
A good starting point is the GStreamer pipeline obtained while running the RR-Media application. You can use it as your base and start customizing according to your needs.
1. Select your input
When working with GStreamer, it's important to define the type of input you're using—whether it's an image, video file, or camera. Here are some examples:
In the case of an MP4 video file called <MP4_FILE>:
INPUT="filesrc location=<MP4_FILE> ! qtdemux ! h264parse ! decodebin ! queue "
For a camera using NVArgus with a specific sensor ID <Camera ID>:
INPUT="nvarguscamera sensor-id=<Camera ID> ! nvvidconv ! queue "
2. Birds Eye View Setup
After defining the input, configure the Birds Eye View element with your calibration JSON file so the camera streams are combined and projected into an aerial view.
BEV="bev name=bev0 calibration-file=<CALIBRATION JSON>"
3. Output Options
You can choose how you want the output to be handled—whether you want to stream, display, or save the video.
To stream using RTSP on the desired <PORT>:
OUTPUT="nvv4l2h264enc ! h264parse ! video/x-h264, stream-format=avc, mapping=stream1 ! rtspsink service=<PORT> async-handling=true"
To display the output locally:
OUTPUT="DISP="nvvidconv ! queue leaky=downstream ! nveglglessink"
4. Final Pipeline
Finally, you can connect all the components using gst-launch or GStreamer Daemon (GSTD). Here is how you would do it with GSTD:
gstd & gstd-client pipeline_create p1 $BEV ! $INPUT_1 ! nvvidconv ! queue ! bev0.sink_0 \ $INPUT_2 $PROJECTOR ! nvvidconv ! queue ! bev0.sink_1\ .... bev0. ! $OUTPUT
Run the pipeline with the following GSTD command:
gstd-client pipeline_play p1
And then you can stop it with the following GSTD command:
gstd-client pipeline_stop p1
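Once you are done, you can typically remove the pipeline and stop the daemon:
gstd-client pipeline_delete p1
gstd --kill
If you prefer to test without GSTD, the same components can be assembled with gst-launch-1.0. The following is only a sketch: it assumes four input definitions ($INPUT_1 to $INPUT_4) built like the INPUT examples above, plus the BEV and OUTPUT definitions from the previous steps.
# Assemble the four inputs, the BEV element, and the output in a single launch line
gst-launch-1.0 -e $BEV \
  $INPUT_1 ! nvvidconv ! queue ! bev0.sink_0 \
  $INPUT_2 ! nvvidconv ! queue ! bev0.sink_1 \
  $INPUT_3 ! nvvidconv ! queue ! bev0.sink_2 \
  $INPUT_4 ! nvvidconv ! queue ! bev0.sink_3 \
  bev0. ! $OUTPUT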
Extend it Further
You can use GstInterpipe to link the stitched output into other modular pipelines for analytics, AI, or cloud storage.
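For instance, the stitched output can be published through an interpipesink and consumed by a second GSTD pipeline that runs analytics or recording. This is only a sketch: the pipeline names (bev_view, analytics) are hypothetical, and the consumer branch should be replaced with your own processing.
# Producer: expose the stitched BEV output to other pipelines
gstd-client pipeline_create bev_view $BEV \
  $INPUT_1 ! nvvidconv ! queue ! bev0.sink_0 \
  ... \
  bev0. ! interpipesink name=bev_out sync=false
# Consumer: pick up the stitched view (replace the display branch with AI or storage)
gstd-client pipeline_create analytics interpipesrc listen-to=bev_out is-live=true ! \
  nvvidconv ! queue ! nveglglessink
gstd-client pipeline_play bev_view
gstd-client pipeline_play analytics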