GStreamer Motion Detection Pipelines

From RidgeRun Developer Wiki



Previous: Examples/Library_Usage Index Next: Performance






This section provides examples of pipelines that demonstrate how to use the RidgeRun motion detection solution. The pipelines primarily use the rrmotiondetectionbin element, which handles the entire motion detection process, and the rrmotionoverlay element, which provides a visual representation of the detections. The examples also point out some pipeline restrictions imposed by the NVIDIA conversion element and by gst-cuda itself.

Video Test

The following examples showcase basic pipelines that use videotestsrc to display the ball pattern, which has a clear movement flow.

The first pipeline is a simple example that detects motion in the ball pattern and draws a bounding box around the area of detection:

gst-launch-1.0 videotestsrc is-live=true pattern=ball ! rrmotiondetectionbin ! rrmotionoverlay thickness=2 ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420" ! nv3dsink

Please note that we have not specified any input format restrictions, which means the pattern will be processed in grayscale. To display the pattern, we convert it to the I420 format, which we know is compatible with nv3dsink; this element has some format restrictions, and the negotiation process may not work as expected for other formats.


If you would like to receive the video in the RGBA format, there are some restrictions that you should be aware of. You can use a pipeline similar to the one below, which utilizes the system memory RGBA format. However, you must set the 'grayscale' property to false in order to perform internal motion processing in RGBA format. If you leave this property as true, you may encounter a negotiation error. This is because the internal conversion element, nvvidconv, cannot convert from system memory to system memory. Additionally, the base classes of the motion elements provided by gst-cuda do not support NVMM memory with gray format.


It is worth noting that we have set the rrmotionoverlay color property to define the bounding box color, which can be chosen based on your preference.

# Use red line color for bounding boxes
export COLOR=0xffff0000

# Use line below to change color to green
# export COLOR=0xff00ff00

# Use line below to change color to blue
# export COLOR=0xff0000ff

gst-launch-1.0 videotestsrc is-live=true pattern=ball ! video/x-raw,format=RGBA ! rrmotiondetectionbin grayscale=false ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink
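Judging from the values above, the color property appears to pack the bounding box color as a 32-bit 0xAARRGGBB value (alpha, red, green, blue). A minimal Python sketch of that assumed layout, using a hypothetical decode_argb helper:

```python
def decode_argb(color: int):
    """Split an assumed 0xAARRGGBB 32-bit value into its four channels."""
    alpha = (color >> 24) & 0xFF
    red = (color >> 16) & 0xFF
    green = (color >> 8) & 0xFF
    blue = color & 0xFF
    return alpha, red, green, blue

# 0xffff0000 from the example above: fully opaque red
print(decode_argb(0xFFFF0000))  # (255, 255, 0, 0)
```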


However, if you would like to use grayscale for motion processing while still receiving the video in a color format, it is possible to do so; it is no secret that grayscale video processing uses fewer resources. To achieve this, you will need to include two additional conversions in the pipeline, as shown below:

export COLOR=0xff0000ff
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink


We have defined a background color for the ball pattern so you can verify that the final video is in color. As you can see, we have added an nvvidconv element to provide NVMM memory in RGBA format, which enables the internal bin conversion to transform it to system memory in GRAY format. Additionally, the second nvvidconv element is required for rrmotionoverlay: the bin provides NVMM memory, but our overlay element only handles system memory. If you were to create an element that processes the motion bounding boxes in NVMM memory, you would not need this conversion.

Motion detection in pattern ball test


Please take a look at the pipeline below, in which the upper half of the image has been defined as the region of interest using the motion detector's 'roi' property. You will notice that the bounding box for the ball is only drawn while the ball is located inside the selected region.

gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true motion_detector::roi="<<(float)0,(float)0,(float)1,(float)0.5>>" ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink
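Comparing the upper-half ROI above with the right-half ROI used later on this page, the roi value appears to be a normalized (x, y, width, height) tuple. Under that assumption, a small Python sketch of how an application could check whether a normalized box overlaps such an ROI:

```python
def box_in_roi(box, roi):
    """box and roi are (x, y, width, height) in normalized [0, 1] coordinates.
    Returns True if the box overlaps the ROI at all."""
    bx, by, bw, bh = box
    rx, ry, rw, rh = roi
    return bx < rx + rw and bx + bw > rx and by < ry + rh and by + bh > ry

# Upper half of the image, as in the pipeline above
upper_half = (0.0, 0.0, 1.0, 0.5)
print(box_in_roi((0.4, 0.1, 0.2, 0.2), upper_half))  # True
print(box_in_roi((0.4, 0.6, 0.2, 0.2), upper_half))  # False
```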

Camera Capture

For a more realistic example, you can feed the motion bin with video captured from a camera using the following pipeline:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true motion_detector::algorithm=mog2 ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v

Remember that you can choose to run the blob detection on the CPU or the GPU with the bin's cuda-blob-detection property. By default the bin uses the CPU element, but you can set cuda-blob-detection to true to use the GPU element. However, keep in mind that the CUDA version may reduce your frame rate, especially at larger resolutions. Therefore, we recommend the CPU version, which may consume slightly more CPU but does a better job of maintaining your real-time frame rate.

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue !  rrmotiondetectionbin grayscale=true cuda-blob-detection=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v

To conserve processing resources, it's possible to downscale the video for motion detection. The bounding box values are normalized, so you can use them on the original size video without any issues. Check out the pipeline below for an example:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! queue !  rrmotiondetectionbin grayscale=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false
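Because the reported coordinates are normalized (the on-new-motion JSON later on this page uses x1/y1/x2/y2 values between 0 and 1), mapping a box detected on the downscaled stream back to the original resolution is a simple multiplication. A minimal Python sketch with a hypothetical to_pixels helper:

```python
def to_pixels(box, width, height):
    """Scale a normalized (x1, y1, x2, y2) box to pixel coordinates."""
    x1, y1, x2, y2 = box
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

# A box detected on a 1080p downscaled frame maps directly onto the 4K original
print(to_pixels((0.25, 0.5, 0.75, 1.0), 3840, 2160))  # (960, 1080, 2880, 2160)
```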

Rather than displaying the results, it's possible to use appsink to obtain the buffer along with the corresponding motion metadata in your application. Alternatively, you can create a custom element that retrieves the metadata and processes it based on your specific requirements. Here's an example of how to draw motion bounding boxes and record to a file:

gst-launch-1.0 -e nvarguscamerasrc ! "video/x-raw(memory:NVMM),width=3840,height=2160" ! queue ! rrmotiondetectionbin name=bin  ! queue ! nvvidconv ! video/x-raw,format=RGBA ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! queue !  nvv4l2h264enc ! h264parse ! queue ! qtmux ! filesink location=test.mp4

Recorded File

To analyze the motion objects in a recorded file, you can utilize the following pipelines. Use the first pipeline for color format, and the second for grayscale format:

export FILE=<path to file>
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin name=bin ! queue ! nvvidconv ! video/x-raw,format=RGBA !  rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false
Recorded file motion detection in color


export FILE=<path to file>
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=GRAY8 ! queue ! rrmotiondetectionbin name=bin motion_detector::algorithm=mog2  noise_reduction::size=3 ! queue  !  rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false
Recorded file motion detection in gray format


In the grayscale pipeline, you may notice that we've made some changes to the bin's internal element properties. Specifically, we've changed the motion detection algorithm to MOG2 and the noise reduction kernel size to 3. It's important to remember that you can always access and modify the internal element properties to fine-tune and optimize the settings for your specific use case.

Using a recorded file can make it easier to see the roi (region of interest) property in action. This property enables the motion detection to focus solely on the area of interest. For instance, you can set the motion detection to only detect motion in the right half of the image with a pipeline like the following:

gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=GRAY8 ! queue ! rrmotiondetectionbin name=bin motion_detector::roi="<<(float)0.5,(float)0,(float)0.5,(float)1>>"  ! queue  !  rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false


Motion detection in right half ROI

Getting the motion signal

As an alternative to reading the motion meta directly from the processed buffers, you can connect to the on-new-motion signal from your application to retrieve the bounding box information. Here is a simple example using gstd to connect to the on-new-motion signal.

You need to have gstd installed; check the instructions at https://developer.ridgerun.com/wiki/index.php/GStreamer_Daemon_-_Building_GStreamer_Daemon if you don't have it already. Then follow the next steps to create a pipeline and connect to the motion signal:

  • Run gstd as a daemon:
gstd -e
  • Then get into the gstd-client interactive console:
$ gstd-client 
GStreamer Daemon  Copyright (C) 2015-2022 Ridgerun, LLC (http://www.ridgerun.com)
This program comes with ABSOLUTELY NO WARRANTY; for details type `warranty'.
This is free software, and you are welcome to redistribute it
under certain conditions; read the license for more details.
gstd>
  • Create the motion detection pipeline:
gstd> pipeline_create test filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin name=bin cuda-blob-detection=false ! perf ! fakesink
  • Start the pipeline
gstd> pipeline_play test
  • Connect to the signal; the command will wait until motion is detected and then print the motion JSON description.
gstd> signal_connect test bin on-new-motion
{
  "code" : 0,
  "description" : "Success",
  "response" : {
    "name" : "on-new-motion",
    "arguments" : [
        {
            "type" : "GstRrMotionDetectionBin",
            "value" : "(GstRrMotionDetectionBin) bin"
        },
        {
            "type" : "gchararray",
            "value" : "{\"ROIs\":[{\"motion\":[{\"x1\":0.13177083432674408,\"x2\":0.17578125,\"y1\":0.7282407283782959,\"y2\":0.80509257316589355}, 
            {\"x1\":0.62526041269302368,\"x2\":0.6484375,\"y1\":0.62870371341705322,\"y2\":0.75648152828216553}],\"name\":\"roi\",\"x1\":0,\"x2\":1,\"y1\":0,\"y2\":1}]}"
        }
    ]
  }
}
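The second signal argument is a JSON string describing each ROI and the motion boxes detected inside it. A minimal Python sketch of how an application could parse a payload shaped like the one above (coordinates shortened here for readability):

```python
import json

# Sample payload shaped like the on-new-motion description above
payload = ('{"ROIs":[{"motion":'
           '[{"x1":0.1318,"x2":0.1758,"y1":0.7282,"y2":0.8051},'
           '{"x1":0.6253,"x2":0.6484,"y1":0.6287,"y2":0.7565}],'
           '"name":"roi","x1":0,"x2":1,"y1":0,"y2":1}]}')

data = json.loads(payload)
for roi in data["ROIs"]:
    for box in roi["motion"]:
        # Normalized top-left / bottom-right corners of each detection
        print(roi["name"], box["x1"], box["y1"], box["x2"], box["y2"])
```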


