GStreamer Background Subtraction Camera based Motion Detection Plugin
Problems running the pipelines shown on this page? Please see our GStreamer Debugging guide for help.
Motion Detection Overview
Motion detection algorithms generally work by comparing the incoming video image to a reference image. The reference image can be one or more previous frames or a predefined background. Motion detection is accomplished by analyzing deviations from the reference and attributing each difference either to the presence of motion or to noise, such as unintended movement of the camera mount.
When the camera is stationary, a common video motion detection approach is to perform background subtraction. With background subtraction, a model of the static scene, called the background, is built. Incoming frames are compared to the background in order to detect regions of movement. Many methods exist for background subtraction; an overview of the most common approaches is given in Piccardi's 2004 IEEE paper, Background subtraction techniques: a review.
Other motion detection algorithms have been proposed, like Foreground Motion Detection by Difference-Based Spatial Temporal Entropy Image, which uses histograms of the difference between frames to calculate entropy. The magnitude of entropy is used to determine the magnitude of motion.
Approximate median method for background subtraction
From Aresh Saharkhiz: The approximate median method works as follows: if a pixel in the current frame has a value larger than the corresponding background pixel, the background pixel is incremented by one. Likewise, if the current pixel is less than the background pixel, the background pixel is decremented by one. In this way, the background eventually converges to an estimate where half the input pixels are greater than the background and half are less than the background—approximately the median (convergence time will vary based on frame rate and the amount of movement in the scene).
Turning background subtraction into a motion detection algorithm
A simple approach to turn the approximate median method for background subtraction into a motion detection algorithm is to add a counter that is incremented each time a background pixel is changed. Multiple frames must be processed before the background is stable, so early frames produce artificial motion detection reports. Once the background is stable, a noise threshold can be set: when the pixel change count rises above the noise threshold, motion is detected. Once motion is detected, it will typically take multiple frames after the motion stops before the background becomes stable again.
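The counter-plus-threshold logic above can be sketched as a small state machine (the names and constants—`motion_state`, `STABLE_FRAMES`, `NOISE_THRESHOLD`—are illustrative assumptions, not taken from the plugin):

```c
#include <stdbool.h>

#define STABLE_FRAMES   20   /* quiet frames required before the background is trusted */
#define NOISE_THRESHOLD 50   /* changed pixels tolerated as sensor noise */

typedef struct {
    int  stable_count;   /* consecutive quiet frames seen so far */
    bool bg_stable;      /* background considered converged */
} motion_state;

/* Feed the per-frame changed-pixel count from background subtraction;
 * returns true if this frame should be reported as motion. */
bool process_frame(motion_state *s, int change_count)
{
    if (change_count <= NOISE_THRESHOLD) {
        /* Quiet frame: let the background converge. */
        if (!s->bg_stable && ++s->stable_count >= STABLE_FRAMES)
            s->bg_stable = true;
        return false;
    }
    /* Busy frame: the background is being disturbed. */
    s->stable_count = 0;
    return s->bg_stable;   /* only report motion once the background is stable */
}
```

Note that busy frames seen before the background stabilizes are ignored, which suppresses the artificial reports produced while the background estimate is still converging.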
This algorithm requires the camera to be steady, as any camera movement (such as wind-induced vibration) will shift the image, necessitating a more complex algorithm that first accounts for camera movement.
Simplified approximate median method for background subtraction
change_count = 0;
bg = bg_frame;
f = frame;

for (i = frame_size; i > 0; i--) {
    diff = *f - *bg;          /* current pixel minus background estimate */
    if (diff > 0) {
        (*bg)++;              /* nudge background up toward the input */
        change_count++;
    }
    if (diff < 0) {
        (*bg)--;              /* nudge background down toward the input */
        change_count++;
    }
    bg++;
    f++;
}
Simplified embedded motion detection
To support a motion detection algorithm (MDA) on an ARM processor, the approximate median algorithm for background subtraction appears to be the best choice. This MDA was developed, along with a general GStreamer MDA element framework, to allow pipelines to be created that detect motion. If a more complex motion detection algorithm is required, the code architecture can be reused. Implementation of the average distance algorithm for background subtraction MDA, along with the general GStreamer MDA element framework, is estimated at 80 hours.
Hardware accelerated motion detection
Depending on the target hardware, the gst-motion-detect GStreamer plugin could be adapted to take advantage of the platform's hardware acceleration capabilities. For example, on the Jetson TX1 platform the plugin could be modified to implement the motion detection algorithm using GPU acceleration. Alternatively, the motion detection algorithm could be implemented on a dedicated hardware unit such as a DSP or the M3/M4 ARM cores.
An alternate approach to motion detection is to use the results of the video encoder's motion vector calculation step to determine whether any motion is occurring in the video. Motion vectors are computed by video encoders such as H.264. For example, on the TI DM36x processors, you can access the motion vectors as described in the Using MV/SAD information from DM365 encoder in application document. Although that document focuses on encoding MPEG-4 or H.264, it explains the technical information needed to gain access to the motion vector data. No tests on the viability of using video encoder motion vector data for motion detection have yet been carried out. When such data becomes available, this wiki page will be updated.
If you are interested in a specific hardware-accelerated implementation of the gst-motion-detect GStreamer plugin, please feel free to contact us; our engineering team is ready to help.
Motion detection as part of a GStreamer video capture pipeline
GStreamer is a technology that allows dynamic streaming media pipelines to be easily created. GStreamer consists of hundreds of elements that can be connected in a pipeline. Motion detection can be accomplished by creating a new GStreamer element and including that element in the video pipeline. This allows the motion detection algorithm (MDA) to easily grab video frames as they move through the pipeline. The rate at which the MDA grabs frames can be adjusted to match the amount of CPU bandwidth that is available. The more available bandwidth, the more frames that are processed, and thus the more accurate the motion detection.
The MDA element reports changes in motion detection to the controlling application. The controlling application can take action, such as causing the GStreamer pipeline to start recording or stop recording based on the changes reported by the MDA. The motion detection element can be used with the pre-record element so that the activity that occurred prior to the motion being detected can also be recorded.
This design separates the controlling application logic from the streaming audio/video pipeline. Further the actual motion detection algorithm is loosely coupled to the MDA GStreamer element, so different algorithms can be developed and used without changing the rest of the system. GStreamer even provides a means that the MDA element could be controlled to change the algorithm that is being used without interfering with the streaming pipeline. This might be useful if one low complexity algorithm is used while video is being recorded (when less CPU is available) and a more complex algorithm is used when the system is trying to detect motion.
Gst-Motion-Detect GStreamer plugin
RidgeRun has developed a motion detection GStreamer element that detects motion in an incoming video stream. The element implements the approximate median method for background subtraction with an adaptive background. This method matches higher-complexity algorithms in detection performance while remaining resilient to constant noise and sudden lighting changes in the scene.
The motion detection element has been developed for both GStreamer 1.0 and 0.10. The element runs on any platform (it is hardware independent), since the motion detection algorithm executes on the general-purpose processor. The gst-motion-detect element is optimized and highly configurable, both for controlling the approximate median algorithm and for minimizing CPU load to obtain the best performance according to user needs, allowing it to be integrated into highly constrained embedded systems.
Some of the element properties that reduce CPU consumption are:
- Sample size and location: You can define a rectangular region equal to or smaller than the full frame size and place it anywhere in the frame. The motion detection analysis is only executed inside the sample rectangle. Related element properties: window-x1, window-x2, window-y1, window-y2, sample-width, sample-height.
- Interval frame analysis: Only analyze every nth frame. Related element property: interval.
The motiondetect element generates start-motion and stop-motion signals when it detects movement and when the movement stops, respectively, as shown below:
0:02:13.344770492 INFO motiondetect motiondetect.c:147:motiondetect_alarm: 10 ******* SENDING START MOTION SIGNAL *******
0:02:14.673585873 INFO motiondetect motiondetect.c:157:motiondetect_alarm: 10 ******* SENDING STOP MOTION SIGNAL *******
0:02:59.538097259 INFO motiondetect motiondetect.c:147:motiondetect_alarm: 10 ******* SENDING START MOTION SIGNAL *******
0:02:59.702482612 INFO motiondetect motiondetect.c:157:motiondetect_alarm: 10 ******* SENDING STOP MOTION SIGNAL *******
The above is just GStreamer info output with extra debug information enabled. The element also emits GStreamer signals that can be routed to the controlling application.
There is an element property that modifies the video frame data to make the movement trail visible: setting motion-trace=true in the element's property configuration colors the foreground pixels, so a kind of movement wave can be seen in the displayed video.
The output of gst-inspect below provides technical details about the motion detection element.
Factory Details:
  Rank                     none (0)
  Long-name                Motion Detect Element
  Klass                    Filter/Analyzing/Video
  Description              Detects motion from video streaming
  Author                   Daniel Garbanzo <daniel.garbanzo@ridgerun.com>

Plugin Details:
  Name                     motiondetect
  Description              Detects motion
  Filename                 plugins/.libs/libgstmotiondetect.so
  Version                  1.0.0
  License                  Proprietary
  Source module            gst-motiondetect
  Binary package           gst-motiondetect
  Origin URL               http://www.ridgerun.com/

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstMotiondetect

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw
                 format: NV12
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-raw
                 format: NV12
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

Element Flags:
  no flags set

Element Implementation:
  Has change_state() function: gst_element_change_state_func

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "motiondetect0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  enable              : Enable motion detection analysis
                        flags: readable, writable
                        Boolean. Default: true
  sensitivity         : Amount of color change required for a pixel to be considered in motion
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 255 Default: 0
  threshold           : Percentage of changed pixels in the motion detection window to consider a frame in motion
                        flags: readable, writable
                        Integer. Range: 1 - 100 Default: 1
  frames-to-motion    : Number of continuous frames surpassing the threshold to emit the start-motion signal
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 1
  interval            : Only analyze every nth frame
                        flags: readable, writable
                        Integer. Range: 1 - 2147483647 Default: 1
  frames-offset       : Number of frames to ignore before initializing
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 0
  window-x1           : Upper left corner of the motion detection window within the sample space
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 0
  window-y1           : Upper left corner of the motion detection window within the sample space
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 0
  window-x2           : Lower right corner of the motion detection window within the sample space
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 0
  window-y2           : Lower right corner of the motion detection window within the sample space
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 0
  sample-width        : Sample space width
                        flags: readable, writable
                        Unsigned Integer. Range: 1 - 2147483647 Default: 1
  sample-height       : Sample space height
                        flags: readable, writable
                        Unsigned Integer. Range: 1 - 2147483647 Default: 1
  adapt-speed         : Background adaptation speed
                        flags: readable, writable
                        Unsigned Integer. Range: 1 - 5 Default: 1
  bg-enable           : Enable background stabilization
                        flags: readable, writable
                        Boolean. Default: true
  bg-frames           : Number of stable frames to reach to consider a stable background
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 20
  bg-timeout          : Timeout in seconds to stop detecting a stable background
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 2147483647
  motion-trace        : Enable motion trace visualization (color foreground pixels)
                        flags: readable, writable
                        Boolean. Default: false

Element Signals:
  "bg-timeout"   : void user_function (GstElement* object, gpointer user_data);
  "start-motion" : void user_function (GstElement* object, gint arg0, gpointer user_data);
  "stop-motion"  : void user_function (GstElement* object, gpointer user_data);
  "bg-stable"    : void user_function (GstElement* object, gpointer user_data);
Building the project
To build the project, first install the required dependencies:
sudo apt-get install \
    libgstreamer1.0-dev \
    libgstreamer-plugins-base1.0-dev \
    libgstreamer-plugins-good1.0-dev \
    libgstreamer-plugins-bad1.0-dev
After that, proceed to build the package:
./autogen.sh
./configure --libdir=/usr/lib/x86_64-linux-gnu/  # GStreamer will look for its plug-ins in this standard location
make
sudo make install
The location of the plug-in may vary according to the system. The following table summarizes some standard locations for different setups:

System             | Libdir
-------------------|---------------------------
Ubuntu             | /usr/lib/x86_64-linux-gnu/
Mac OSX (macports) | /opt/local/lib
Tegra X1/X2        | /usr/lib/aarch64-linux-gnu
PC gst-motion-detect GStreamer pipelines
In this section you will find some example test pipelines for the PC build of the motiondetect element.
Video capture at 640x480 and motion-detect at full frame
gst-launch-1.0 v4l2src ! "video/x-raw,width=640,height=480,framerate=30/1" ! queue ! \
    videoconvert ! "video/x-raw,format=NV12" ! queue ! \
    motiondetect window-x1=0 window-y1=0 window-x2=639 window-y2=479 \
        sample-width=640 sample-height=480 bg-timeout=30 bg-frames=2 frames-offset=0 \
        frames-to-motion=1 threshold=10 sensitivity=5 enable=true ! \
    videoconvert ! perf ! autovideosink sync=false --gst-debug=motiondetect:4
Videotestsrc at 640x480 and motion-detect at a 320x240 window on the top left corner
This pipeline generates video using videotestsrc and passes the stream to the motiondetect element, which processes the video and detects motion in a 320x240 rectangle in the upper-left corner. The output is displayed using autovideosink.
gst-launch-1.0 videotestsrc pattern=ball is-live=true ! \
    "video/x-raw,format=NV12,width=640,height=480,framerate=30/1" ! \
    motiondetect window-x1=0 window-y1=0 window-x2=319 window-y2=239 \
        sample-width=320 sample-height=240 bg-timeout=5 ! \
    videoconvert ! autovideosink --gst-debug=motiondetect:4
Jetson TX1 gst-motion-detect GStreamer pipelines
In this section you will find some example test pipelines and performance measurements of CPU load percentage and frame-rate. The following pipelines are executed on a NVIDIA Jetson TX1 platform using the RidgeRun Jetson TX1 SDK. In all test pipelines the sample rectangle is located in the upper-left corner.
Video capture at 1920x1080 @20fps and motion-detect at full frame (1920x1080) sample
export GST_DEBUG=*motion*:INFO
export DISPLAY=:0

CAPS_NVMM='video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
format=(string)NV12, framerate=(fraction)20/1'
CAPS='video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, \
framerate=(fraction)20/1'
MDA_PARMS='window-x1=0 window-y1=0 window-x2=1919 window-y2=1079 sample-width=1920 \
sample-height=1080 bg-timeout=30 bg-frames=20 frames-offset=10 \
frames-to-motion=2 threshold=7 sensitivity=7'

gst-launch-1.0 -v nvcamerasrc sensor-id=0 fpsRange="30 30" ! $CAPS_NVMM ! \
nvvidconv ! $CAPS ! motiondetect $MDA_PARMS ! videoconvert ! xvimagesink
Performance statistics
The CPU load percentage was measured using tegrastats:
- CPU load average with motiondetect element= 28.8%
- CPU load average without motiondetect element= 26.08%
- CPU load average consumed by motiondetect element= 2.72%
Video capture at 1280x720 @30fps and motion-detect at full frame (1280x720) sample
export GST_DEBUG=*motion*:INFO
export DISPLAY=:0

CAPS_NVMM='video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, \
format=(string)NV12, framerate=(fraction)30/1'
CAPS='video/x-raw, width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1'
MDA_PARMS='window-x1=0 window-y1=0 window-x2=1279 window-y2=719 sample-width=1280 \
sample-height=720 bg-timeout=30 bg-frames=20 frames-offset=10 \
frames-to-motion=3 threshold=5 sensitivity=5'

gst-launch-1.0 -v nvcamerasrc sensor-id=0 fpsRange="30 30" ! $CAPS_NVMM ! \
nvvidconv ! $CAPS ! motiondetect $MDA_PARMS ! videoconvert ! xvimagesink
Performance Statistics
The CPU load percentage was measured using tegrastats:
- CPU load average with motiondetect element= 25.6%
- CPU load average without motiondetect element= 21.05%
- CPU load average consumed by motiondetect element= 4.91%
Video capture at 640x480 @30fps and motion-detect at full frame (640x480) sample
export GST_DEBUG=*motion*:INFO
export DISPLAY=:0

CAPS_NVMM='video/x-raw(memory:NVMM), width=(int)640, height=(int)480, \
format=(string)NV12, framerate=(fraction)30/1'
CAPS='video/x-raw, width=(int)640, height=(int)480, format=(string)NV12, framerate=(fraction)30/1'
MDA_PARMS='window-x1=0 window-y1=0 window-x2=639 window-y2=479 sample-width=640 \
sample-height=480 bg-timeout=30 bg-frames=20 frames-offset=10 \
frames-to-motion=2 threshold=3 sensitivity=6'

gst-launch-1.0 -v nvcamerasrc sensor-id=0 fpsRange="30 30" ! $CAPS_NVMM ! \
nvvidconv ! $CAPS ! motiondetect $MDA_PARMS ! videoconvert ! xvimagesink
Performance Statistics
The CPU load percentage was measured using tegrastats:
- CPU load average with motiondetect element= 25.68%
- CPU load average without motiondetect element= 24.79%
- CPU load average consumed by motiondetect element= 0.89%
Video capture at 320x240 @30fps and motion-detect at full frame (320x240) sample
export GST_DEBUG=*motion*:INFO
export DISPLAY=:0

CAPS_NVMM='video/x-raw(memory:NVMM), width=(int)320, height=(int)240, \
format=(string)NV12, framerate=(fraction)30/1'
CAPS='video/x-raw, width=(int)320, height=(int)240, format=(string)NV12, framerate=(fraction)30/1'
MDA_PARMS='window-x1=0 window-y1=0 window-x2=319 window-y2=239 sample-width=320 \
sample-height=240 bg-timeout=30 bg-frames=20 frames-offset=20 \
frames-to-motion=4 threshold=3 sensitivity=3'

gst-launch-1.0 -v nvcamerasrc sensor-id=0 fpsRange="30 30" ! $CAPS_NVMM ! \
nvvidconv ! $CAPS ! motiondetect $MDA_PARMS ! videoconvert ! xvimagesink
Performance Statistics
The CPU load percentage was measured using tegrastats:
- CPU load average with motiondetect element= 25.55%
- CPU load average without motiondetect element= 25.26%
- CPU load average consumed by motiondetect element= 0.29%
Video capture at 1280x720 @30fps and motion-detect at 320x240 sample
export GST_DEBUG=*motion*:INFO
export DISPLAY=:0

CAPS_NVMM='video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, \
format=(string)NV12, framerate=(fraction)30/1'
CAPS='video/x-raw, width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1'
MDA_PARMS='window-x1=0 window-y1=0 window-x2=319 window-y2=239 sample-width=320 \
sample-height=240 bg-timeout=30 bg-frames=20 frames-offset=20 \
frames-to-motion=4 threshold=3 sensitivity=3'

gst-launch-1.0 -v nvcamerasrc sensor-id=0 fpsRange="30 30" ! $CAPS_NVMM ! \
nvvidconv ! $CAPS ! motiondetect $MDA_PARMS ! videoconvert ! xvimagesink
Performance Statistics
The CPU load percentage was measured using tegrastats:
- CPU load average with motiondetect element= 21.31%
- CPU load average without motiondetect element= 20.22%
- CPU load average consumed by motiondetect element= 1.09%
See also
- https://courses.engr.illinois.edu/ece420/fa2017/BGS_review.pdf
- https://ieeexplore.ieee.org/document/1414436/
- http://www710.univ-lyon1.fr/~bouakaz/OpenCV-0.9.5/docs/ref/OpenCVRef_Motion_Tracking.htm
- http://www710.univ-lyon1.fr/~bouakaz/OpenCV-0.9.5/docs/ref/OpenCVRef_ImageProcessing.htm
- http://areshmatlab.blogspot.com/2010/05/medium-complexity-background.html
Contact Us
For direct inquiries, please refer to the contact information available on our Contact page. Alternatively, you may complete and submit the form provided at the same link. We will respond to your request at our earliest opportunity.