RidgeRun Developer Wiki - User contributions (Mmontero), MediaWiki 1.40.0, retrieved 2024-03-29
RidgeRun Image Projector/User Guide/Rrprojector - 2023-10-09 - Mmontero: Created page
<hr />
<div><noinclude><br />
{{RidgeRun_Image_Projector/Head|previous=User Guide|next=User Guide/Rrprojector|metakeywords=Image Stitching, CUDA, Stitcher, Projector, Equirectangular Projection, 360}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE:RidgeRun Projector Plugin|noerror}}<br />
<br />
The RidgeRun Projector is a GStreamer plugin intended to contain elements that perform video projections, transforming images into new ones using a mapping relationship. Currently, the plugin contains one element: rreqrprojector, which transforms a fisheye image into an equirectangular projection. <br />
<br />
== RidgeRun Equirectangular Projector ==<br />
<br />
RidgeRun Equirectangular Projector is a single-input/single-output GStreamer element named rreqrprojector. As its name states, it transforms an input image, specifically a fisheye image, into an equirectangular projection. <br />
<br />
The mapping required to transform the input image to equirectangular is determined by the element’s properties; make sure the properties correctly describe your fisheye lens and camera orientation to obtain an accurate equirectangular image. See the property descriptions below.<br />
<br />
This element uses CUDA to accelerate the transformation, inheriting from RidgeRun’s CudaBaseFilter base class from the gst-cuda project. Because it uses CUDA, the element has special memory requirements: it can only handle CUDA or NVMM memory. However, the element proposes an allocator to upstream elements so they can provide CUDA memory, which lets most elements connect to rreqrprojector without extra work. Also, the element currently only supports the RGBA format on its src and sink pads.<br />
<br />
For an equirectangular image the width:height ratio is 2:1, so the element’s default caps negotiation keeps that ratio, maintaining the input height and setting the output width to twice the height. However, the element can handle any resolution you define, at the cost of distorting the image.<br />
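The default negotiation can be checked with quick shell arithmetic; for example, a fisheye input of height 1080 keeps its height and gets a width of twice that value:

```shell
# Default rreqrprojector output size: height preserved, width = 2 * height
INPUT_HEIGHT=1080
OUT_WIDTH=$((2 * INPUT_HEIGHT))
echo "${OUT_WIDTH}x${INPUT_HEIGHT}"   # prints 2160x1080
```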
<br />
=== Properties ===<br />
<br />
*'''center-x:''' An integer property that defines the center of the fisheye circle along the input image’s X (horizontal) axis. If 0, the center is automatically set to half of the input width.<br />
<br />
*'''center-y:''' An integer property that defines the center of the fisheye circle along the input image’s Y (vertical) axis. If 0, the center is automatically set to half of the input height.<br />
<br />
*'''crop:''' A boolean property that states whether the output image should be cropped to half its width when the lens aperture is 180 degrees or less. The objective of this property is to reduce the memory required to store the resulting equirectangular projection: if the lens aperture is 180 degrees or less, the right half of the image is black and provides no useful information, so cropping it saves unused memory and some processing time. This holds as long as the rot-z property (used for stitching) stays at 0; if you modify that angle, you may lose information when cropping the image.<br />
<br />
*'''lens:''' A float property that defines the fisheye lens aperture in degrees.<br />
<br />
*'''radius:''' An integer property that defines in pixels the radius of the circle containing the fisheye image.<br />
<br />
*'''rot-x:''' A float property that defines the camera’s tilt angle correction in degrees, between -180 and 180. This assumes a coordinate system where the X axis points out the side of the camera; rotating around that axis tilts the camera up and down. Use this property to correct the camera’s tilt angle, but note that you must set the actual camera rotation, not the correction needed. If your camera is looking slightly above the horizon, set a positive value indicating the angle at which it is looking up; if it is looking slightly down, set a negative value indicating that angle, so the projector can restore the camera’s center to the horizon. <br />
<br />
<br />
*'''rot-y:''' A float property that defines the camera’s roll angle correction in degrees, between -180 and 180. This assumes a coordinate system where the camera lens looks along the Y axis; rotating around that axis rolls the camera over its center. Use this property to correct the camera’s roll angle, but again set the actual camera rotation, not the correction needed. If your image is tilted slightly to the left (the camera rotated counter-clockwise), set a negative value to indicate that rotation and the projector will straighten the image. Conversely, if your image is tilted to the right, the projector must rotate it counter-clockwise to compensate, but you must set a positive value so the projector knows the camera was rotated clockwise by that angle. <br />
<br />
*'''rot-z:''' A float property that defines the camera’s pan angle correction in degrees, between -180 and 180. This assumes a coordinate system where the Z axis points up from the camera; rotating around that axis pans the camera around the 360-degree horizon. This property can be used for stitching to distribute the cameras along the longitude axis in combination with identity homographies, since the rotation takes care of the required translation. On the other hand, if you want to use the crop property, it is recommended to keep this rotation at 0 and make the required translations with the homography. <br />
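Putting the properties together, a pipeline like the sketch below (the numeric values are illustrative placeholders, not measurements from any real lens; adjust them to your setup) would configure rreqrprojector for a 195-degree lens whose circle is centered at (960, 540) with a 900-pixel radius and a slight upward tilt:

```bash
gst-launch-1.0 filesrc location=fisheye.jpeg ! jpegdec ! videoconvert ! queue ! \
    rreqrprojector lens=195.0 radius=900 center-x=960 center-y=540 rot-x=5.0 ! \
    videoconvert ! queue ! jpegenc ! filesink location=projected.jpeg
```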
<br />
=== Sample Pipeline ===<br />
<br />
The following pipeline reads a fisheye JPEG image, decodes it, and projects it to its equirectangular equivalent using the default property values. The result is saved as a JPEG image.<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 filesrc location=fisheye180.jpeg ! jpegdec ! videoconvert ! queue ! rreqrprojector ! videoconvert ! queue ! jpegenc ! filesink location=proj0.jpeg<br />
</syntaxhighlight><br />
<br />
<noinclude><br />
{{RidgeRun_Image_Projector/Foot|User Guide/Calibration|Examples}}<br />
</noinclude></div>
RidgeRun Image Projector/RidgeRun Image Projector Basics - 2023-10-09 - Mmontero
<hr />
<div><noinclude><br />
{{RidgeRun Image Projector/Head|previous=|next=RidgeRun Image Projector Overview|metakeywords=}}<br />
</noinclude><br />
<br />
<br />
{{DISPLAYTITLE:About 360 Video|noerror}}<br />
<br />
<br />
RidgeRun’s 360 stitcher is capable of creating a 360 frame from a group of fisheye cameras. The cameras’ combined FOV must cover the full 360-degree region around them; the minimum setup is 2 back-to-back cameras with 180 degrees of FOV each.<br />
<br />
The resulting 360 frame is simply an equirectangular projection of the world around the cameras: a planar representation of spherical world coordinates. The most familiar equirectangular projection is the world map, which maps evenly spaced meridians to evenly spaced vertical lines and evenly spaced circles of latitude to evenly spaced horizontal lines.<br />
<br />
[[File:World-360.jpg|600px|thumb|center]]<br />
[[File:World-projected.png|600px|thumb|center]]<br />
[[File:World-mapping.png|600px|thumb|center]]<br />
<br />
As you can infer from the world map above, the 360 image spans 360 degrees of longitude horizontally and 180 degrees of latitude vertically, so the 360 stitcher produces an equirectangular image with a 2:1 aspect ratio, as shown in the following image:<br />
<br />
[[File:equirectangular-rr.png|800px|thumb|center]]<br />
<br />
In the 360/equirectangular panoramic image all verticals remain vertical and the horizon becomes a straight line across the middle of the image. The image coordinates relate linearly to pan and tilt angles in the real world, which places the poles at the top and bottom edges of the image, stretched across the entire image width; areas near the poles are stretched horizontally.<br />
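The linear pan/tilt relationship can be sketched numerically. Assuming a 4000x2000 equirectangular frame, pan in [-180, 180] mapped left to right, and tilt in [-90, 90] mapped so that row 0 is the top pole (exact origin conventions may vary between tools):

```shell
# Map a viewing direction (pan, tilt, in degrees) to equirectangular pixel coordinates
awk -v W=4000 -v H=2000 -v pan=90 -v tilt=45 'BEGIN {
    x = (pan / 360 + 0.5) * W    # pan -180..180 maps to column 0..W
    y = (0.5 - tilt / 180) * H   # tilt 90..-90 maps to row 0..H
    printf "%d %d\n", x, y       # prints: 3000 500
}'
```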
<br />
In the case of the cameras, each camera's equirectangular projection generates a portion of the 360 image, depending on the lens specifications; the equirectangular projection section below goes into more detail. <br />
<br />
In general, the full 360 stitching process follows these steps: capture from an array of fisheye cameras covering 360 degrees of longitude, projection of each fisheye image to an equirectangular image, and stitching of the equirectangular projections into a blended 360 image.<br />
<br />
== Fisheye Lenses ==<br />
<br />
So far, only cameras with fisheye lenses are supported for 360 stitching. A fisheye lens is a wide-angle lens whose distortion allows a small number of cameras to cover the full 360-degree world around them. Fisheye cameras produce images with a circular scene and black borders, as shown below. <br />
<br />
[[File:fisheye.jpg|600px|thumb|center]]<br />
<br />
However, depending on the sensor you are using and the capture configuration, you may receive cropped fisheye images as shown below, containing only a partial section of the fisheye circle. The 360 stitcher can handle these images as well; you just need to set the appropriate parameters (described later) and remember that your final 360 frame will be missing information around the poles. <br />
<br />
[[File:fisheye-rr.jpg|600px|thumb|center]]<br />
<br />
For the stitcher, fisheye images are treated as ideal circular fisheyes, where longitude is radially symmetrical and latitude is proportional to the radius from the center of the fisheye circle. Most lenses have a distortion that deviates from this ideal description, but for now that distortion is ignored.<br />
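Under this ideal model, the distance of a point from the fisheye center grows linearly with its angle from the optical axis. A small sketch, assuming a 180-degree aperture and a 500-pixel fisheye radius:

```shell
# Ideal circular fisheye: r(theta) = radius * theta / (aperture / 2)
awk -v radius=500 -v aperture=180 -v theta=45 'BEGIN {
    r = radius * theta / (aperture / 2)
    printf "%d\n", r   # a point 45 degrees off-axis sits 250 px from the center
}'
```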
<br />
[[File:fisheye-overlay.png|600px|thumb|center]]<br />
<br />
The most important features of the fisheye image for the stitcher are: aperture, radius and center. <br />
<br />
*'''Aperture:''' indicates the angle of view of the fisheye lens. Fisheye lenses can capture 180 degrees and up.<br />
*'''Radius:''' is the radius of the fisheye circle that matches the aperture.<br />
*'''Center:''' is the center of the fisheye circle.<br />
<br />
[[File:fisheye-parameters.png|600px|thumb|center]]<br />
<br />
<br />
== Equirectangular Projection ==<br />
<br />
Even though the fisheye only covers a limited angle of view of the real world, the corresponding equirectangular projection image spans the full 360 degrees, with black areas where no data is available.<br />
<br />
The relationship between the fisheye image and the equirectangular projection can be determined geometrically. We will not go into the details, but you can visualize it in the figures below. As you can see, the region projected in the equirectangular frame changes according to the aperture. For the equirectangular projection of a fisheye of 180 degrees or less, half of the image is completely black and provides no relevant information, so that memory can be discarded to save resources.<br />
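The savings from discarding the black half are easy to quantify. For an RGBA frame (4 bytes per pixel) of height H, comparing the full 2H x H equirectangular frame with the H x H cropped version:

```shell
# RGBA bytes for a full vs. cropped equirectangular frame of height H
H=2160
FULL=$((2 * H * H * 4))    # full 2:1 frame
HALF=$((H * H * 4))        # cropped frame without the black half
echo "full=${FULL} cropped=${HALF} saved=$((FULL - HALF))"
```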
<br />
[[File:Equirectangular-projections.png|400px|thumb|center]]<br />
<br />
Keep in mind that a fisheye-to-equirectangular projector converts the fisheye image, where real-world verticals are curved, into an image where they become actual vertical lines and the horizon becomes a horizontal line. If the resulting equirectangular projection does not look like this, you should adjust the projector configuration until it does.<br />
<br />
[[File:Equirectangular-195-overlay.png|800px|thumb|center]] <br />
<br />
In a real-world scenario, the fisheye lenses will not sit on the same optical axis and will not share the same characteristics. To adjust the equirectangular projection to this imperfect setup, you need to describe the fisheye lens characteristics to the projector: aperture, radius, and center. You can also configure the fisheye rotations to adjust the optical axis. The camera is assumed to look along the Y axis, so the Y rotation rolls the fisheye lens, as if you grabbed the front of the lens and rotated your hand. The X axis points out the side of the camera, so rotating around it corrects the fisheye tilt angle. Finally, the Z axis points up from the camera and pans the fisheye camera around the horizon.<br />
<br />
<noinclude><br />
{{RidgeRun Image Projector/Foot||RidgeRun Image Projector Overview}}<br />
</noinclude></div>
Xavier/JetPack 5.0.2/Compiling Code - 2023-04-05 - Mmontero
<hr />
<div><noinclude><br />
{{Xavier/Head|previous=JetPack 5.0.2/Getting_Started/Components|next=JetPack 5.0.2/Flashing Board|metakeywords=jetpack,compiling,compile,building}}<br />
<br />
{{DISPLAYTITLE:NVIDIA Jetson Xavier - Compile and Build JetPack 5.0.2 |noerror}}<br />
<br />
Learning to build the BSP and OS components enables developers to customize the software for their specific project needs.<br />
<br><br />
<br />
<br />
This section will guide you through the process of building the BSP and OS components for the NVIDIA Jetson Xavier. You will need JetPack 5.0.2 for Xavier installed on your host computer; we assume you installed it by following our Getting Started and Installing JetPack sections.<br />
<br><br />
<br />
<br />
<br />
==Assumptions==<br />
# The build of the BSP/OS components source code will be performed by cross-compiling on a host computer running a Linux OS.<br />
# The host computer used for the cross-compilation has Jetpack 5.0.2 (L4T 35.1) for Xavier installed. The Jetpack path is: <syntaxhighlight lang=bash> /home/$USER/nvidia/nvidia_sdk/JetPack_5.0.2_Linux_JETSON_XAVIER_NX_TARGETS/Linux_for_Tegra </syntaxhighlight> This is the default path where the SDK Manager installs the Jetpack 5.0.2 tools for the Xavier. If you chose to install Jetpack on a different path, make sure to adjust the instructions accordingly.<br />
# Make sure the following dependencies are installed on your system:<br />
#* wget<br />
#* lbzip2<br />
#* build-essential<br />
#* bc<br />
#* zip<br />
#* libgmp-dev<br />
#* libmpfr-dev<br />
#* libmpc-dev<br />
#* vim-common # For xxd<br />
<br />
In Debian based systems you can run the following:<br />
<br />
<syntaxhighlight lang=bash><br />
sudo apt install wget lbzip2 build-essential bc zip libgmp-dev libmpfr-dev libmpc-dev vim-common<br />
</syntaxhighlight><br />
<br />
== Define the Environment Variables ==<br />
<br />
Open a terminal and run the following commands to export the environment variables that will be used in the next steps:<br />
<br />
<syntaxhighlight lang=bash><br />
export TOOLCHAIN_SRC=bootlin-toolchain-gcc-93 <br />
export TOOLCHAIN_DIR=gcc-9.3-glibc-2.31<br />
export KERNEL_SRC=l4t-sources-35-1<br />
export KERNEL_DIR=kernel-5.10<br />
export CROSS_COMPILE=$HOME/l4t-gcc/bin/aarch64-buildroot-linux-gnu-<br />
export JETPACK=$HOME/nvidia/nvidia_sdk/JetPack_5.0.2_Linux_JETSON_XAVIER_NX_TARGETS/Linux_for_Tegra<br />
export KERNEL_OUT=$JETPACK/images<br />
export KERNEL_MODULES_OUT=$JETPACK/images/modules<br />
</syntaxhighlight><br />
<br />
== Get the Toolchain ==<br />
<br />
If you haven't already, download the toolchain. The toolchain is the set of tools required to cross-compile the Linux kernel. You can get the toolchain running the following snippet in the terminal:<br />
<br />
<syntaxhighlight lang=bash><br />
cd $HOME<br />
mkdir -p $HOME/l4t-gcc<br />
cd $HOME/l4t-gcc<br />
<br />
# Reuse existing download, if any<br />
if ! test -e ${TOOLCHAIN_SRC}.tar.gz; then <br />
wget -O ${TOOLCHAIN_SRC}.tar.gz https://developer.nvidia.com/embedded/jetson-linux/bootlin-toolchain-gcc-93<br />
tar -xf ${TOOLCHAIN_SRC}.tar.gz<br />
fi<br />
</syntaxhighlight><br />
<br />
Please note that as a new kernel version becomes available, it might be necessary to use updated toolchain versions as well.<br />
<br><br />
<br />
<br />
== Download BSP sources ==<br />
<br />
The BSP sources are provided by NVIDIA and include the kernel, modules, and device tree source files. At RidgeRun we often need to modify and rebuild these sources for our customers' projects. You can use one of the two following methods to obtain the BSP sources:<br />
<br />
=== Method #1: Download via the source_sync.sh script ===<br />
<br />
NVIDIA provides a script to clone the BSP sources; this script is included in JetPack. To download the kernel sources, run the following commands in a terminal:<br />
<br />
<syntaxhighlight lang=bash><br />
cd $JETPACK<br />
./source_sync.sh -k jetson_35.1<br />
</syntaxhighlight><br />
<br />
=== Method #2: Download via the NVIDIA web page ===<br />
The BSP sources can be downloaded from the NVIDIA git server. Consult section 4.1 of the [https://docs.nvidia.com/jetson/archives/r35.1/ReleaseNotes/Jetson_Linux_Release_Notes_r35.1.pdf release notes] for information about the specific URLs containing the source code.<br />
<br />
== Build the Kernel, Modules, and DTB ==<br />
This subsection will guide you through the steps of building the BSP sources to generate the kernel Image, the external modules, and the device tree blob.<br />
<br />
1. Create the directories for the build outputs:<br />
<syntaxhighlight lang=bash><br />
mkdir -p $KERNEL_MODULES_OUT<br />
</syntaxhighlight><br />
<br />
2. Clean the environment:<br />
<syntaxhighlight lang=bash><br />
cd $JETPACK/sources/kernel/$KERNEL_DIR<br />
make mrproper<br />
</syntaxhighlight><br />
<br />
3. Setup the default configuration:<br />
<syntaxhighlight lang=bash><br />
make ARCH=arm64 O=$KERNEL_OUT tegra_defconfig<br />
</syntaxhighlight><br />
<br />
4. Optionally, you can customize the kernel configuration through menuconfig, which you can open with the following command:<br />
<syntaxhighlight lang=bash><br />
make ARCH=arm64 O=$KERNEL_OUT menuconfig<br />
</syntaxhighlight><br />
<br />
<br />
5. Build the BSP:<br />
<syntaxhighlight lang=bash><br />
make ARCH=arm64 O=$KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE -j4 LOCALVERSION="-tegra"<br />
</syntaxhighlight><br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=Note: the number next to the -j flag in the previous command tells make how many recipes to execute simultaneously. You can increase this value so the kernel build finishes faster, at the cost of using more system resources.<br />
|style=width:unset;<br />
}}<br />
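A common variant of the -j flag scales the build to the host's core count using `nproc` (a standard coreutils tool), e.g. `make ... -j"$(nproc)"`:

```shell
# nproc reports the number of available processing units;
# pass its value to make's -j flag to use every core
JOBS=$(nproc)
echo "parallel build jobs: ${JOBS}"
```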
<br />
<br />
6. Install the modules to the output directory created in step 1:<br />
<syntaxhighlight lang=bash><br />
make modules_install ARCH=arm64 O=$KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE INSTALL_MOD_PATH=$KERNEL_MODULES_OUT LOCALVERSION="-tegra"<br />
</syntaxhighlight><br />
<br />
<br />
7. Backup the binaries that come installed by default in Jetpack:<br />
<syntaxhighlight lang=bash><br />
BKUP_DATE=`date "+%Y_%m_%d_%H_%M_%S"`<br />
mv $JETPACK/kernel/Image{,.$BKUP_DATE} <br />
mv $JETPACK/kernel/kernel_supplements.tbz2{,.$BKUP_DATE}<br />
mv $JETPACK/kernel/dtb{,.$BKUP_DATE}<br />
</syntaxhighlight><br />
<br />
<br />
8. Copy the binaries built to the default locations expected by the flashing tool:<br />
<syntaxhighlight lang=bash><br />
cd $KERNEL_OUT<br />
cp ./arch/arm64/boot/Image $JETPACK/kernel/<br />
cp -r ./arch/arm64/boot/dts $JETPACK/kernel/dtb<br />
</syntaxhighlight><br />
<br />
<br />
9. Update the kernel modules in the kernel supplements tarball:<br />
<syntaxhighlight lang=bash><br />
cd $KERNEL_MODULES_OUT<br />
tar --owner root --group root -cjf $JETPACK/kernel/kernel_supplements.tbz2 lib/modules<br />
</syntaxhighlight><br />
<br />
<br />
10. Install tegra binaries:<br />
<syntaxhighlight lang=bash><br />
cd $JETPACK<br />
sudo ./apply_binaries.sh<br />
</syntaxhighlight><br />
<br />
At this point you have the kernel Image, external modules and dtb built and ready to flash.<br />
===Known Issues===<br />
Failure when running<br />
<pre><br />
make mrproper <br />
</pre><br />
On Ubuntu 20 you might get an error telling you to run:<br />
<pre><br />
sudo apt-get install graphviz python3-venv librsvg2-bin<br />
</pre><br />
If so, do it. The same goes for:<br />
<pre><br />
pip install -r ./Documentation/sphinx/requirements.txt<br />
</pre><br />
If the latter command prints a message similar to this one:<br />
<pre><br />
WARNING: The script docutils is installed in '/home/fernando/.local/bin' which is not on PATH.<br />
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.<br />
WARNING: The script pybabel is installed in '/home/fernando/.local/bin' which is not on PATH.<br />
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.<br />
WARNING: The scripts sphinx-apidoc, sphinx-autogen, sphinx-build and sphinx-quickstart are installed in '/home/fernando/.local/bin' which is not on PATH.<br />
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.<br />
</pre><br />
Then add that path to your PATH; in this particular case you can do:<br />
<pre><br />
PATH=$PATH:/home/fernando/.local/bin<br />
</pre><br />
Adapt the command according to your case.<br />
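To make the change survive new terminals, you can append the export to your shell profile. A sketch, assuming bash and the ~/.local/bin path from the warning above; adapt the path to the one in your message:

```shell
# Append ~/.local/bin to PATH for future shells; the grep keeps it idempotent
LINE='export PATH="$PATH:$HOME/.local/bin"'
grep -qxF "$LINE" ~/.bashrc 2>/dev/null || echo "$LINE" >> ~/.bashrc
```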
<br />
<!--<br />
= Re-build the DTB only =<br />
<br />
Sometimes during development, only the device tree is modified. In order to rebuild only the device tree, you can use the following commands:<br />
<br />
<syntaxhighlight lang=bash><br />
cd $JETPACK/sources/kernel/$KERNEL_DIR<br />
make O=$KERNEL_OUT -j4 dtbs<br />
</syntaxhighlight><br />
--><br />
<br />
<noinclude><br />
{{Xavier/Foot|JetPack 5.0.2/Getting_Started/Components|JetPack 5.0.2/Flashing Board}}<br />
</noinclude></div>
GPU Accelerated Motion Detector/Examples/GStreamer Pipelines - 2023-04-04 - Mmontero
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE: GStreamer Motion Detection Pipelines |noerror}}<noinclude><br />
<br />
This section provides examples of pipelines that demonstrate how to use the RidgeRun motion detection solution. The pipelines primarily use the rrmotiondetectionbin element, which handles the entire motion detection process, and the rrmotionoverlay element, which provides a visual representation of the detections. The examples also point out some pipeline restrictions imposed by the NVIDIA conversion element and by gst-cuda itself. <br />
<br />
== Video Test ==<br />
<br />
The following examples showcase basic pipelines that use videotestsrc to display the ball pattern, which has a clear movement flow.<br />
<br />
The first pipeline is a simple example that detects motion in the ball pattern and draws a bounding box around the area of detection:<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! rrmotiondetectionbin ! rrmotionoverlay thickness=2 ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420" ! nv3dsink<br />
</syntaxhighlight><br />
<br />
Please note that we have not specified any input format restrictions, which means the pattern will be processed in grayscale. To display the pattern, we convert it to the I420 format, which we know is compatible with nv3dsink; this element has some format restrictions, and the negotiation process may not work as expected for other formats.<br />
<br />
<br />
If you would like to receive the video in RGBA format, there are some restrictions to be aware of. You can use a pipeline similar to the one below, which uses system-memory RGBA. However, you must set the 'grayscale' property to false so the internal motion processing is performed in RGBA; if you leave it as true, you may encounter a negotiation error. This is because the internal conversion element, nvvidconv, cannot convert from system memory to system memory, and the base classes of the motion elements provided by gst-cuda do not support NVMM memory in gray format.<br />
<br />
<br />
It is also worth noting that we set the rrmotionoverlay color property to define the bounding box color, which you can choose based on your preference. <br />
<br />
<syntaxhighlight lang=bash><br />
# Use red line color for bounding boxes<br />
export COLOR=0xffff0000<br />
<br />
# Use line below to change color to green<br />
# export COLOR=0xff00ff00<br />
<br />
# Use line below to change color to blue<br />
# export COLOR=0xff0000ff<br />
<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! video/x-raw,format=RGBA ! rrmotiondetectionbin grayscale=false ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
<br />
However, if you would like to use grayscale for motion processing while still receiving the video in a color format, it is possible to do so; unsurprisingly, gray video processing uses fewer resources. To achieve this, you need to include two additional conversions in the pipeline, as shown below:<br />
<br />
<syntaxhighlight lang=bash><br />
export COLOR=0xff0000ff<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
<br />
We defined a background color for the ball pattern so you can verify that the final video is in color. As you can see, we added an nvvidconv element to provide NVMM memory in RGBA format, which enables the internal bin conversion to system memory in GRAY format. The second nvvidconv element is required for rrmotionoverlay: the bin provides NVMM memory, but our overlay element only handles system memory. If you were to create an element that processes the motion bounding boxes in NVMM memory, you would not need this conversion.<br />
<br />
[[File:MotionDetectionPatternBall.jpg|thumbnail|1200px|center|Motion detection in pattern ball test]]<br />
<br />
<br />
Please take a look at the pipeline below, in which the upper half of the image is defined as the region of interest using the rrmotiondetection element's 'roi' property. You will notice that the bounding box for the ball is only drawn in the selected region, where the ball is located:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true motion_detector::roi="<<(float)0,(float)0,(float)1,(float)0.5>>" ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
== Camera Capture ==<br />
<br />
For a more realistic example, you can feed the motion bin with video captured from a camera using the following pipeline:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true motion_detector::motion=mog2 ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v<br />
</syntaxhighlight><br />
<br />
Remember that you can choose to run blob detection on the CPU or the GPU with the bin's cuda-blob-detection property. By default the bin uses the CPU element, but you can set cuda-blob-detection to true to use the GPU element. However, keep in mind that the CUDA version may reduce your frame rate, especially at larger resolutions. Therefore, we recommend the CPU version, which may consume slightly more CPU but does a better job of maintaining your real-time frame rate.<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true cuda-blob-detection=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v<br />
</syntaxhighlight><br />
<br />
To conserve processing resources, you can downscale the video for motion detection. The bounding box values are normalized, so you can apply them to the original-size video without any issues. See the pipeline below for an example:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! queue ! rrmotiondetectionbin grayscale=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false <br />
</syntaxhighlight><br />
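Since the motion metadata coordinates are normalized, mapping a detection back to the full-size frame is a simple multiplication. A sketch, assuming a hypothetical normalized box (x, y, w, h) = (0.25, 0.25, 0.5, 0.5) and a 1920x1080 display resolution:

```shell
# Scale a normalized bounding box back to pixel coordinates on the full-size frame
awk -v W=1920 -v H=1080 -v x=0.25 -v y=0.25 -v w=0.5 -v h=0.5 'BEGIN {
    printf "x=%d y=%d w=%d h=%d\n", x * W, y * H, w * W, h * H   # x=480 y=270 w=960 h=540
}'
```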
<br />
Rather than displaying the results, you can use appsink to obtain each buffer along with the corresponding motion metadata in your application. Alternatively, you can create a custom element that retrieves the metadata and processes it based on your specific requirements. Here is an example that draws the motion bounding boxes and records to a file:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 -e nvarguscamerasrc ! "video/x-raw(memory:NVMM),width=3840,height=2160" ! queue ! rrmotiondetectionbin name=bin ! queue ! nvvidconv ! video/x-raw,format=RGBA ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! queue ! nvv4l2h264enc ! h264parse ! queue ! qtmux ! filesink location=test.mp4<br />
</syntaxhighlight><br />
<br />
== Recorded File ==<br />
<br />
To analyze the moving objects in a recorded file, you can use the following pipelines: the first for color format, the second for grayscale:<br />
<br />
<syntaxhighlight lang=bash><br />
export FILE=<path to file><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin name=bin ! queue ! nvvidconv ! video/x-raw,format=RGBA ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
<br />
[[File:MotionDetectionStreetColor.jpg|thumbnail|1200px|center|Recorded file motion detection in color]]<br />
<br />
<br />
<syntaxhighlight lang=bash><br />
export FILE=<path to file><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=GRAY8 ! queue ! rrmotiondetectionbin name=bin motion_detector::algorithm=mog2 noise_reduction::size=3 ! queue ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
<br />
[[File:MotionDetectionStreetGray.jpg|thumbnail|1200px|center|Recorded file motion detection in gray format ]]<br />
<br />
<br />
In the grayscale pipeline, you may notice that we've made some changes to the bin's internal element properties. Specifically, we've changed the motion detection algorithm to MOG2 and the noise reduction kernel size to 3. It's important to remember that you can always access and modify the internal element properties to fine-tune and optimize the settings for your specific use case.<br />
<br />
Using a recorded file can make it easier to see the roi (region of interest) property in action. This property enables the motion detection to focus solely on the area of interest. For instance, you can set the motion detection to only detect motion in the right half of the image with a pipeline like the following:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=GRAY8 ! queue ! rrmotiondetectionbin name=bin motion_detector::roi="<<(float)0.5,(float)0,(float)0.5,(float)1>>" ! queue ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
<br />
<br />
[[File:MotionDetectionGrayRoi.jpg|thumbnail|1200px|center|Motion detection in right half ROI]]<br />
<br />
== Getting the motion signal ==<br />
<br />
As an alternative to getting the motion meta directly from the processed buffers, you can connect to the on-new-motion signal from your application to retrieve the bounding box information. Here is a simple example using gstd to connect to the on-new-motion signal. <br />
<br />
You need to have gstd installed; check the instructions [https://developer.ridgerun.com/wiki/index.php/GStreamer_Daemon_-_Building_GStreamer_Daemon here] if you don’t have it already. Then follow the next steps to create a pipeline and connect to the motion signal:<br />
<br />
* Run gstd as a daemon:<br />
<syntaxhighlight lang=bash><br />
gstd -e <br />
</syntaxhighlight><br />
<br />
* Then get into the gstd-client interactive console:<br />
<syntaxhighlight lang=bash><br />
$ gstd-client <br />
GStreamer Daemon Copyright (C) 2015-2022 Ridgerun, LLC (http://www.ridgerun.com)<br />
This program comes with ABSOLUTELY NO WARRANTY; for details type `warranty'.<br />
This is free software, and you are welcome to redistribute it<br />
under certain conditions; read the license for more details.<br />
gstd> <br />
</syntaxhighlight><br />
<br />
* Create the motion detection pipeline:<br />
<syntaxhighlight lang=bash><br />
gstd> pipeline_create test filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin name=bin cuda-blob-detection=false ! perf ! fakesink<br />
</syntaxhighlight><br />
* Start the pipeline<br />
<syntaxhighlight lang=bash><br />
gstd> pipeline_play test<br />
</syntaxhighlight><br />
* Connect to the signal. The command will wait until motion is detected and then print the motion JSON description.<br />
<br />
<syntaxhighlight lang=bash><br />
gstd> signal_connect test bin on-new-motion<br />
{<br />
  "code" : 0,<br />
  "description" : "Success",<br />
  "response" : {<br />
    "name" : "on-new-motion",<br />
    "arguments" : [<br />
      {<br />
        "type" : "GstRrMotionDetectionBin",<br />
        "value" : "(GstRrMotionDetectionBin) bin"<br />
      },<br />
      {<br />
        "type" : "gchararray",<br />
        "value" : "{\"ROIs\":[{\"motion\":[{\"x1\":0.13177083432674408,\"x2\":0.17578125,\"y1\":0.7282407283782959,\"y2\":0.80509257316589355},{\"x1\":0.62526041269302368,\"x2\":0.6484375,\"y1\":0.62870371341705322,\"y2\":0.75648152828216553}],\"name\":\"roi\",\"x1\":0,\"x2\":1,\"y1\":0,\"y2\":1}]}"<br />
      }<br />
    ]<br />
  }<br />
}<br />
</syntaxhighlight><br />
<br />
<br />
<br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/GStreamer_Plugins/Motion_Metas&diff=48653GPU Accelerated Motion Detector/GStreamer Plugins/Motion Metas2023-04-04T15:13:37Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE: GStreamer Motion Metas |noerror}}<noinclude><br />
<br />
To communicate motion detection regions of interest (ROIs) and detected motion, RidgeRun’s MotionDetection plugin defines two GStreamer metadata structures (GstMeta): GstRrMotionRoiMeta and GstRrMotionMeta. These motion metas are attached to the buffers to pass the information to the other motion elements or to inform the final application about the detections.<br />
<br />
== GstRrMotionRoiMeta ==<br />
<br />
GstRrMotionRoiMeta provides extra buffer metadata describing an array of ROIs. Each ROI is a RrMotionRoi structure containing the following information: <br />
<br />
* '''name''': unique name identifier of the ROI<br />
* '''area''': normalized area corresponding to the given ROI<br />
* '''x1''': normalized horizontal position (x) of the ROI left-top corner<br />
* '''x2''': normalized horizontal position (x) of the ROI right-bottom corner<br />
* '''y1''': normalized vertical position (y) of the ROI left-top corner <br />
* '''y2''': normalized vertical position (y) of the ROI right-bottom corner.<br />
<br />
<br />
This meta is mostly used to communicate the selected ROIs to the downstream motion elements. The GstRrMotionDetection element adds the GstRrMotionRoiMeta, with the ROI configuration selected through its roi property, to each buffer. The rest of the motion elements read this meta and process only the specified area, avoiding image regions that are not required. <br />
<br />
On the other hand, note that this meta also drives the subsampling mechanism: by default, the motion elements only process buffers containing the GstRrMotionRoiMeta, ignoring buffers that don't have it, effectively subsampling the buffer processing.<br />
<br />
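As an illustration, the ROI fields listed above can be modeled in a few lines of Python. This is only a sketch for consuming applications: RrMotionRoi itself is a C structure, the field names below are taken from the list above, and the area computation assumes area is simply the normalized width times height of the ROI.<br />

```python
from dataclasses import dataclass

@dataclass
class MotionRoi:
    """Illustrative model of an RrMotionRoi entry (field names from the list above)."""
    name: str
    x1: float  # normalized left-top corner, horizontal
    y1: float  # normalized left-top corner, vertical
    x2: float  # normalized right-bottom corner, horizontal
    y2: float  # normalized right-bottom corner, vertical

    @property
    def area(self) -> float:
        # Assumed to be the normalized width * height of the ROI
        return (self.x2 - self.x1) * (self.y2 - self.y1)

# A full-frame ROI covers the whole normalized space
full = MotionRoi(name="roi", x1=0.0, y1=0.0, x2=1.0, y2=1.0)
right_half = MotionRoi(name="right", x1=0.5, y1=0.0, x2=1.0, y2=1.0)
```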
<br />
== GstRrMotionMeta ==<br />
<br />
GstRrMotionMeta provides buffer metadata describing the motion regions within each selected ROI. The GstRrMotionMeta consists of a JSON string describing each ROI and the bounding boxes of the detected motion regions per ROI. The JSON contains an array of ROIs with the following structure:<br />
<br />
* '''name''': unique name identifier of the ROI<br />
* '''x1''': normalized horizontal position (x) of the ROI left-top corner<br />
* '''x2''': normalized horizontal position (x) of the ROI right-bottom corner<br />
* '''y1''': normalized vertical position (y) of the ROI left-top corner <br />
* '''y2''': normalized vertical position (y) of the ROI right-bottom corner.<br />
* '''motion''': an array of motion bounding boxes describing the regions where motion was detected.<br />
<br />
Furthermore, the motion structure for the motion array is defined as follows:<br />
<br />
* '''x1''': normalized horizontal position (x) of the motion detected bounding box left-top corner<br />
* '''x2''': normalized horizontal position (x) of the motion detected bounding box right-bottom corner<br />
* '''y1''': normalized vertical position (y) of the motion detected bounding box left-top corner<br />
* '''y2''': normalized vertical position (y) of the motion detected bounding box right-bottom corner<br />
<br />
The following is an example of what the GstRrMotionMeta JSON looks like:<br />
<br />
<syntaxhighlight lang="JSON"><br />
{<br />
  "ROIs":[<br />
    {<br />
      "motion":[<br />
        {<br />
          "x1":0.13177083432674408,<br />
          "x2":0.17578125,<br />
          "y1":0.7282407283782959,<br />
          "y2":0.80509257316589355<br />
        },<br />
        {<br />
          "x1":0.62526041269302368,<br />
          "x2":0.6484375,<br />
          "y1":0.62870371341705322,<br />
          "y2":0.75648152828216553<br />
        }<br />
      ],<br />
      "name":"roi",<br />
      "x1":0,<br />
      "x2":1,<br />
      "y1":0,<br />
      "y2":1<br />
    }<br />
  ]<br />
}<br />
</syntaxhighlight><br />
<br />
GstRrMotionMeta is used by the blob detectors to report the motion regions. The blob detectors generate the JSON and attach this meta to the buffers so downstream elements can read the detections. Currently, only rrmotionoverlay uses the GstRrMotionMeta, reading it from the buffer and drawing the motions indicated in the JSON. However, any element or application can get this meta from the buffer to know the areas with motion in the corresponding image. <br />
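Since the meta carries a plain JSON string, a consuming application can parse it with any standard JSON library. The snippet below is an illustrative Python sketch (independent of GStreamer, with hypothetical helper names) that extracts the motion bounding boxes per ROI from a string with the structure shown above:<br />

```python
import json

def parse_motion_meta(meta_json: str):
    """Return {roi_name: [(x1, y1, x2, y2), ...]} from a GstRrMotionMeta JSON string."""
    data = json.loads(meta_json)
    boxes = {}
    for roi in data["ROIs"]:
        boxes[roi["name"]] = [
            (m["x1"], m["y1"], m["x2"], m["y2"]) for m in roi["motion"]
        ]
    return boxes

# Example using the structure shown above (values shortened for readability)
sample = ('{"ROIs":[{"motion":[{"x1":0.13,"x2":0.17,"y1":0.72,"y2":0.80}],'
          '"name":"roi","x1":0,"x2":1,"y1":0,"y2":1}]}')
detections = parse_motion_meta(sample)
```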
<br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/GStreamer_Plugins/Motion_Metas&diff=48640GPU Accelerated Motion Detector/GStreamer Plugins/Motion Metas2023-04-03T12:41:20Z<p>Mmontero: Created page with "<noinclude> {{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}} </noinclude> {{DISPLAYTITLE: GStreamer Motion Me..."</p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE: GStreamer Motion Metas |noerror}}<noinclude><br />
<br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/Examples/GStreamer_Pipelines&diff=48639GPU Accelerated Motion Detector/Examples/GStreamer Pipelines2023-04-03T12:40:39Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE: GStreamer Motion Detection Pipelines |noerror}}<noinclude><br />
<br />
This section provides examples of pipelines that demonstrate how to use the RidgeRun motion detection solution. The pipelines primarily utilize the rrmotiondetectionbin element, which handles the entire motion detection process, and the rrmotionoverlay element, which provides a visual representation of the detections. The examples also point out some pipeline restrictions imposed by the NVIDIA conversion element and by gst-cuda itself. <br />
<br />
== Video Test ==<br />
<br />
The following examples showcase basic pipelines that use videotestsrc to display the ball pattern, which has a clear movement flow.<br />
<br />
The first pipeline is a simple example that detects motion in the ball pattern and draws a bounding box around the area of detection:<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! rrmotiondetectionbin ! rrmotionoverlay thickness=2 ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nv3dsink<br />
</syntaxhighlight><br />
<br />
Please note that we have not specified any input format restrictions, which means that the pattern will be processed in grayscale. To display the pattern, we have converted it to the I420 format, which we know is compatible with nv3dsink; this element has some format restrictions, and the negotiation process may not work as expected for other formats.<br />
<br />
<br />
If you would like to receive the video in the RGBA format, there are some restrictions that you should be aware of. You can use a pipeline similar to the one below, which utilizes the system memory RGBA format. However, you must set the 'grayscale' property to false in order to perform internal motion processing in RGBA format. If you leave this property as true, you may encounter a negotiation error. This is because the internal conversion element, nvvidconv, cannot convert from system memory to system memory. Additionally, the base classes of the motion elements provided by gst-cuda do not support NVMM memory with gray format.<br />
<br />
<br />
It is worth noting that we have set the rrmotionoverlay color property to define the bounding box color, which can be chosen based on your preference. <br />
<br />
<syntaxhighlight lang=bash><br />
# Use red line color for bounding boxes<br />
export COLOR=0xffff0000<br />
<br />
# Use line below to change color to green<br />
# export COLOR=0xff00ff00<br />
<br />
# Use line below to change color to blue<br />
# export COLOR=0xff0000ff<br />
<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! video/x-raw,format=RGBA ! rrmotiondetectionbin grayscale=false ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
<br />
However, if you would like to use grayscale for motion processing while still receiving the video in a color format, it is possible to do so; it is no secret that grayscale video processing uses fewer resources. To achieve this, you will need to include two additional conversions in the pipeline, as shown below:<br />
<br />
<syntaxhighlight lang=bash><br />
export COLOR=0xff0000ff<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
<br />
We have defined a background color for the pattern ball so you can verify that the final video is in color. As you can see, we have added an nvvidconv element to provide NVMM memory in RGBA format, which enables the bin's internal conversion to system memory in GRAY format. Additionally, the second nvvidconv element is required for the rrmotionoverlay: the bin provides NVMM memory, but our overlay element only handles system memory. If you were to create an element that processes the motion bounding boxes in NVMM memory, you would not need this conversion.<br />
<br />
[[File:MotionDetectionPatternBall.jpg|thumbnail|1200px|center|Motion detection in pattern ball test]]<br />
<br />
<br />
Please take a look at the pipeline below, in which the upper half of the image has been defined as the region of interest using the rrmotiondetection element's 'roi' property. You will notice that the bounding box for the ball is only drawn in the selected region.<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true motion_detector::roi="<<(float)0,(float)0,(float)1,(float)0.5>>" ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
== Camera Capture ==<br />
<br />
For a more realistic example, you can feed the motion bin with video captured from a camera using the following pipeline:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true motion_detector::motion=mog2 ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v<br />
</syntaxhighlight><br />
<br />
Remember that you can choose to run the blob detection on the CPU or the GPU with the bin's cuda-blob-detection property. By default, the bin uses the CPU element, but you can set cuda-blob-detection to true to use the GPU element. However, keep in mind that using the CUDA version may reduce your frame rate, especially at larger resolutions. Therefore, we recommend the CPU version, which may consume slightly more CPU but does a better job of maintaining your real-time frame rate.<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true cuda-blob-detection=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v<br />
</syntaxhighlight><br />
<br />
To conserve processing resources, it's possible to downscale the video for motion detection. The bounding box values are normalized, so you can use them on the original size video without any issues. Check out the pipeline below for an example:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! queue ! rrmotiondetectionbin grayscale=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false <br />
</syntaxhighlight><br />
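Since the reported coordinates are normalized to the [0, 1] range, mapping detections from a downscaled stream back onto the original-resolution video is a plain multiplication. A minimal sketch (plain Python; the helper name is hypothetical):<br />

```python
def to_pixels(box, width, height):
    """Convert a normalized (x1, y1, x2, y2) bounding box to pixel coordinates."""
    x1, y1, x2, y2 = box
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

# A box detected on a downscaled stream maps directly onto the 3840x2160 original
pixels = to_pixels((0.25, 0.5, 0.75, 1.0), 3840, 2160)
```

The same normalized boxes therefore remain valid regardless of the resolution the detector actually processed.<br />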
<br />
Rather than displaying the results, it's possible to use appsink to obtain the buffer along with the corresponding motion metadata in your application. Alternatively, you can create a custom element that retrieves the metadata and processes it based on your specific requirements. Here's an example of how to draw motion bounding boxes and record to a file:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 -e nvarguscamerasrc ! "video/x-raw(memory:NVMM),width=3840,height=2160" ! queue ! rrmotiondetectionbin name=bin ! queue ! nvvidconv ! video/x-raw,format=RGBA ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! queue ! nvv4l2h264enc ! h264parse ! queue ! qtmux ! filesink location=test.mp4<br />
</syntaxhighlight><br />
<br />
== Recorded File ==<br />
<br />
To analyze the motion objects in a recorded file, you can utilize the following pipelines. Use the first pipeline for color format, and the second for grayscale format:<br />
<br />
<syntaxhighlight lang=bash><br />
export FILE=<path to file><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin name=bin ! queue ! nvvidconv ! video/x-raw,format=RGBA ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
<br />
[[File:MotionDetectionStreetColor.jpg|thumbnail|1200px|center|Recorded file motion detection in color]]<br />
<br />
<br />
<syntaxhighlight lang=bash><br />
export FILE=<path to file><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=GRAY8 ! queue ! rrmotiondetectionbin name=bin motion_detector::algorithm=mog2 noise_reduction::size=3 ! queue ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
<br />
[[File:MotionDetectionStreetGray.jpg|thumbnail|1200px|center|Recorded file motion detection in gray format ]]<br />
<br />
<br />
In the grayscale pipeline, you may notice that we've made some changes to the bin's internal element properties. Specifically, we've changed the motion detection algorithm to MOG2 and the noise reduction kernel size to 3. It's important to remember that you can always access and modify the internal element properties to fine-tune and optimize the settings for your specific use case.<br />
<br />
Using a recorded file can make it easier to see the roi (region of interest) property in action. This property enables the motion detection to focus solely on the area of interest. For instance, you can set the motion detection to only detect motion in the right half of the image with a pipeline like the following:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=GRAY8 ! queue ! rrmotiondetectionbin name=bin motion_detector::roi="<<(float)0.5,(float)0,(float)0.5,(float)1>>" ! queue ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
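Based on the two roi examples on this page (upper half uses <<0,0,1,0.5>> and right half uses <<0.5,0,0.5,1>>), the value appears to encode a normalized (x, y, width, height) rectangle. The following hypothetical Python helper formats such a rectangle in the notation used above; the field interpretation is inferred, so verify it against the element's property documentation:<br />

```python
def _fmt(v: float) -> str:
    # Print whole numbers without a trailing ".0" to match the examples (e.g. "0", "0.5")
    return str(int(v)) if float(v).is_integer() else str(v)

def roi_property(x: float, y: float, w: float, h: float) -> str:
    """Format a normalized rectangle as an roi property value for gst-launch.

    The (x, y, width, height) interpretation is inferred from the examples
    on this page, not taken from the element's documentation.
    """
    return '"<<(float){},(float){},(float){},(float){}>>"'.format(
        _fmt(x), _fmt(y), _fmt(w), _fmt(h))

# Right half of the image, as in the pipeline above
right_half = roi_property(0.5, 0, 0.5, 1)
```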
<br />
<br />
[[File:MotionDetectionGrayRoi.jpg|thumbnail|1200px|center|Motion detection in right half ROI]]<br />
<br />
== Getting the motion signal ==<br />
<br />
As an alternative to getting the motion meta directly from the processed buffers, you can connect to the on-new-motion signal from your application to retrieve the bounding box information. Here is a simple example using gstd to connect to the on-new-motion signal. <br />
<br />
You need to have gstd installed; check the instructions [https://developer.ridgerun.com/wiki/index.php/GStreamer_Daemon_-_Building_GStreamer_Daemon here] if you don’t have it already. Then follow the next steps to create a pipeline and connect to the motion signal:<br />
<br />
* Run gstd as a daemon:<br />
<syntaxhighlight lang=bash><br />
gstd -e <br />
</syntaxhighlight><br />
<br />
* Then get into the gstd-client interactive console:<br />
<syntaxhighlight lang=bash><br />
$ gstd-client <br />
GStreamer Daemon Copyright (C) 2015-2022 Ridgerun, LLC (http://www.ridgerun.com)<br />
This program comes with ABSOLUTELY NO WARRANTY; for details type `warranty'.<br />
This is free software, and you are welcome to redistribute it<br />
under certain conditions; read the license for more details.<br />
gstd> <br />
</syntaxhighlight><br />
<br />
* Create the motion detection pipeline:<br />
<syntaxhighlight lang=bash><br />
gstd> pipeline_create test filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin name=bin cuda-blob-detection=false ! perf ! fakesink<br />
</syntaxhighlight><br />
* Start the pipeline<br />
<syntaxhighlight lang=bash><br />
gstd> pipeline_play test<br />
</syntaxhighlight><br />
* Connect to the signal. The command will wait until motion is detected and then print the motion JSON description.<br />
<br />
<syntaxhighlight lang=bash><br />
gstd> signal_connect test bin on-new-motion<br />
{<br />
  "code" : 0,<br />
  "description" : "Success",<br />
  "response" : {<br />
    "name" : "on-new-motion",<br />
    "arguments" : [<br />
      {<br />
        "type" : "GstRrMotionDetectionBin",<br />
        "value" : "(GstRrMotionDetectionBin) bin"<br />
      },<br />
      {<br />
        "type" : "gchararray",<br />
        "value" : "{\"ROIs\":[{\"motion\":[{\"x1\":0.13177083432674408,\"x2\":0.17578125,\"y1\":0.7282407283782959,\"y2\":0.80509257316589355},{\"x1\":0.62526041269302368,\"x2\":0.6484375,\"y1\":0.62870371341705322,\"y2\":0.75648152828216553}],\"name\":\"roi\",\"x1\":0,\"x2\":1,\"y1\":0,\"y2\":1}]}"<br />
      }<br />
    ]<br />
  }<br />
}<br />
</syntaxhighlight><br />
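The response shown above is plain JSON, and the motion description itself arrives as an escaped JSON string in the second signal argument. Below is a sketch of unpacking it from an application driving gstd (plain Python with a hypothetical helper name; the response layout follows the example above):<br />

```python
import json

def motion_from_signal(response_json: str):
    """Extract the motion dictionary from a gstd signal_connect response."""
    response = json.loads(response_json)
    # arguments[0] is the emitting element; arguments[1] carries the motion JSON string
    motion_str = response["response"]["arguments"][1]["value"]
    return json.loads(motion_str)

# Build a response shaped like the example above
inner = {"ROIs": [{"motion": [], "name": "roi", "x1": 0, "x2": 1, "y1": 0, "y2": 1}]}
sample = json.dumps({
    "code": 0,
    "description": "Success",
    "response": {
        "name": "on-new-motion",
        "arguments": [
            {"type": "GstRrMotionDetectionBin", "value": "(GstRrMotionDetectionBin) bin"},
            {"type": "gchararray", "value": json.dumps(inner)},
        ],
    },
})
motion = motion_from_signal(sample)
```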
<br />
<br />
<br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=Template:GPU_Accelerated_Motion_Detector/TOC&diff=48638Template:GPU Accelerated Motion Detector/TOC2023-04-03T12:38:40Z<p>Mmontero: </p>
<hr />
<div>{{Sidebar<br />
| title = [[GPU Accelerated Motion Detector|<span style="color:#00008B; font-size:150%;"><u>GPU Accelerated </br>Motion Detector</u></span>]]<br />
| image = <center>{{Template:RidgeRunlogo}}</center><br />
| headingstyle = border-top: 2px solid; border-top-color: gray; font-size:larger; background-color:#63a3ff;<br />
| contentstyle = <br />
| contentclass = hlist<br />
| heading1 = [[GPU Accelerated Motion Detector/Overview|Overview]]<br />
| content1 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content1 = <br />
*[[GPU Accelerated Motion Detector/Overview/Architecture| Architecture]]<br />
*[[GPU Accelerated Motion Detector/Overview/Supported Platforms| Supported Platforms]]<br />
*[[GPU Accelerated Motion Detector/Overview/Capabilities| Capabilities]]<br />
}}<br />
<br />
| heading2 = [[GPU Accelerated Motion Detector/GStreamer Plugins|GStreamer Plugin]]<br />
| content2 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content2 =<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection| Motion Detection]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Noise Reduction | Noise Reduction]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/CPU Blob Detector | CPU Blob Detector]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/GPU Blob Detector | GPU Blob Detector ]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Overlay | Motion Overlay]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection Bin | Motion Detection Bin]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Metas | Motion Metas]]<br />
}}<br />
<br />
| heading3 = [[GPU Accelerated Motion Detector/Getting Started|Getting Started]]<br />
| content3 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content3 =<br />
*[[GPU Accelerated Motion Detector/Getting Started/Dependencies| Dependencies]]<br />
*[[GPU Accelerated Motion Detector/Getting Started/Evaluating GPUMotionDetector | Evaluating GPUMotionDetector]]<br />
*[[GPU Accelerated Motion Detector/Getting Started/Building GPUMotionDetector|Building GPUMotionDetector]]<br />
}}<br />
<br />
| heading4 = [[GPU Accelerated Motion Detector/Examples|Examples]]<br />
| content4 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content4 = <br />
*[[GPU Accelerated Motion Detector/Examples/GStreamer Pipelines | GStreamer Pipelines]]<br />
}}<br />
<br />
| heading5 = [[GPU Accelerated Motion Detector/Performance|Performance]]<br />
| content5 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content5 = <br />
*[[GPU Accelerated Motion Detector/Performance/Xavier NX | Jetson Xavier NX]]<br />
}}<br />
| heading6 = [[GPU Accelerated Motion Detector/Troubleshooting | Troubleshooting]]<br />
<br />
| heading7 = [[GPU Accelerated Motion Detector/FAQ |FAQ]]<br />
<br />
| heading8 = [[GPU Accelerated Motion Detector/Contact_Us|<span style="color:#00008B;font-size:larger;">'''<u>Contact Us</u>'''</span>]]<br />
}}<br />
<br />
<noinclude><br />
[[Category:GPU Accelerated Motion Detector Templates]]<br />
[[Category:TOCs using Sidebar]]<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/GStreamer_Plugins/Motion_Detection_Bin&diff=48637GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection Bin2023-04-03T12:22:23Z<p>Mmontero: /* Access Child Properties */</p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=GStreamer Plugins/Motion Overlay|next=Getting Started|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE: GStreamer Motion Detection Bin Element|noerror}}<noinclude><br />
<br />
The GStreamer element is called "rrmotiondetectionbin". This element performs the motion detection, noise reduction and blob detection stages in one simple element/bin. It is very useful if you don't want to separate the different stages described in the [[GPU Accelerated Motion Detector/Overview| Overview]] section. <br />
<br />
<br />
=== Main Properties ===<br />
<br />
* '''cuda-blob-detection''': Use the blob detection algorithm (in charge of detecting the motion bounding boxes) implemented in CUDA (GPU Blob Detector); otherwise, the CPU blob detector is used. By default it is false.<br />
<br />
* '''grayscale''': When enabled, the motion detection is performed over a grayscale image (color dropped). By default it is true.<br />
<br />
<br />
=== Access Child Properties ===<br />
<br />
Through the bin you can access and control the configuration of its internal elements using the following notation:<br />
<br />
<pre><br />
<element name>::<property name><br />
</pre><br />
<br />
The element name corresponds to the name given to the element internally by the bin. For your convenience, here is the list of rrmotiondetectionbin's main internal element names:<br />
<br />
* '''motion_detector''' of type rrmotiondetection<br />
* '''motion_denoiser''' of type rrnoisereduction<br />
* '''blob_detector''' that can be of type rrblobdetector or rrcudablobdetector depending on the cuda-blob-detection property.<br />
<br />
For instance, if you want to change the motion detector algorithm, you can set the motion property as follows:<br />
<br />
<pre><br />
rrmotiondetectionbin motion_detector::motion=mog2<br />
</pre><br />
<br />
On the other hand, if you want to change the kernel size of the noise reduction element, you can do it as follows:<br />
<br />
<pre><br />
rrmotiondetectionbin motion_denoiser::size=5<br />
</pre><br />
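For scripting, this notation can be generated mechanically. Below is a trivial, hypothetical Python helper that renders an element description with child properties in the <element name>::<property name> form described above:<br />

```python
def with_child_props(element: str, props: dict) -> str:
    """Render an element plus child properties in <child>::<property>=<value> form."""
    parts = [element]
    for key, value in props.items():
        parts.append("{}={}".format(key, value))
    return " ".join(parts)

# Reproduces the first example above
desc = with_child_props("rrmotiondetectionbin",
                        {"motion_detector::motion": "mog2"})
```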
<br />
Please refer to the [[GPU_Accelerated_Motion_Detector/Examples/GStreamer_Pipelines | GStreamer Pipelines section]] for more examples of child properties in use.<br />
<br />
=== Capabilities ===<br />
<br />
<br />
This section overviews the Motion Detection Bin input and output capabilities.<br />
<br />
Below is the full output of the '''gst-inspect-1.0''' command:<br />
<br />
<pre><br />
<br />
Factory Details:<br />
Rank none (0)<br />
Long-name Motion Detection Bin<br />
Klass Filter/Video<br />
Description Detects motion blobs<br />
Author Melissa Montero <melissa.montero@ridgerun.com><br />
<br />
Plugin Details:<br />
Name rrmotiondetection<br />
Description Motion detection components based on OpenCV<br />
Filename /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstrrmotiondetection.so<br />
Version 0.1.0<br />
License Proprietary<br />
Source module gstrrmotiondetection<br />
Binary package gstrrmotiondetection<br />
Origin URL www.ridgerun.com<br />
<br />
GObject<br />
+----GInitiallyUnowned<br />
+----GstObject<br />
+----GstElement<br />
+----GstBin<br />
+----GstRrMotionDetectionBin<br />
<br />
Implemented Interfaces:<br />
GstChildProxy<br />
<br />
Pad Templates:<br />
SINK template: 'sink'<br />
Availability: Always<br />
Capabilities:<br />
video/x-raw<br />
format: { (string)I420, (string)YV12, (string)YUY2, (string)UYVY, (string)AYUV, (string)VUYA, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)RGB, (string)BGR, (string)Y41B, (string)Y42B, (string)YVYU, (string)Y444, (string)v210, (string)v216, (string)Y210, (string)Y410, (string)NV12, (string)NV21, (string)GRAY8, (string)GRAY16_BE, (string)GRAY16_LE, (string)v308, (string)RGB16, (string)BGR16, (string)RGB15, (string)BGR15, (string)UYVP, (string)A420, (string)RGB8P, (string)YUV9, (string)YVU9, (string)IYU1, (string)ARGB64, (string)AYUV64, (string)r210, (string)I420_10BE, (string)I420_10LE, (string)I422_10BE, (string)I422_10LE, (string)Y444_10BE, (string)Y444_10LE, (string)GBR, (string)GBR_10BE, (string)GBR_10LE, (string)NV16, (string)NV24, (string)NV12_64Z32, (string)A420_10BE, (string)A420_10LE, (string)A422_10BE, (string)A422_10LE, (string)A444_10BE, (string)A444_10LE, (string)NV61, (string)P010_10BE, (string)P010_10LE, (string)IYU2, (string)VYUY, (string)GBRA, (string)GBRA_10BE, (string)GBRA_10LE, (string)BGR10A2_LE, (string)GBR_12BE, (string)GBR_12LE, (string)GBRA_12BE, (string)GBRA_12LE, (string)I420_12BE, (string)I420_12LE, (string)I422_12BE, (string)I422_12LE, (string)Y444_12BE, (string)Y444_12LE, (string)GRAY10_LE32, (string)NV12_10LE32, (string)NV16_10LE32, (string)NV12_10LE40 }<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
video/x-raw(memory:NVMM)<br />
format: { (string)I420, (string)I420_10LE, (string)P010_10LE, (string)I420_12LE, (string)UYVY, (string)YUY2, (string)YVYU, (string)NV12, (string)NV16, (string)NV24, (string)GRAY8, (string)BGRx, (string)RGBA, (string)Y42B }<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
<br />
SRC template: 'src'<br />
Availability: Always<br />
Capabilities:<br />
video/x-raw<br />
format: { (string)I420, (string)YV12, (string)YUY2, (string)UYVY, (string)AYUV, (string)VUYA, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)RGB, (string)BGR, (string)Y41B, (string)Y42B, (string)YVYU, (string)Y444, (string)v210, (string)v216, (string)Y210, (string)Y410, (string)NV12, (string)NV21, (string)GRAY8, (string)GRAY16_BE, (string)GRAY16_LE, (string)v308, (string)RGB16, (string)BGR16, (string)RGB15, (string)BGR15, (string)UYVP, (string)A420, (string)RGB8P, (string)YUV9, (string)YVU9, (string)IYU1, (string)ARGB64, (string)AYUV64, (string)r210, (string)I420_10BE, (string)I420_10LE, (string)I422_10BE, (string)I422_10LE, (string)Y444_10BE, (string)Y444_10LE, (string)GBR, (string)GBR_10BE, (string)GBR_10LE, (string)NV16, (string)NV24, (string)NV12_64Z32, (string)A420_10BE, (string)A420_10LE, (string)A422_10BE, (string)A422_10LE, (string)A444_10BE, (string)A444_10LE, (string)NV61, (string)P010_10BE, (string)P010_10LE, (string)IYU2, (string)VYUY, (string)GBRA, (string)GBRA_10BE, (string)GBRA_10LE, (string)BGR10A2_LE, (string)GBR_12BE, (string)GBR_12LE, (string)GBRA_12BE, (string)GBRA_12LE, (string)I420_12BE, (string)I420_12LE, (string)I422_12BE, (string)I422_12LE, (string)Y444_12BE, (string)Y444_12LE, (string)GRAY10_LE32, (string)NV12_10LE32, (string)NV16_10LE32, (string)NV12_10LE40 }<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
video/x-raw(memory:NVMM)<br />
format: { (string)I420, (string)I420_10LE, (string)P010_10LE, (string)I420_12LE, (string)UYVY, (string)YUY2, (string)YVYU, (string)NV12, (string)NV16, (string)NV24, (string)GRAY8, (string)BGRx, (string)RGBA, (string)Y42B }<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
<br />
Element has no clocking capabilities.<br />
Element has no URI handling capabilities.<br />
<br />
Pads:<br />
SINK: 'sink'<br />
SRC: 'src'<br />
Pad Template: 'src'<br />
<br />
Element Properties:<br />
async-handling : The bin will handle Asynchronous state changes<br />
flags: readable, writable<br />
Boolean. Default: false<br />
cuda-blob-detection : Use blob detection algorithm (in charge of detecting the motion bounding boxes) implemented in cuda otherwise use cpu blob detection<br />
flags: readable, writable<br />
Boolean. Default: false<br />
grayscale : When enabled, the motion detection is performed over a grayscaleimage (color dropped)<br />
flags: readable, writable<br />
Boolean. Default: true<br />
message-forward : Forwards all children messages<br />
flags: readable, writable<br />
Boolean. Default: false<br />
name : The name of the object<br />
flags: readable, writable<br />
String. Default: "rrmotiondetectionbin0"<br />
parent : The parent of the object<br />
flags: readable, writable<br />
Object of type "GstObject"<br />
<br />
</pre></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/GStreamer_Plugins/Motion_Detection_Bin&diff=48623GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection Bin2023-03-31T20:47:41Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=GStreamer Plugins/Motion Overlay|next=Getting Started|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE: GStreamer Motion Detection Bin Element|noerror}}<noinclude><br />
<br />
The GStreamer element is called "rrmotiondetectionbin". This element performs the motion detection, noise reduction, and blob detection stages in a single element/bin. It is very useful if you don't want to handle separately the different stages described in the [[GPU Accelerated Motion Detector/Overview| Overview]] section. <br />
<br />
<br />
=== Main Properties ===<br />
<br />
* '''cuda-blob-detection''': When enabled, uses the blob detection algorithm (in charge of detecting the motion bounding boxes) implemented in CUDA (Blob Detector GPU); otherwise, the CPU blob detection is used. By default it is false.<br />
<br />
* '''grayscale''': When enabled, the motion detection is performed over a grayscale image (color dropped). By default it is true.<br />
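<br />
As a usage sketch, both properties can be set directly on the bin in a launch line. The pipeline below is illustrative only: it assumes a videotestsrc input instead of a real camera and that the plugin is installed:<br />
<br />
<pre><br />
gst-launch-1.0 videotestsrc ! rrmotiondetectionbin cuda-blob-detection=true grayscale=false ! fakesink<br />
</pre><br />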
<br />
<br />
=== Access Child Properties ===<br />
<br />
Through the bin, you can access and configure its internal elements using the following notation:<br />
<br />
<pre><br />
<element name>::<property name><br />
</pre><br />
<br />
The element name corresponds to the name given to the element internally by the bin. For your convenience, here is the list of the main internal element names of rrmotiondetectionbin:<br />
<br />
* '''motion_detector''' of type rrmotiondetection<br />
* '''motion_denoiser''' of type rrnoisereduction<br />
* '''blob_detector''' that can be of type rrblobdetector or rrcudablobdetector, depending on the cuda-blob-detection property.<br />
<br />
For instance, if you want to change the motion detection algorithm, you can set the motion property as follows:<br />
<br />
<pre><br />
rrmotiondetectionbin motion_detector::motion=mog2<br />
</pre><br />
<br />
On the other hand, if you want to change the kernel size of the noise reduction element, you can do it as follows:<br />
<br />
<pre><br />
rrmotiondetectionbin motion_denoiser::size=5<br />
</pre><br />
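<br />
Child properties can be combined with the bin's top-level properties in a single launch line. The following sketch is illustrative only (it assumes a videotestsrc input) and enables GPU blob detection while tuning the internal elements:<br />
<br />
<pre><br />
gst-launch-1.0 videotestsrc ! rrmotiondetectionbin cuda-blob-detection=true motion_detector::motion=mog2 motion_denoiser::size=5 ! fakesink<br />
</pre><br />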
<br />
<br />
=== Capabilities ===<br />
<br />
<br />
This section overviews the Motion Detection Bin input and output capabilities.<br />
<br />
Below is the full output of the '''gst-inspect-1.0''' command:<br />
<br />
<pre><br />
<br />
Factory Details:<br />
Rank none (0)<br />
Long-name Motion Detection Bin<br />
Klass Filter/Video<br />
Description Detects motion blobs<br />
Author Melissa Montero <melissa.montero@ridgerun.com><br />
<br />
Plugin Details:<br />
Name rrmotiondetection<br />
Description Motion detection components based on OpenCV<br />
Filename /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstrrmotiondetection.so<br />
Version 0.1.0<br />
License Proprietary<br />
Source module gstrrmotiondetection<br />
Binary package gstrrmotiondetection<br />
Origin URL www.ridgerun.com<br />
<br />
GObject<br />
+----GInitiallyUnowned<br />
+----GstObject<br />
+----GstElement<br />
+----GstBin<br />
+----GstRrMotionDetectionBin<br />
<br />
Implemented Interfaces:<br />
GstChildProxy<br />
<br />
Pad Templates:<br />
SINK template: 'sink'<br />
Availability: Always<br />
Capabilities:<br />
video/x-raw<br />
format: { (string)I420, (string)YV12, (string)YUY2, (string)UYVY, (string)AYUV, (string)VUYA, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)RGB, (string)BGR, (string)Y41B, (string)Y42B, (string)YVYU, (string)Y444, (string)v210, (string)v216, (string)Y210, (string)Y410, (string)NV12, (string)NV21, (string)GRAY8, (string)GRAY16_BE, (string)GRAY16_LE, (string)v308, (string)RGB16, (string)BGR16, (string)RGB15, (string)BGR15, (string)UYVP, (string)A420, (string)RGB8P, (string)YUV9, (string)YVU9, (string)IYU1, (string)ARGB64, (string)AYUV64, (string)r210, (string)I420_10BE, (string)I420_10LE, (string)I422_10BE, (string)I422_10LE, (string)Y444_10BE, (string)Y444_10LE, (string)GBR, (string)GBR_10BE, (string)GBR_10LE, (string)NV16, (string)NV24, (string)NV12_64Z32, (string)A420_10BE, (string)A420_10LE, (string)A422_10BE, (string)A422_10LE, (string)A444_10BE, (string)A444_10LE, (string)NV61, (string)P010_10BE, (string)P010_10LE, (string)IYU2, (string)VYUY, (string)GBRA, (string)GBRA_10BE, (string)GBRA_10LE, (string)BGR10A2_LE, (string)GBR_12BE, (string)GBR_12LE, (string)GBRA_12BE, (string)GBRA_12LE, (string)I420_12BE, (string)I420_12LE, (string)I422_12BE, (string)I422_12LE, (string)Y444_12BE, (string)Y444_12LE, (string)GRAY10_LE32, (string)NV12_10LE32, (string)NV16_10LE32, (string)NV12_10LE40 }<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
video/x-raw(memory:NVMM)<br />
format: { (string)I420, (string)I420_10LE, (string)P010_10LE, (string)I420_12LE, (string)UYVY, (string)YUY2, (string)YVYU, (string)NV12, (string)NV16, (string)NV24, (string)GRAY8, (string)BGRx, (string)RGBA, (string)Y42B }<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
<br />
SRC template: 'src'<br />
Availability: Always<br />
Capabilities:<br />
video/x-raw<br />
format: { (string)I420, (string)YV12, (string)YUY2, (string)UYVY, (string)AYUV, (string)VUYA, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)RGB, (string)BGR, (string)Y41B, (string)Y42B, (string)YVYU, (string)Y444, (string)v210, (string)v216, (string)Y210, (string)Y410, (string)NV12, (string)NV21, (string)GRAY8, (string)GRAY16_BE, (string)GRAY16_LE, (string)v308, (string)RGB16, (string)BGR16, (string)RGB15, (string)BGR15, (string)UYVP, (string)A420, (string)RGB8P, (string)YUV9, (string)YVU9, (string)IYU1, (string)ARGB64, (string)AYUV64, (string)r210, (string)I420_10BE, (string)I420_10LE, (string)I422_10BE, (string)I422_10LE, (string)Y444_10BE, (string)Y444_10LE, (string)GBR, (string)GBR_10BE, (string)GBR_10LE, (string)NV16, (string)NV24, (string)NV12_64Z32, (string)A420_10BE, (string)A420_10LE, (string)A422_10BE, (string)A422_10LE, (string)A444_10BE, (string)A444_10LE, (string)NV61, (string)P010_10BE, (string)P010_10LE, (string)IYU2, (string)VYUY, (string)GBRA, (string)GBRA_10BE, (string)GBRA_10LE, (string)BGR10A2_LE, (string)GBR_12BE, (string)GBR_12LE, (string)GBRA_12BE, (string)GBRA_12LE, (string)I420_12BE, (string)I420_12LE, (string)I422_12BE, (string)I422_12LE, (string)Y444_12BE, (string)Y444_12LE, (string)GRAY10_LE32, (string)NV12_10LE32, (string)NV16_10LE32, (string)NV12_10LE40 }<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
video/x-raw(memory:NVMM)<br />
format: { (string)I420, (string)I420_10LE, (string)P010_10LE, (string)I420_12LE, (string)UYVY, (string)YUY2, (string)YVYU, (string)NV12, (string)NV16, (string)NV24, (string)GRAY8, (string)BGRx, (string)RGBA, (string)Y42B }<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
<br />
Element has no clocking capabilities.<br />
Element has no URI handling capabilities.<br />
<br />
Pads:<br />
SINK: 'sink'<br />
SRC: 'src'<br />
Pad Template: 'src'<br />
<br />
Element Properties:<br />
async-handling : The bin will handle Asynchronous state changes<br />
flags: readable, writable<br />
Boolean. Default: false<br />
cuda-blob-detection : Use blob detection algorithm (in charge of detecting the motion bounding boxes) implemented in cuda otherwise use cpu blob detection<br />
flags: readable, writable<br />
Boolean. Default: false<br />
grayscale : When enabled, the motion detection is performed over a grayscaleimage (color dropped)<br />
flags: readable, writable<br />
Boolean. Default: true<br />
message-forward : Forwards all children messages<br />
flags: readable, writable<br />
Boolean. Default: false<br />
name : The name of the object<br />
flags: readable, writable<br />
String. Default: "rrmotiondetectionbin0"<br />
parent : The parent of the object<br />
flags: readable, writable<br />
Object of type "GstObject"<br />
<br />
</pre></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/Performance/Xavier_NX&diff=48616GPU Accelerated Motion Detector/Performance/Xavier NX2023-03-31T20:02:03Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE: GStreamer Motion Detection Bin Performance |noerror}}<noinclude><br />
<br />
This section provides performance measurements for the rrmotiondetectionbin operating in various configurations. The following list describes different aspects of performance that were measured during the evaluation:<br />
<br />
* Frame rate: This indicates the number of frames that can be processed per second. <br />
* CPU utilization: This refers to the average percentage of CPU resources used by the pipeline.<br />
* GPU utilization: This refers to the average percentage of GPU resources used by the pipeline.<br />
<br />
To measure these aspects, [https://github.com/RidgeRun/gst-perf RidgeRun's perf element] and NVIDIA's tegrastats application were used. The measurements were taken using JetPack 4.5 on a Jetson Xavier NX configured in power mode ID 5. The results of these measurements can be found in the following subsections.<br />
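<br />
As an illustrative sketch of the methodology (the exact tegrastats flags may vary between JetPack releases, and the pipeline shown assumes a videotestsrc input), tegrastats can be left logging in the background while the pipeline under test runs with the perf element:<br />
<br />
<pre><br />
# Start tegrastats logging (CPU/GPU utilization) in the background<br />
tegrastats --start --interval 1000 --logfile tegrastats.log<br />
<br />
# Run the pipeline under test; the perf element reports the frame rate<br />
gst-launch-1.0 videotestsrc num-buffers=300 ! rrmotiondetectionbin ! perf print-cpu-load=true ! fakesink sync=true<br />
<br />
# Stop the background tegrastats session<br />
tegrastats --stop<br />
</pre><br />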
<br />
== Algorithms Configurations ==<br />
<br />
The tables below show the results of testing different motion detection algorithms, blob detection methods, and processing formats.<br />
<br />
* The motion detection algorithm was changed with the property '''motion_detector::motion''', which can take two values: 'mog' or 'mog2'.<br />
* The blob detection method was changed with the property '''cuda-blob-detection''', which can take two values: true to use GPU-based blob detection or false to use CPU-based blob detection.<br />
* The processing format was changed with the property '''grayscale''', which can take two values: true to process the video in grayscale or false to process the video in RGBA.<br />
<br />
The rest of the configuration uses the default values: a noise reduction kernel size of 9 and a sample frequency of 1.<br />
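<br />
The pipelines below select each test combination through shell variables. As a sketch, one combination could be defined as follows (the values are examples only):<br />
<br />
<pre><br />
MOTION=mog2            # motion_detector::motion: 'mog' or 'mog2'<br />
BLOB_METHOD=true       # cuda-blob-detection: true (GPU) or false (CPU)<br />
PROCESS_FORMAT=true    # grayscale: true (grayscale) or false (RGBA)<br />
</pre><br />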
<br />
=== 4K Input Resolution ===<br />
<br />
The measurements in this subsection were obtained from the following pipeline using a 4K input file:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin motion_detector::motion=$MOTION cuda-blob-detection=$BLOB_METHOD grayscale=$PROCESS_FORMAT ! perf ! fakesink sync=true<br />
</pre><br />
<br />
The rrmotiondetectionbin configuration was changed, using the corresponding element properties, to determine its performance with different settings.<br />
{| class="wikitable" style="text-align: center;margin: auto; " <br />
! colspan="2" | Motion Detection Algorithm<br />
! colspan="2" | Blob Detection Method<br />
! colspan="2" | Processing Format<br />
! rowspan="2" | Framerate<br />
! rowspan="2" | CPU<br />
! rowspan="2" | GPU<br />
|-<br />
! mog<br />
! mog2<br />
! CPU<br />
! GPU<br />
! Grayscale<br />
! RGBA<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.91<br />
| 16.769<br />
| 88.397<br />
|-<br />
| x<br />
|<br />
| <br />
| x<br />
| x<br />
|<br />
| 22.098<br />
| 13.617<br />
| 79.529<br />
|-<br />
| <br />
| x<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.921<br />
| 15.963<br />
| 75.115<br />
|-<br />
| <br />
| x<br />
| <br />
| x<br />
| x<br />
|<br />
| 23.509<br />
| 13.938<br />
| 83.529<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| <br />
| x<br />
| 23.097<br />
| 12<br />
| 90.971<br />
|}<br />
<br />
=== 1080p Input Resolution ===<br />
<br />
The measurements in this subsection were obtained from the following pipeline, using a 4K input file downscaled to 1080p:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),width=1920,height=1080" ! queue ! rrmotiondetectionbin motion_detector::motion=$MOTION cuda-blob-detection=$BLOB_METHOD grayscale=$PROCESS_FORMAT ! perf print-cpu-load=true ! fakesink sync=true<br />
</pre><br />
<br />
The rrmotiondetectionbin configuration was changed through the corresponding element properties to determine its performance under different settings.<br />
{| class="wikitable" style="text-align: center; margin: auto;"<br />
! colspan="2" | Motion Detection Algorithm<br />
! colspan="2" | Blob Detection Method<br />
! colspan="2" | Processing Format<br />
! rowspan="2" | Framerate<br />
! rowspan="2" | CPU<br />
! rowspan="2" | GPU<br />
|-<br />
! mog<br />
! mog2<br />
! CPU<br />
! GPU<br />
! Grayscale<br />
! RGBA<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.927<br />
| 10.423<br />
| 48.391<br />
|-<br />
| x<br />
|<br />
| <br />
| x<br />
| x<br />
|<br />
| 29.921<br />
| 8.74<br />
| 51.807<br />
|-<br />
| <br />
| x<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.925<br />
| 10<br />
| 29.615<br />
|-<br />
| <br />
| x<br />
| <br />
| x<br />
| x<br />
|<br />
| 29.921<br />
| 10.423<br />
| 40.629<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| <br />
| x<br />
| 29.928<br />
| 10<br />
| 51.08<br />
|}<br />
<br />
== Denoiser Kernel Size ==<br />
<br />
To further analyze the impact of the denoiser, we measured the performance for different kernel sizes, keeping the rest of the configuration fixed as follows:<br />
* Motion detection algorithm: MOG<br />
* Blob detection: CPU<br />
* Sample Frequency: 1<br />
* Processing format: grayscale<br />
<br />
<br />
=== 4K Input Resolution ===<br />
<br />
The following pipeline was used for these measurements:<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin motion_detector::motion=mog motion_denoiser::size=$KERNEL_SIZE cuda-blob-detection=false ! perf print-cpu-load=true ! fakesink sync=true<br />
</pre><br />
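The <code>$KERNEL_SIZE</code> placeholder above can be swept with a small loop. This sketch only builds and prints the commands so they can be reviewed before being run on the target board; it is not part of the original measurement procedure.<br />

```shell
# Sketch: generate the benchmark command for each tested kernel size.
# denoiser_cmd only builds the command string; run the printed commands
# on the Jetson board itself.
denoiser_cmd() {
  echo "gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin motion_detector::motion=mog motion_denoiser::size=$1 cuda-blob-detection=false ! perf print-cpu-load=true ! fakesink sync=true"
}

for KERNEL_SIZE in 5 9 11; do
  denoiser_cmd "$KERNEL_SIZE"
done
```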
<br />
<br />
{| class="wikitable" style="text-align: center; margin: auto;"<br />
! Denoiser Kernel Size<br />
! Framerate<br />
! CPU<br />
! GPU<br />
|-<br />
| 5<br />
| 29.906<br />
| 18.076<br />
| 86.577<br />
|-<br />
| 9<br />
| 29.91<br />
| 16.769<br />
| 88.397<br />
|-<br />
|11<br />
|29.924<br />
|16.153<br />
|84.462<br />
|}<br />
<br />
=== 1080p Input Resolution ===<br />
<br />
The following pipeline was used for these measurements:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),width=1920,height=1080" ! queue ! rrmotiondetectionbin cuda-blob-detection=false motion_detector::motion=mog motion_denoiser::size=$KERNEL_SIZE ! perf print-cpu-load=true ! fakesink sync=true<br />
</pre><br />
<br />
{| class="wikitable" style="text-align: center; margin: auto;"<br />
! Denoiser Kernel Size<br />
! Framerate<br />
! CPU<br />
! GPU<br />
|-<br />
| 5<br />
| 29.927<br />
| 9.777<br />
| 40.16<br />
|-<br />
| 9<br />
| 29.927<br />
| 10.423<br />
| 48.391<br />
|-<br />
|11<br />
|29.925<br />
|10<br />
|37.923<br />
|}<br />
<br />
== Motion Sampling Frequency ==<br />
<br />
This subsection shows the performance changes caused by different sampling frequencies, keeping the rest of the configuration fixed as follows:<br />
* Motion detection algorithm: MOG<br />
* Blob detection: CPU<br />
* Denoise kernel size: 9<br />
* Processing format: grayscale<br />
<br />
=== 4K Input Resolution ===<br />
<br />
The following pipeline was used for these measurements:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin motion_detector::motion=mog motion_detector::sample-frequency=$FREQ motion_denoiser::size=9 cuda-blob-detection=false ! perf print-cpu-load=true ! fakesink sync=true<br />
</pre><br />
<br />
{| class="wikitable" style="text-align: center; margin: auto;"<br />
! Sample Frequency<br />
! Framerate<br />
! CPU<br />
! GPU<br />
|-<br />
| 1<br />
| 29.91<br />
| 16.769<br />
| 88.397<br />
|-<br />
| 2<br />
| 30.232<br />
| 11.44<br />
| 56.292<br />
|-<br />
| 4<br />
| 30.448<br />
| 8.72<br />
| 40.739<br />
|}<br />
<br />
=== 1080p Input Resolution ===<br />
<br />
The following pipeline was used for these measurements:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),width=1920,height=1080" ! queue ! rrmotiondetectionbin cuda-blob-detection=false motion_detector::motion=mog motion_denoiser::size=9 motion_detector::sample-frequency=$FREQ ! perf print-cpu-load=true ! fakesink sync=true<br />
</pre><br />
<br />
{| class="wikitable" style="text-align: center; margin: auto;"<br />
! Sample Frequency<br />
! Framerate<br />
! CPU<br />
! GPU<br />
|-<br />
| 1<br />
| 29.927<br />
| 10.423<br />
| 48.391<br />
|-<br />
| 2<br />
| 30.561<br />
| 8.038<br />
| 41.9<br />
|-<br />
| 4<br />
| 30.572<br />
| 7.038<br />
| 22.778<br />
|}<br />
<br />
<br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/Examples/GStreamer_Pipelines&diff=48468GPU Accelerated Motion Detector/Examples/GStreamer Pipelines2023-03-28T15:46:33Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
This section provides examples of pipelines that demonstrate how to use the RidgeRun motion detection solution. The pipelines primarily utilize the rrmotiondetectionbin element, which handles the entire motion detection process, and the rrmotionoverlay element, which provides a visual representation of the detections. Throughout the examples, some pipeline restrictions imposed by the NVIDIA conversion element and by gst-cuda itself will be pointed out.<br />
<br />
== Video Test ==<br />
<br />
The following examples showcase basic pipelines that use videotestsrc to display the ball pattern, which has a clear movement flow.<br />
<br />
The first pipeline is a simple example that detects motion in the ball pattern and draws a bounding box around the area of detection:<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! rrmotiondetectionbin ! rrmotionoverlay thickness=2 ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420" ! nv3dsink<br />
</syntaxhighlight><br />
<br />
Please note that we have not specified any input format restrictions, which means that the pattern will be processed in grayscale. To display the pattern, we have converted it to the I420 format, which we know is compatible with nv3dsink. This element has some format restrictions, and the negotiation process may not work as expected for other formats.<br />
<br />
<br />
If you would like to receive the video in the RGBA format, there are some restrictions that you should be aware of. You can use a pipeline similar to the one below, which utilizes the system memory RGBA format. However, you must set the 'grayscale' property to false in order to perform internal motion processing in RGBA format. If you leave this property as true, you may encounter a negotiation error. This is because the internal conversion element, nvvidconv, cannot convert from system memory to system memory. Additionally, the base classes of the motion elements provided by gst-cuda do not support NVMM memory with gray format.<br />
<br />
<br />
It is worth noting that we have set the rrmotionoverlay color property to define the bounding box color, which can be chosen based on your preference. <br />
<br />
<syntaxhighlight lang=bash><br />
# Use red line color for bounding boxes<br />
export COLOR=0xffff0000<br />
<br />
# Use line below to change color to green<br />
# export COLOR=0xff00ff00<br />
<br />
# Use line below to change color to blue<br />
# export COLOR=0xff0000ff<br />
<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! video/x-raw,format=RGBA ! rrmotiondetectionbin grayscale=false ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
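Judging from the red, green, and blue values above, the color property uses a 32-bit 0xAARRGGBB layout, with the alpha channel in the high byte. As an illustrative sketch (the `argb` helper below is ours, not part of the plugin), such values can be built from individual channels:<br />

```python
def argb(alpha, red, green, blue):
    """Pack 8-bit channels into the 0xAARRGGBB layout used by the examples."""
    return (alpha << 24) | (red << 16) | (green << 8) | blue

# Fully opaque red, as used for the bounding boxes above.
print(f"0x{argb(0xFF, 0xFF, 0x00, 0x00):08x}")  # → 0xffff0000
```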
<br />
<br />
However, if you would like to use grayscale for motion processing while still receiving the video in a color format, it is possible to do so; after all, it is no secret that grayscale processing uses fewer resources. To achieve this, you will need to include two additional conversions in the pipeline, as shown below:<br />
<br />
<syntaxhighlight lang=bash><br />
export COLOR=0xff0000ff<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
<br />
We have defined a background color for the ball pattern so you can verify that the final video is in color. As you can see, we have added an nvvidconv element to provide NVMM memory in RGBA format, which enables the bin's internal conversion to system memory in GRAY format. Additionally, the second nvvidconv element is required for rrmotionoverlay: the bin provides NVMM memory, but our overlay element only handles system memory. If you were to create an element that processes the motion bounding boxes in NVMM memory, you would not need this conversion.<br />
<br />
[[File:MotionDetectionPatternBall.jpg|thumbnail|1200px|center|Motion detection in pattern ball test]]<br />
<br />
<br />
Please take a look at the pipeline below, in which the upper half of the image has been defined as the region of interest using the motion detector's 'roi' property. You will notice that the bounding box for the ball is only drawn in the selected region, where it is located.<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true motion_detector::roi="<<(float)0,(float)0,(float)1,(float)0.5>>" ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
== Camera Capture ==<br />
<br />
For a more realistic example, you can feed the motion bin with video captured from a camera using the following pipeline:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true motion_detector::motion=mog2 ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v<br />
</syntaxhighlight><br />
<br />
Remember that you can choose to run the blob detection on the CPU or the GPU with the bin's cuda-blob-detection property. By default, the bin uses the CPU element, but you can set cuda-blob-detection to true to use the GPU element. However, keep in mind that the CUDA version may reduce your frame rate, especially at larger resolutions. Therefore, we recommend the CPU version, which may consume slightly more CPU but does a better job at maintaining your real-time frame rate.<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true cuda-blob-detection=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v<br />
</syntaxhighlight><br />
<br />
To conserve processing resources, it's possible to downscale the video for motion detection. The bounding box values are normalized, so you can use them on the original-size video without any issues. Check out the pipeline below for an example:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! queue ! rrmotiondetectionbin grayscale=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false <br />
</syntaxhighlight><br />
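Because the reported coordinates are normalized to the [0, 1] range, mapping a detection from the downscaled stream back onto the full-resolution frame is a simple scaling step. The sketch below is illustrative only (the helper name and box values are ours):<br />

```python
def to_pixels(box, width, height):
    """Scale a normalized (x1, y1, x2, y2) bounding box to pixel coordinates."""
    x1, y1, x2, y2 = box
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

# A box detected on a downscaled 1080p stream, mapped onto a 4K original.
print(to_pixels((0.25, 0.5, 0.75, 1.0), 3840, 2160))  # → (960, 1080, 2880, 2160)
```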
<br />
Rather than displaying the results, it's possible to use appsink to obtain the buffer along with the corresponding motion metadata in your application. Alternatively, you can create a custom element that retrieves the metadata and processes it based on your specific requirements. Here's an example of how to draw motion bounding boxes and record to a file:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 -e nvarguscamerasrc ! "video/x-raw(memory:NVMM),width=3840,height=2160" ! queue ! rrmotiondetectionbin name=bin ! queue ! nvvidconv ! video/x-raw,format=RGBA ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! queue ! nvv4l2h264enc ! h264parse ! queue ! qtmux ! filesink location=test.mp4<br />
</syntaxhighlight><br />
<br />
== Recorded File ==<br />
<br />
To analyze the motion objects in a recorded file, you can utilize the following pipelines. Use the first pipeline for color format, and the second for grayscale format:<br />
<br />
<syntaxhighlight lang=bash><br />
export FILE=<path to file><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin name=bin ! queue ! nvvidconv ! video/x-raw,format=RGBA ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
<br />
[[File:MotionDetectionStreetColor.jpg|thumbnail|1200px|center|Recorded file motion detection in color]]<br />
<br />
<br />
<syntaxhighlight lang=bash><br />
export FILE=<path to file><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=GRAY8 ! queue ! rrmotiondetectionbin name=bin motion_detector::algorithm=mog2 noise_reduction::size=3 ! queue ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
<br />
[[File:MotionDetectionStreetGray.jpg|thumbnail|1200px|center|Recorded file motion detection in gray format ]]<br />
<br />
<br />
In the grayscale pipeline, you may notice that we've made some changes to the bin's internal element properties. Specifically, we've changed the motion detection algorithm to MOG2 and the noise reduction kernel size to 3. It's important to remember that you can always access and modify the internal element properties to fine-tune and optimize the settings for your specific use case.<br />
<br />
Using a recorded file can make it easier to see the roi (region of interest) property in action. This property enables the motion detection to focus solely on the area of interest. For instance, you can set the motion detection to only detect motion in the right half of the image with a pipeline like the following:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 filesrc location=$FILE ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=GRAY8 ! queue ! rrmotiondetectionbin name=bin motion_detector::roi="<<(float)0.5,(float)0,(float)0.5,(float)1>>" ! queue ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false<br />
</syntaxhighlight><br />
<br />
<br />
[[File:MotionDetectionGrayRoi.jpg|thumbnail|1200px|center|Motion detection in right half ROI]]<br />
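Comparing the two roi pipelines above, the tuple appears to encode ⟨x, y, width, height⟩ in normalized coordinates: <<0,0,1,0.5>> selects the upper half and <<0.5,0,0.5,1>> the right half. This interpretation is inferred from the examples, not from the plugin documentation; the hypothetical helper below converts such a tuple to corner form for easier reasoning:<br />

```python
def roi_corners(roi):
    """Convert a normalized (x, y, width, height) ROI to (x1, y1, x2, y2)."""
    x, y, w, h = roi
    return (x, y, x + w, y + h)

print(roi_corners((0.5, 0.0, 0.5, 1.0)))  # right half → (0.5, 0.0, 1.0, 1.0)
print(roi_corners((0.0, 0.0, 1.0, 0.5)))  # upper half → (0.0, 0.0, 1.0, 0.5)
```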
<br />
== Getting the motion signal ==<br />
<br />
As an alternative to reading the motion metadata directly from the processed buffers, you can connect to the on-new-motion signal from your application to retrieve the bounding box information. Here is a simple example using gstd to connect to the on-new-motion signal.<br />
<br />
You need to have gstd installed; check the instructions [https://developer.ridgerun.com/wiki/index.php/GStreamer_Daemon_-_Building_GStreamer_Daemon here] if you don't have it already. Then follow the next steps to create a pipeline and connect to the motion signal:<br />
<br />
* Run gstd as a daemon:<br />
<syntaxhighlight lang=bash><br />
gstd -e <br />
</syntaxhighlight><br />
<br />
* Then get into the gstd-client interactive console:<br />
<syntaxhighlight lang=bash><br />
$ gstd-client <br />
GStreamer Daemon Copyright (C) 2015-2022 Ridgerun, LLC (http://www.ridgerun.com)<br />
This program comes with ABSOLUTELY NO WARRANTY; for details type `warranty'.<br />
This is free software, and you are welcome to redistribute it<br />
under certain conditions; read the license for more details.<br />
gstd> <br />
</syntaxhighlight><br />
<br />
* Create the motion detection pipeline:<br />
<syntaxhighlight lang=bash><br />
gstd> pipeline_create test filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin name=bin cuda-blob-detection=false ! perf ! fakesink<br />
</syntaxhighlight><br />
* Start the pipeline<br />
<syntaxhighlight lang=bash><br />
gstd> pipeline_play test<br />
</syntaxhighlight><br />
* Connect to the signal; the command will wait until motion is detected and then provide the motion JSON description.<br />
<br />
<syntaxhighlight lang=bash><br />
gstd> signal_connect test bin on-new-motion<br />
{<br />
"code" : 0,<br />
"description" : "Success",<br />
"response" : {<br />
"name" : "on-new-motion",<br />
"arguments" : [<br />
{<br />
"type" : "GstRrMotionDetectionBin",<br />
"value" : "(GstRrMotionDetectionBin) bin"<br />
},<br />
{<br />
"type" : "gchararray",<br />
"value" : "{\"ROIs\":[{\"motion\":[{\"x1\":0.13177083432674408,\"x2\":0.17578125,\"y1\":0.7282407283782959,\"y2\":0.80509257316589355}, <br />
{\"x1\":0.62526041269302368,\"x2\":0.6484375,\"y1\":0.62870371341705322,\"y2\":0.75648152828216553}],\"name\":\"roi\",\"x1\":0,\"x2\":1,\"y1\":0,\"y2\":1}]}"<br />
}<br />
]<br />
}<br />
}<br />
</syntaxhighlight><br />
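The gchararray argument carries a JSON description of the detections, so an application (or a gstd client script) can parse it with any JSON library. A minimal Python sketch, using a shortened version of the payload shown above:<br />

```python
import json

# Shortened version of the on-new-motion payload shown above.
payload = ('{"ROIs":[{"name":"roi","x1":0,"y1":0,"x2":1,"y2":1,'
           '"motion":[{"x1":0.1318,"y1":0.7282,"x2":0.1758,"y2":0.8051}]}]}')

data = json.loads(payload)
for roi in data["ROIs"]:
    for box in roi.get("motion", []):
        # Each box is in normalized (x1, y1, x2, y2) coordinates.
        print(f'{roi["name"]}: ({box["x1"]:.2f}, {box["y1"]:.2f}) '
              f'-> ({box["x2"]:.2f}, {box["y2"]:.2f})')
```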
<br />
<br />
<br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:MotionDetectionGrayRoi.jpg&diff=48467File:MotionDetectionGrayRoi.jpg2023-03-28T15:42:38Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:MotionDetectionStreetColor.jpg&diff=48466File:MotionDetectionStreetColor.jpg2023-03-28T15:40:51Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:MotionDetectionStreetGray.jpg&diff=48465File:MotionDetectionStreetGray.jpg2023-03-28T15:39:18Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/Examples/GStreamer_Pipelines&diff=48464GPU Accelerated Motion Detector/Examples/GStreamer Pipelines2023-03-28T15:37:03Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/Examples/GStreamer_Pipelines&diff=48458GPU Accelerated Motion Detector/Examples/GStreamer Pipelines2023-03-28T15:20:05Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
This section provides examples of pipelines that demonstrate how to use the RidgeRun motion detection solution. The pipelines primarily utilize the rrmotiondetectionbin element, which handles the entire motion detection process, and the rrmotionoverlay element, which provides a visual representation of the detections. During the examples some pipeline restrictions will be presented, caused by the NVIDIA conversion element and gst-cuda itself. <br />
<br />
<br />
== Video Test ==<br />
<br />
The following examples showcase basic pipelines that use videotestsrc to display the ball pattern, which has a clear movement flow.<br />
<br />
The first pipeline is a simple example that detects motion in the ball pattern and draws a bounding box around the area of detection:<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! rrmotiondetectionbin ! rrmotionoverlay thickness=2 ! queue ! nvvidconv ! “video/x-raw(memory:NVMM),format=I420” ! nv3dsink<br />
</syntaxhighlight><br />
<br />
Please note that we have not specified any input format restrictions, which means that the pattern will be processed in grayscale. To display the pattern, we have converted it to the I420 format, which we know is compatible with nv3dsink, this element has some format restrictions, and the negotiation process may not work as expected for other formats.<br />
<br />
<br />
If you would like to receive the video in the RGBA format, there are some restrictions that you should be aware of. You can use a pipeline similar to the one below, which utilizes the system memory RGBA format. However, you must set the 'grayscale' property to false in order to perform internal motion processing in RGBA format. If you leave this property as true, you may encounter a negotiation error. This is because the internal conversion element, nvvidconv, cannot convert from system memory to system memory. Additionally, the base classes of the motion elements provided by gst-cuda do not support NVMM memory with gray format.<br />
<br />
<br />
It is worth noting that we have set the rrmotionoverlay color property to define the bounding box color, which can be chosen based on your preference. <br />
<br />
<syntaxhighlight lang=bash><br />
# Use red line color for bounding boxes<br />
export COLOR=0xffff0000<br />
<br />
# Use line below to change color to green<br />
# export COLOR=0xff00ff00<br />
<br />
# Use line below to change color to blue<br />
# export COLOR=0xff0000ff<br />
<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! video/x-raw,format=RGBA ! rrmotiondetectionbin grayscale=false ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
<br />
However, if you would like to use grayscale for motion processing while still receiving the video in a color format, it is possible to do so, there is no secret that gray video processing uses fewer resources. To achieve this, you will need to include two additional conversions in the pipeline, as shown below:<br />
<br />
<syntaxhighlight lang=bash><br />
export COLOR=0xff0000ff<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
<br />
<br />
== Camera Capture ==<br />
<br />
For a more realistic example you can capture the video feed to the motion bin from a camera using the following pipeline:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true motion_detector::motion=mog2 ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v<br />
</syntaxhighlight><br />
<br />
Remember that you can choose to use the blob detection in CPU or GPU with bin cuda-blob-detection property. By default the bin uses the CPU element but you can set cuda-blob-detection to true to use the GPU element. However, keep in mind that using the CUDA version may reduce your frame rate, especially for larger resolutions. Therefore, we recommend using the CPU version, which may consume slightly more CPU but provides a better job at maintaining your real-time frame rate.<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3840,height=2160' ! queue ! rrmotiondetectionbin grayscale=true cuda-blob-detection=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false -v<br />
</syntaxhighlight><br />
<br />
To conserve processing resources, it's possible to downscale the video for motion detection. The bounding box values are normalized, so you can use them on the original size video without any issues. Check out the pipeline below for an example:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! queue ! rrmotiondetectionbin grayscale=true ! queue ! nvvidconv ! rrmotionoverlay thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink sync=false <br />
</syntaxhighlight><br />
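Because the reported bounding boxes are normalized to the [0, 1] range, mapping a detection back onto the full-resolution frame is a simple scaling. The snippet below is a minimal sketch with hypothetical box values; the variable names are illustrative and not part of the element's API:<br />

```shell
# Hypothetical normalized bounding box reported by the detector (x, y, w, h in [0, 1])
NORM_X=0.25; NORM_Y=0.125; NORM_W=0.5; NORM_H=0.25

# Full-resolution frame size on which we want to draw the box
WIDTH=3840; HEIGHT=2160

# Scale each normalized value by the corresponding frame dimension (rounded to the nearest pixel)
PIX_X=$(awk -v n="$NORM_X" -v s="$WIDTH"  'BEGIN{printf "%d", n * s + 0.5}')
PIX_Y=$(awk -v n="$NORM_Y" -v s="$HEIGHT" 'BEGIN{printf "%d", n * s + 0.5}')
PIX_W=$(awk -v n="$NORM_W" -v s="$WIDTH"  'BEGIN{printf "%d", n * s + 0.5}')
PIX_H=$(awk -v n="$NORM_H" -v s="$HEIGHT" 'BEGIN{printf "%d", n * s + 0.5}')

echo "box in pixels: x=$PIX_X y=$PIX_Y w=$PIX_W h=$PIX_H"
```

This is why running detection on a downscaled stream does not affect where the boxes land on the original video.<br />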
<br />
Rather than displaying the results, it's possible to use appsink to obtain the buffer along with the corresponding motion metadata in your application. Alternatively, you can create a custom element that retrieves the metadata and processes it based on your specific requirements. Here's an example of how to draw motion bounding boxes and record to a file:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 -e nvarguscamerasrc ! "video/x-raw(memory:NVMM),width=3840,height=2160" ! queue ! rrmotiondetectionbin name=bin ! queue ! nvvidconv ! video/x-raw,format=RGBA ! rrmotionoverlay color=0xffff0000 thickness=10 ! nvvidconv ! queue ! nvv4l2h264enc ! h264parse ! queue ! qtmux ! filesink location=test.mp4<br />
</syntaxhighlight><br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:MotionDetectionPatternBall.jpg&diff=48388File:MotionDetectionPatternBall.jpg2023-03-27T16:54:26Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/Examples/GStreamer_Pipelines&diff=48387GPU Accelerated Motion Detector/Examples/GStreamer Pipelines2023-03-27T16:44:21Z<p>Mmontero: Created page with "<noinclude> {{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}} </noinclude> This section provides examples of p..."</p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
This section provides examples of pipelines that demonstrate how to use the RidgeRun motion detection solution. The pipelines primarily utilize the rrmotiondetectionbin element, which handles the entire motion detection process, and the rrmotionoverlay element, which provides a visual representation of the detections.<br />
<br />
<br />
Throughout the examples, some pipeline restrictions will come up; these are caused by the NVIDIA conversion element and by gst-cuda itself. <br />
<br />
<br />
== Video Test ==<br />
<br />
The following examples showcase basic pipelines that use videotestsrc to display the ball pattern, which has a clear movement flow.<br />
<br />
The first pipeline is a simple example that detects motion in the ball pattern and draws a bounding box around the area of detection:<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! rrmotiondetectionbin ! rrmotionoverlay thickness=2 ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420" ! nv3dsink<br />
</syntaxhighlight><br />
<br />
Please note that we have not specified any input format restrictions, which means that the pattern will be processed in grayscale. To display the pattern, we have converted it to the I420 format, which we know is compatible with nv3dsink. This element has some format restrictions, and the negotiation process may not work as expected for other formats.<br />
<br />
<br />
If you would like to receive the video in the RGBA format, there are some restrictions that you should be aware of. You can use a pipeline similar to the one below, which utilizes the system memory RGBA format. However, you must set the 'grayscale' property to false in order to perform internal motion processing in RGBA format. If you leave this property as true, you may encounter a negotiation error. This is because the internal conversion element, nvvidconv, cannot convert from system memory to system memory. Additionally, the base classes of the motion elements provided by gst-cuda do not support NVMM memory with gray format.<br />
<br />
<br />
It is worth noting that we have set the rrmotionoverlay color property to define the bounding box color, which can be chosen based on your preference. <br />
<br />
<syntaxhighlight lang=bash><br />
# Use red line color for bounding boxes<br />
export COLOR=0xffff0000<br />
<br />
# Use line below to change color to green<br />
# export COLOR=0xff00ff00<br />
<br />
# Use line below to change color to blue<br />
# export COLOR=0xff0000ff<br />
<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball ! video/x-raw,format=RGBA ! rrmotiondetectionbin grayscale=false ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
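Judging from the red, green, and blue examples above, the color property packs the channels as alpha, red, green, blue (0xAARRGGBB). Assuming that layout, you can compose a color value from its individual channels with printf rather than writing the hex word by hand:<br />

```shell
# Individual channels (0-255); fully opaque red in this example
ALPHA=255; RED=255; GREEN=0; BLUE=0

# Pack the channels into a single 0xAARRGGBB word (assumed byte order, per the examples above)
COLOR=$(printf "0x%02x%02x%02x%02x" "$ALPHA" "$RED" "$GREEN" "$BLUE")

echo "$COLOR"
```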
<br />
<br />
However, if you would like to use grayscale for motion processing (which uses fewer resources) while still receiving the video in a color format, it is possible to do so. To achieve this, you will need to include two additional conversions in the pipeline, as shown below:<br />
<br />
<syntaxhighlight lang=bash><br />
export COLOR=0xff0000ff<br />
gst-launch-1.0 videotestsrc is-live=true pattern=ball background-color=0xffaaaa00 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA" ! rrmotiondetectionbin grayscale=true ! nvvidconv ! rrmotionoverlay color=$COLOR thickness=2 ! nvvidconv ! video/x-raw\(memory:NVMM\),format=I420 ! queue ! nv3dsink<br />
</syntaxhighlight><br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=Template:GPU_Accelerated_Motion_Detector/TOC&diff=48386Template:GPU Accelerated Motion Detector/TOC2023-03-27T16:17:01Z<p>Mmontero: </p>
<hr />
<div>{{Sidebar<br />
| title = [[GPU Accelerated Motion Detector|<span style="color:#00008B; font-size:150%;"><u>GPU Accelerated </br>Motion Detector</u></span>]]<br />
| image = <center>{{Template:RidgeRunlogo}}</center><br />
| headingstyle = border-top: 2px solid; border-top-color: gray; font-size:larger; background-color:#63a3ff;<br />
| contentstyle = <br />
| contentclass = hlist<br />
| heading1 = [[GPU Accelerated Motion Detector/Overview|Overview]]<br />
| content1 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content1 = <br />
*[[GPU Accelerated Motion Detector/Overview/Architecture| Architecture]]<br />
*[[GPU Accelerated Motion Detector/Overview/Supported Platforms| Supported Platforms]]<br />
*[[GPU Accelerated Motion Detector/Overview/Capabilities| Capabilities]]<br />
}}<br />
<br />
| heading2 = [[GPU Accelerated Motion Detector/GStreamer Plugins|GStreamer Plugins]]<br />
| content2 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content2 =<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection| Motion Detection]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Noise Reduction | Noise Reduction]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/CPU Blob Detector | CPU Blob Detector]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/GPU Blob Detector | GPU Blob Detector ]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Overlay | Motion Overlay]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection Bin | Motion Detection Bin]]<br />
}}<br />
<br />
| heading3 = [[GPU Accelerated Motion Detector/Getting Started|Getting Started]]<br />
| content3 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content3 =<br />
*[[GPU Accelerated Motion Detector/Getting Started/Dependencies| Dependencies]]<br />
*[[GPU Accelerated Motion Detector/Getting Started/Evaluating GPUMotionDetector | Evaluating <br />
GPUMotionDetector]]<br />
*[[GPU Accelerated Motion Detector/Getting Started/Building GPUMotionDetector|Building <br />
GPUMotionDetector]]<br />
}}<br />
<br />
| heading4 = [[GPU Accelerated Motion Detector/Examples|Examples]]<br />
| content4 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content4 = <br />
*[[GPU Accelerated Motion Detector/Examples/GStreamer Pipelines | GStreamer Pipelines]]<br />
}}<br />
<br />
| heading5 = [[GPU Accelerated Motion Detector/Performance|Performance]]<br />
| content5 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content5 = <br />
*[[GPU Accelerated Motion Detector/Performance/Xavier NX | Jetson Xavier NX]]<br />
}}<br />
| heading6 = [[GPU Accelerated Motion Detector/Troubleshooting | Troubleshooting]]<br />
<br />
| heading7 = [[GPU Accelerated Motion Detector/FAQ |FAQ]]<br />
<br />
| heading8 = [[GPU Accelerated Motion Detector/Contact_Us|<span style="color:#00008B;font-size:larger;">'''<u>Contact Us</u>'''</span>]]<br />
}}<br />
<br />
<noinclude><br />
[[Category:GPU Accelerated Motion Detector Templates]]<br />
[[Category:TOCs using Sidebar]]<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=Template:GPU_Accelerated_Motion_Detector/TOC&diff=48385Template:GPU Accelerated Motion Detector/TOC2023-03-27T16:16:22Z<p>Mmontero: </p>
<hr />
<div>{{Sidebar<br />
| title = [[GPU Accelerated Motion Detector|<span style="color:#00008B; font-size:150%;"><u>GPU Accelerated </br>Motion Detector</u></span>]]<br />
| image = <center>{{Template:RidgeRunlogo}}</center><br />
| headingstyle = border-top: 2px solid; border-top-color: gray; font-size:larger; background-color:#63a3ff;<br />
| contentstyle = <br />
| contentclass = hlist<br />
| heading1 = [[GPU Accelerated Motion Detector/Overview|Overview]]<br />
| content1 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content1 = <br />
*[[GPU Accelerated Motion Detector/Overview/Architecture| Architecture]]<br />
*[[GPU Accelerated Motion Detector/Overview/Supported Platforms| Supported Platforms]]<br />
*[[GPU Accelerated Motion Detector/Overview/Capabilities| Capabilities]]<br />
}}<br />
<br />
| heading2 = [[GPU Accelerated Motion Detector/GStreamer Plugins|GStreamer Plugins]]<br />
| content2 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content2 =<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection| Motion Detection]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Noise Reduction | Noise Reduction]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/CPU Blob Detector | CPU Blob Detector]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/GPU Blob Detector | GPU Blob Detector ]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Overlay | Motion Overlay]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection Bin | Motion Detection Bin]]<br />
}}<br />
<br />
| heading3 = [[GPU Accelerated Motion Detector/Getting Started|Getting Started]]<br />
| content3 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content3 =<br />
*[[GPU Accelerated Motion Detector/Getting Started/Dependencies| Dependencies]]<br />
*[[GPU Accelerated Motion Detector/Getting Started/Evaluating GPUMotionDetector | Evaluating <br />
GPUMotionDetector]]<br />
*[[GPU Accelerated Motion Detector/Getting Started/Building GPUMotionDetector|Building <br />
GPUMotionDetector]]<br />
}}<br />
<br />
| heading4 = [[GPU Accelerated Motion Detector/Examples|Examples]]<br />
| content4 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content4 = <br />
*[[GPU Accelerated Motion Detector/Examples/Gstreamer Pipelines | GStreamer Pipelines]]<br />
}}<br />
<br />
| heading5 = [[GPU Accelerated Motion Detector/Performance|Performance]]<br />
| content5 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content5 = <br />
*[[GPU Accelerated Motion Detector/Performance/Xavier NX | Jetson Xavier NX]]<br />
}}<br />
| heading6 = [[GPU Accelerated Motion Detector/Troubleshooting | Troubleshooting]]<br />
<br />
| heading7 = [[GPU Accelerated Motion Detector/FAQ |FAQ]]<br />
<br />
| heading8 = [[GPU Accelerated Motion Detector/Contact_Us|<span style="color:#00008B;font-size:larger;">'''<u>Contact Us</u>'''</span>]]<br />
}}<br />
<br />
<noinclude><br />
[[Category:GPU Accelerated Motion Detector Templates]]<br />
[[Category:TOCs using Sidebar]]<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/Performance/Xavier_NX&diff=48384GPU Accelerated Motion Detector/Performance/Xavier NX2023-03-27T16:15:11Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GPU Accelerated Motion Detector/Head|previous=|next=|metakeywords=Motion Detection Algorithm|metadescription=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE: GStreamer Motion Detection Bin Performance |noerror}}<noinclude><br />
<br />
This section provides performance measurements for the rrmotiondetectionbin operating in various configurations. The measurements were taken using Jetpack 4.5 on a Jetson Xavier NX.<br />
<br />
The following list describes different aspects of performance that were measured during the evaluation:<br />
<br />
* Frame rate: This indicates the number of frames that can be processed per second. <br />
* CPU utilization: This refers to the average percentage of CPU resources used by the pipeline.<br />
* GPU utilization: This refers to the average percentage of GPU resources used by the pipeline.<br />
<br />
To measure these aspects, the RidgeRun perf element and the NVIDIA tegrastats application were used. The results of these measurements can be found in the following subsections.<br />
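The GPU load figures below are averages over the run. As an illustration of how such an average can be extracted, the sketch below post-processes a tegrastats-style log; the log lines are simulated, and the exact GR3D_FREQ field format may vary between JetPack releases:<br />

```shell
# Simulated tegrastats output (on Jetson, the GPU load is reported as "GR3D_FREQ NN%")
cat > /tmp/tegrastats.log <<'EOF'
RAM 2048/8192MB CPU [12%,10%,8%,9%] GR3D_FREQ 40%
RAM 2049/8192MB CPU [14%,11%,9%,8%] GR3D_FREQ 60%
EOF

# Average the GR3D_FREQ percentage across all samples
AVG=$(awk '{for (i = 1; i <= NF; i++) if ($i == "GR3D_FREQ") { gsub(/%/, "", $(i+1)); sum += $(i+1); n++ }}
           END { printf "%.1f", sum / n }' /tmp/tegrastats.log)

echo "Average GPU load: $AVG%"
```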
<br />
<br />
<br />
== Algorithms Configurations ==<br />
<br />
The table below shows the results of testing different motion detection algorithms, blob detection methods, and processing formats.<br />
<br />
* The motion detection algorithm was changed with the property '''motion_detector::motion''', which can take two values: 'mog' or 'mog2'.<br />
* The blob detection method was changed with the property '''cuda-blob-detection''', which can take two values: true to use GPU-based blob detection or false to use CPU-based blob detection.<br />
* The processing format was changed with the property '''grayscale''', which can take two values: true to process the video in grayscale or false to process the video in RGBA.<br />
<br />
The rest of the configuration uses the default values: a noise reduction kernel size of 9 and a sample frequency of 1.<br />
<br />
=== 4K Input Resolution ===<br />
<br />
The measurements in this subsection were obtained from the following pipeline using a 4K input file:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin motion_detector::motion=$MOTION cuda-blob-detection=$BLOB_METHOD grayscale=$PROCESS_FORMAT ! perf ! fakesink sync=true<br />
</pre><br />
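The pipeline above is a template: $MOTION, $BLOB_METHOD, and $PROCESS_FORMAT must be set first. For example, the following configuration (the values correspond to the element properties described earlier) reproduces the mog2 / CPU blob detection / grayscale row of the table:<br />

```shell
# Configuration matching one row of the measurements table
export MOTION=mog2            # motion_detector::motion algorithm
export BLOB_METHOD=false      # cuda-blob-detection: false selects the CPU blob detector
export PROCESS_FORMAT=true    # grayscale: true processes the video in grayscale

echo "motion=$MOTION cuda-blob-detection=$BLOB_METHOD grayscale=$PROCESS_FORMAT"
```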
<br />
The rrmotiondetectionbin configuration was changed via the corresponding element properties to determine its performance with different settings.<br />
{| class="wikitable" style="text-align: center;margin: auto; " <br />
! colspan="2" | Motion Detection Algorithm<br />
! colspan="2" | Blob Detection Method<br />
! colspan="2" | Processing Format<br />
! rowspan="2" | Framerate<br />
! rowspan="2" | CPU<br />
! rowspan="2" | GPU<br />
|-<br />
! mog<br />
! mog2<br />
! CPU<br />
! GPU<br />
! Grayscale<br />
! RGBA<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.91<br />
| 16.769<br />
| 88.397<br />
|-<br />
| x<br />
|<br />
| <br />
| x<br />
| x<br />
|<br />
| 22.098<br />
| 13.617<br />
| 79.529<br />
|-<br />
| <br />
| x<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.921<br />
| 15.963<br />
| 75.115<br />
|-<br />
| <br />
| x<br />
| <br />
| x<br />
| x<br />
|<br />
| 23.509<br />
| 13.938<br />
| 83.529<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| <br />
| x<br />
| 23.097<br />
| 12<br />
| 90.971<br />
|}<br />
<br />
=== 1080p Input Resolution ===<br />
<br />
The measurements in this subsection were obtained from the following pipeline using a 4K input file downscaled to 1080p:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),width=1920,height=1080" ! queue ! rrmotiondetectionbin motion_detector::motion=$MOTION cuda-blob-detection=$BLOB_METHOD grayscale=$PROCESS_FORMAT ! perf print-cpu-load=true ! fakesink sync=true<br />
</pre><br />
<br />
The rrmotiondetectionbin configuration was changed via the corresponding element properties to determine its performance with different settings.<br />
{| class="wikitable" style="text-align: center; margin: auto;"<br />
! colspan="2" | Motion Detection Algorithm<br />
! colspan="2" | Blob Detection Method<br />
! colspan="2" | Processing Format<br />
! rowspan="2" | Framerate<br />
! rowspan="2" | CPU<br />
! rowspan="2" | GPU<br />
|-<br />
! mog<br />
! mog2<br />
! CPU<br />
! GPU<br />
! Grayscale<br />
! RGBA<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.927<br />
| 10.423<br />
| 48.391<br />
|-<br />
| x<br />
|<br />
| <br />
| x<br />
| x<br />
|<br />
| 29.921<br />
| 8.74<br />
| 51.807<br />
|-<br />
| <br />
| x<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.925<br />
| 10<br />
| 29.615<br />
|-<br />
| <br />
| x<br />
| <br />
| x<br />
| x<br />
|<br />
| 29.921<br />
| 10.423<br />
| 40.629<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| <br />
| x<br />
| 29.928<br />
| 10<br />
| 51.08<br />
|}<br />
<br />
== Denoiser Kernel Size ==<br />
<br />
== Motion Sampling Frequency ==<br />
<br />
{{GPU Accelerated Motion Detector/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GPU_Accelerated_Motion_Detector/Performance/Xavier_NX&diff=48274GPU Accelerated Motion Detector/Performance/Xavier NX2023-03-24T02:43:57Z<p>Mmontero: Created page with "This section provides performance measurements for the rrmotiondetectionbin operating in various configurations. The measurements were taken using Jetpack 4.5 on a Jetson Xavi..."</p>
<hr />
<div>This section provides performance measurements for the rrmotiondetectionbin operating in various configurations. The measurements were taken using Jetpack 4.5 on a Jetson Xavier NX.<br />
<br />
The following list describes different aspects of performance that were measured during the evaluation:<br />
<br />
* Frame rate: This indicates the number of frames that can be processed per second. <br />
* CPU utilization: This refers to the average percentage of CPU resources used by the pipeline.<br />
* GPU utilization: This refers to the average percentage of GPU resources used by the pipeline.<br />
<br />
To measure these aspects, the RidgeRun perf element and the NVIDIA tegrastats application were used. The results of these measurements can be found in the following subsections.<br />
<br />
<br />
<br />
== Algorithms Configurations ==<br />
<br />
The table below shows the results of testing different motion detection algorithms, blob detection methods, and processing formats.<br />
<br />
* The motion detection algorithm was changed with the property '''motion_detector::motion''', which can take two values: 'mog' or 'mog2'.<br />
* The blob detection method was changed with the property '''cuda-blob-detection''', which can take two values: true to use GPU-based blob detection or false to use CPU-based blob detection.<br />
* The processing format was changed with the property '''grayscale''', which can take two values: true to process the video in grayscale or false to process the video in RGBA.<br />
<br />
The rest of the configuration uses the default values: a noise reduction kernel size of 9 and a sample frequency of 1.<br />
<br />
=== 4K Input Resolution ===<br />
<br />
The measurements in this subsection were obtained from the following pipeline using a 4K input file:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! queue ! rrmotiondetectionbin motion_detector::motion=$MOTION cuda-blob-detection=$BLOB_METHOD grayscale=$PROCESS_FORMAT ! perf ! fakesink sync=true<br />
</pre><br />
<br />
The rrmotiondetectionbin configuration was changed via the corresponding element properties to determine its performance with different settings.<br />
{| class="wikitable" style="text-align: center;margin: auto; " <br />
! colspan="2" | Motion Detection Algorithm<br />
! colspan="2" | Blob Detection Method<br />
! colspan="2" | Processing Format<br />
! rowspan="2" | Framerate<br />
! rowspan="2" | CPU<br />
! rowspan="2" | GPU<br />
|-<br />
! mog<br />
! mog2<br />
! CPU<br />
! GPU<br />
! Grayscale<br />
! RGBA<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.91<br />
| 16.769<br />
| 88.397<br />
|-<br />
| x<br />
|<br />
| <br />
| x<br />
| x<br />
|<br />
| 22.098<br />
| 13.617<br />
| 79.529<br />
|-<br />
| <br />
| x<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.921<br />
| 15.963<br />
| 75.115<br />
|-<br />
| <br />
| x<br />
| <br />
| x<br />
| x<br />
|<br />
| 23.509<br />
| 13.938<br />
| 83.529<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| <br />
| x<br />
| 23.097<br />
| 12<br />
| 90.971<br />
|}<br />
<br />
=== 1080p Input Resolution ===<br />
<br />
The measurements in this subsection were obtained from the following pipeline using a 4K input file downscaled to 1080p:<br />
<br />
<pre><br />
gst-launch-1.0 filesrc location=/home/nvidia/street.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),width=1920,height=1080" ! queue ! rrmotiondetectionbin motion_detector::motion=$MOTION cuda-blob-detection=$BLOB_METHOD grayscale=$PROCESS_FORMAT ! perf print-cpu-load=true ! fakesink sync=true<br />
</pre><br />
<br />
The rrmotiondetectionbin configuration was changed via the corresponding element properties to determine its performance with different settings.<br />
{| class="wikitable" style="text-align: center; margin: auto;"<br />
! colspan="2" | Motion Detection Algorithm<br />
! colspan="2" | Blob Detection Method<br />
! colspan="2" | Processing Format<br />
! rowspan="2" | Framerate<br />
! rowspan="2" | CPU<br />
! rowspan="2" | GPU<br />
|-<br />
! mog<br />
! mog2<br />
! CPU<br />
! GPU<br />
! Grayscale<br />
! RGBA<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.927<br />
| 10.423<br />
| 48.391<br />
|-<br />
| x<br />
|<br />
| <br />
| x<br />
| x<br />
|<br />
| 29.921<br />
| 8.74<br />
| 51.807<br />
|-<br />
| <br />
| x<br />
| x<br />
|<br />
| x<br />
|<br />
| 29.925<br />
| 10<br />
| 29.615<br />
|-<br />
| <br />
| x<br />
| <br />
| x<br />
| x<br />
|<br />
| 29.921<br />
| 10.423<br />
| 40.629<br />
|-<br />
| x<br />
|<br />
| x<br />
|<br />
| <br />
| x<br />
| 29.928<br />
| 10<br />
| 51.08<br />
|}<br />
<br />
== Denoiser Kernel Size ==<br />
<br />
== Motion Sampling Frequency ==</div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=Template:GPU_Accelerated_Motion_Detector/TOC&diff=48272Template:GPU Accelerated Motion Detector/TOC2023-03-24T01:49:04Z<p>Mmontero: </p>
<hr />
<div>{{Sidebar<br />
| title = [[GPU Accelerated Motion Detector|<span style="color:#00008B; font-size:150%;"><u>GPU Accelerated </br>Motion Detector</u></span>]]<br />
| image = <center>{{Template:RidgeRunlogo}}</center><br />
| headingstyle = border-top: 2px solid; border-top-color: gray; font-size:larger; background-color:#63a3ff;<br />
| contentstyle = <br />
| contentclass = hlist<br />
| heading1 = [[GPU Accelerated Motion Detector/Overview|Overview]]<br />
| content1 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content1 = <br />
*[[GPU Accelerated Motion Detector/Overview/Architecture| Architecture]]<br />
*[[GPU Accelerated Motion Detector/Overview/Supported Platforms| Supported Platforms]]<br />
*[[GPU Accelerated Motion Detector/Overview/Capabilities| Capabilities]]<br />
}}<br />
<br />
| heading2 = [[GPU Accelerated Motion Detector/GStreamer Plugins|GStreamer Plugins]]<br />
| content2 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content2 =<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection| Motion Detection]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Noise Reduction | Noise Reduction]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/CPU Blob Detector | CPU Blob Detector]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/GPU Blob Detector | GPU Blob Detector ]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Overlay | Motion Overlay]]<br />
*[[GPU Accelerated Motion Detector/GStreamer Plugins/Motion Detection Bin | Motion Detection Bin]]<br />
}}<br />
<br />
| heading3 = [[GPU Accelerated Motion Detector/Getting Started|Getting Started]]<br />
| content3 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content3 =<br />
*[[GPU Accelerated Motion Detector/Getting Started/Dependencies| Dependencies]]<br />
*[[GPU Accelerated Motion Detector/Getting Started/Evaluating GPUMotionDetector | Evaluating <br />
GPUMotionDetector]]<br />
*[[GPU Accelerated Motion Detector/Getting Started/Building GPUMotionDetector|Building <br />
GPUMotionDetector]]<br />
}}<br />
<br />
| heading4 = [[GPU Accelerated Motion Detector/Examples|Examples]]<br />
| content4 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content4 = <br />
*[[GPU Accelerated Motion Detector/Examples/Jetson | Jetson]]<br />
}}<br />
<br />
| heading5 = [[GPU Accelerated Motion Detector/Performance|Performance]]<br />
| content5 = {{Sidebar |child=yes<br />
| headingstyle = border-top: 1px solid; border-top-color: gray; font-size:small;<br />
| contentclass = hlist<br />
| content5 = <br />
*[[GPU Accelerated Motion Detector/Performance/Xavier NX | Jetson Xavier NX]]<br />
}}<br />
| heading6 = [[GPU Accelerated Motion Detector/Troubleshooting | Troubleshooting]]<br />
<br />
| heading7 = [[GPU Accelerated Motion Detector/FAQ |FAQ]]<br />
<br />
| heading8 = [[GPU Accelerated Motion Detector/Contact_Us|<span style="color:#00008B;font-size:larger;">'''<u>Contact Us</u>'''</span>]]<br />
}}<br />
<br />
<noinclude><br />
[[Category:GPU Accelerated Motion Detector Templates]]<br />
[[Category:TOCs using Sidebar]]<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:Kinesiswebrtcbin_send_audio_only.jpg&diff=44302File:Kinesiswebrtcbin send audio only.jpg2022-10-20T23:39:35Z<p>Mmontero: Mmontero uploaded a new version of File:Kinesiswebrtcbin send audio only.jpg</p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=Jetson_Xavier_NX/Development/Building_the_Kernel_from_Source&diff=44173Jetson Xavier NX/Development/Building the Kernel from Source2022-10-18T15:20:27Z<p>Mmontero: /* Jetson Xavier NX Recovery Mode */</p>
<hr />
<div><noinclude><br />
{{JetsonXavierNX/Head|previous=Introduction/Getting_Started|next=GStreamer/Accelerated_Elements|keywords=jetson xavier nx kernel, DTB, build, compile, flashing, clone,clonning the image,compiling the kernel,downloading the kernel source,downloading the kernel,kernel source,source code|title=How to build NVIDIA Jetson Xavier NX kernel|description=This page provides the steps to compile,build and flashing the kernel,DTBs and to clone a sdcard image}}<br />
</noinclude><br />
<br />
<!--<br />
{{DISPLAYTITLE:Jetson Xavier NX - Building Kernel from Source|noerror}}<br />
--><br />
<br />
== Introduction to compiling NVIDIA Tegra Jetson Xavier NX source code L4T 32.4.3 ==<br />
<br />
This wiki page contains instructions to download and build the kernel source code for the Jetson Xavier NX. Several parts of this wiki are based on the document: [https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3243/index.html NVIDIA Jetson Linux Developer Guide 32.4.3 release].<br />
<br />
L4T 32.4.3 is used by [https://developer.nvidia.com/embedded/jetpack JetPack 4.4].<br />
<br />
== Build NVIDIA Jetson Xavier NX kernel source code ==<br />
<br />
==={{Download}} Download and install the Toolchain===<br />
<br />
NVIDIA recommends using the Linaro 7.3.1 2018.05 toolchain for L4T 32.4.3 ([https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3243/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fxavier_toolchain.html Jetson Linux Driver Package Toolchain]).<br />
<br />
Download the pre-built toolchain binaries: [http://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz] and place them under '''$HOME/l4t-gcc''', or alternatively execute in a console:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
mkdir -p $HOME/l4t-gcc<br />
cd $HOME/l4t-gcc<br />
wget http://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz<br />
</syntaxhighlight><br />
<br />
Execute the following commands to extract the toolchain:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
tar xf gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz<br />
</syntaxhighlight><br />
<br />
==={{Download}} Download the kernel sources for L4T 32.4.3===<br />
==== Manually downloading sources from NVIDIA Jetson Download Center ====<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
mkdir -p $HOME/l4t-sources/xavier-nx/<br />
cd $HOME/l4t-sources/xavier-nx<br />
wget https://developer.nvidia.com/embedded/L4T/r32_Release_v4.3/sources/T186/public_sources.tbz2 # be sure to download the correct sources, since on JP4.3 the only changes are between nano-tx1/tx2-xavier-nx<br />
</syntaxhighlight><br />
<br />
Execute the following commands to extract the kernel sources:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
tar -xvf public_sources.tbz2<br />
cd Linux_for_Tegra/source/public<br />
JETSON_XAVIER_NX_KERNEL_SOURCES=$(pwd)<br />
tar -xf kernel_src.tbz2<br />
</syntaxhighlight><br />
<br />
Apply corresponding patches (if any) and follow to the next section.<br />
<br />
==== Obtaining the Kernel Sources with Git ====<br />
You can use the "source_sync.sh" script under the NVIDIA Jetpack 4.4 "Linux_for_Tegra" installation directory.<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
# $NVIDIA_JP_4.4_INSTALL_DIR is the installation directory defined when you installed NVIDIA Jetpack with the sdkmanager tool.<br />
cd $NVIDIA_JP_4.4_INSTALL_DIR/JetPack_4.4_Linux_JETSON_XAVIER_NX_DEVKIT/Linux_for_Tegra/<br />
./source_sync.sh<br />
JETSON_XAVIER_NX_KERNEL_SOURCES=$NVIDIA_JP_4.4_INSTALL_DIR/JetPack_4.4_Linux_JETSON_XAVIER_NX_DEVKIT/Linux_for_Tegra/sources/<br />
</syntaxhighlight><br />
<br />
This will download the bootloader and kernel.<br />
<br />
When syncing, you'll be asked for a tag; use '''tegra-l4t-r32.4.3''' for both the kernel and U-Boot.<br />
<br />
Apply the corresponding patches (if any) and proceed to the next section.<br />
<br />
===Compile Jetson Xavier NX kernel and dtb ('''d'''evice '''t'''ree '''b'''lob)===<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
cd $JETSON_XAVIER_NX_KERNEL_SOURCES<br />
TOOLCHAIN_PREFIX=$HOME/l4t-gcc/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-<br />
TEGRA_KERNEL_OUT=$JETSON_XAVIER_NX_KERNEL_SOURCES/build<br />
KERNEL_MODULES_OUT=$JETSON_XAVIER_NX_KERNEL_SOURCES/modules<br />
<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} tegra_defconfig<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} menuconfig<br />
</syntaxhighlight><br />
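Note that '''CROSS_COMPILE''' takes a tool ''prefix'', not the path to gcc itself: make appends each tool name (gcc, ld, objcopy, and so on) to the value. A quick sketch of what will be invoked, assuming the toolchain layout from the previous section:<br />

```shell
# CROSS_COMPILE is a prefix; make expands it into the full tool names.
TOOLCHAIN_PREFIX=$HOME/l4t-gcc/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
echo "${TOOLCHAIN_PREFIX}gcc"   # the compiler make will invoke
echo "${TOOLCHAIN_PREFIX}ld"    # the linker make will invoke
```

This is also why the value must end with the trailing dash.<br />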
<br />
In the menuconfig interface, select the modules you want to compile. For example, libguvc requires a configuration like the following:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
-> Device Drivers<br />
-> USB support (USB_SUPPORT [=y]) <br />
-> USB Gadget Support (USB_GADGET [=y]) <br />
-> USB Gadget Drivers (<choice> [=m]) <br />
</syntaxhighlight><br />
<br />
Now compile the kernel image, dtbs and modules:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} -j$(nproc) Image<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} -j$(nproc) dtbs<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} -j$(nproc) modules<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra INSTALL_MOD_PATH=$KERNEL_MODULES_OUT modules_install<br />
</syntaxhighlight><br />
<br />
== Install L4T 32.4.3 kernel image on the Jetson Xavier NX ==<br />
<br />
This guide assumes that you already have JetPack 4.4 for the Jetson Xavier NX installed with the NVIDIA SDK Manager. These links contain details about how to [https://docs.nvidia.com/sdk-manager/download-run-sdkm/index.html install] the NVIDIA [https://developer.nvidia.com/nvidia-sdk-manager SDK Manager].<br />
<br />
'''$NVIDIA_JP_4.4_INSTALL_DIR''' is the installation directory defined when you installed NVIDIA Jetpack with the sdk-manager tool.<br />
<br />
Export the following environment variable:<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
export JETPACK_4_4_L4T=${NVIDIA_JP_4.4_INSTALL_DIR}/JetPack_4.4_Linux_JETSON_XAVIER_NX_DEVKIT/Linux_for_Tegra/<br />
</syntaxhighlight><br />
<br />
=== Copy kernel, device tree, and modules into JetPack 4.4 Jetson Xavier NX installation directory ===<br />
<br />
<source lang="bash"><br />
cd ${JETPACK_4_4_L4T}<br />
# OPTIONAL: Make a backup of the generic JP 4.4 kernel directory for easy "go-back" (execute this only once)<br />
cp -rfv kernel/ kernel-bk-orig<br />
# Copy kernel generated<br />
cp -rfv $JETSON_XAVIER_NX_KERNEL_SOURCES/build/arch/arm64/boot/Image kernel/<br />
# Copy device tree generated<br />
cp -rfv $JETSON_XAVIER_NX_KERNEL_SOURCES/build/arch/arm64/boot/dts/* kernel/dtb/<br />
# OPTIONAL: Make a backup of the generic JP 4.4 rootfs directory for easy "go-back" (execute this only once)<br />
cp -rf rootfs rootfs-bk-orig<br />
# Copy new modules<br />
sudo cp -arfv $JETSON_XAVIER_NX_KERNEL_SOURCES/modules/lib rootfs/<br />
</source><br />
<br />
=== Jetson Xavier NX Recovery Mode===<br />
* Ensure the device is powered off and the power adapter disconnected.<br />
* Verify the microSD Card is inserted in the Jetson Xavier NX [developer kit version] module.<br />
* Place a jumper across the Force Recovery Mode pins. These are pins 9 [GND] and 10 [FC REC] of the Button Header [J14].<br />
<br />
[[File:Jetson Xavier NX Recovery.jpeg|thumb|800px|center]]<br />
<br />
* Connect your host computer to the device's USB Micro-B connector.<br />
* Connect the power adapter to the Power Jack [J16].<br />
* The device will automatically power on in Force Recovery Mode.<br />
* Execute the "lsusb" command in a terminal on your host PC; the following output confirms that the Jetson is in recovery mode:<br />
** ''Bus 001 Device 010: ID 0955:7e19 NVidia Corp.''<br />
* Remove the jumper from the Force Recovery Mode pins.<br />
<br />
=== Flash full system (rootfs, kernel and dtb) ===<br />
Make sure your Jetson Xavier NX is in recovery mode.<br />
<br />
This requires NVIDIA's 'flash.sh' script:<br />
<br />
For '''Jetson Xavier NX P3668-0000''' (devkit for development; not for production use):<br />
<source lang="bash"><br />
# Flashes QSPI-NOR and microSD card<br />
cd ${JETPACK_4_4_L4T}<br />
sudo ./flash.sh jetson-xavier-nx-devkit mmcblk0p1<br />
</source><br />
<br />
For '''Jetson Xavier NX P3668-0001''' (For Development or production use):<br />
<source lang="bash"><br />
# Flashes QSPI-NOR and eMMC<br />
cd ${JETPACK_4_4_L4T}<br />
sudo ./flash.sh jetson-xavier-nx-devkit-emmc mmcblk0p1<br />
</source><br />
<br />
=== Flash DTB only ===<br />
Make sure your Jetson Xavier NX is in recovery mode.<br />
<br />
This requires NVIDIA's 'flash.sh' script:<br />
<br />
For '''Jetson Xavier NX P3668-0000''' (devkit for development; not for production use):<br />
<source lang="bash"><br />
# Flashes QSPI-NOR and microSD card<br />
cd ${JETPACK_4_4_L4T}<br />
sudo ./flash.sh -r -k kernel-dtb jetson-xavier-nx-devkit mmcblk0p1<br />
</source><br />
<br />
For '''Jetson Xavier NX P3668-0001''' (For Development or production use):<br />
<source lang="bash"><br />
# Flashes QSPI-NOR and eMMC<br />
cd ${JETPACK_4_4_L4T}<br />
sudo ./flash.sh -r -k kernel-dtb jetson-xavier-nx-devkit-emmc mmcblk0p1<br />
</source><br />
<br />
=== Flash Kernel Image only ===<br />
The image flashed to the kernel partition is actually a U‑Boot image. U‑Boot loads the Linux kernel from /boot/Image in the root file system.<br />
<br />
For this reason, you cannot update the Linux kernel image using the ‑k kernel switch. You may update /boot/Image by any of these means:<br />
* Modify /boot/extlinux/extlinux.conf to add a new boot entry.<br />
* Follow the instructions and example provided in /boot/extlinux/extlinux.conf. By this means you can always use cp or scp to replace /boot/Image with a custom-built kernel and launch it with U‑Boot.<br />
* Generate a backup of the actual "/boot/Image" file by renaming it (/boot/Image.BK_ORIG for example), and cp or scp your custom kernel Image file to your Jetson device "/boot" directory. When the board reboots, it will load the new Kernel Image. <br />
<br />
So, you just have to copy your custom kernel Image into the "/boot" directory of the Jetson Xavier NX rootfs partition.<br />
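The backup-and-replace sequence can be sketched as follows. It is rehearsed on a scratch directory created with mktemp so the commands are safe to run anywhere; on the Jetson, the directory would be /boot and the copied file your freshly built Image:<br />

```shell
# Rehearsal of the /boot/Image backup-and-replace step on a scratch directory.
BOOT=$(mktemp -d)                           # stands in for /boot on the Jetson
printf 'stock kernel' > "$BOOT/Image"       # stands in for the original Image
printf 'custom kernel' > /tmp/Image.custom  # stands in for your built Image

mv "$BOOT/Image" "$BOOT/Image.BK_ORIG"      # keep a way back to the stock kernel
cp /tmp/Image.custom "$BOOT/Image"          # install the custom kernel image
ls "$BOOT"
```

After a reboot, the board loads the replacement Image.<br />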
<br />
You can verify the change after the device boots by executing the "uname -a" command and checking the build date and time of the kernel image. You can also verify it with the "sha1sum" command.<br />
<br />
For example:<br />
<pre><br />
nvidia@nvidia-desktop:/boot$ uname -a<br />
Linux nvidia-desktop 4.9.140-tegra #1 SMP PREEMPT Fri Aug 7 19:48:20 CST 2020 aarch64 aarch64 aarch64 GNU/Linux<br />
<br />
nvidia@nvidia-desktop:/boot$ sha1sum Image<br />
4a27eaf20c2e6f8ac58d122027778fdcb8cbd3af Image<br />
</pre><br />
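The checksum comparison works because the Image built on the host and the copy on the target must hash to the same value. A minimal, self-contained illustration using a scratch file in place of the real Image:<br />

```shell
# Illustrate the sha1sum comparison with a scratch file standing in for Image.
printf 'kernel payload' > /tmp/Image.demo
HOST_SUM=$(sha1sum /tmp/Image.demo | awk '{print $1}')    # computed on the host
TARGET_SUM=$(sha1sum /tmp/Image.demo | awk '{print $1}')  # reported on the target
if [ "$HOST_SUM" = "$TARGET_SUM" ]; then
    echo "checksums match"
fi
```

If the sums differ, the copy did not complete or the wrong file was copied.<br />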
<br />
Remember to always copy the '''custom kernel modules''' build directory to the '''"/lib/modules/"''' directory on your Jetson device. Also, remember to flash your '''custom kernel DTB''' changes so that your custom kernel Image boots without problems.<br />
<br />
Please always verify that your custom kernel image suffix (4.9.140'''-tegra''') matches the modules directory name, as shown below:<br />
<pre><br />
nvidia@nvidia-desktop:~$ ls /lib/modules/<br />
4.9.140-tegra<br />
</pre><br />
<br />
<br />
<noinclude><br />
{{JetsonXavierNX/Foot|Introduction/Getting_Started|GStreamer/Accelerated_Elements}}<br />
</noinclude><br />
[[Category:JetsonXavierNX]][[Category:Jetson]]</div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=Jetson_Xavier_NX/Development/Building_the_Kernel_from_Source&diff=44172Jetson Xavier NX/Development/Building the Kernel from Source2022-10-18T15:18:26Z<p>Mmontero: /* Jetson Xavier NX Recovery Mode */</p>
<hr />
<div><noinclude><br />
{{JetsonXavierNX/Head|previous=Introduction/Getting_Started|next=GStreamer/Accelerated_Elements|keywords=jetson xavier nx kernel, DTB, build, compile, flashing, clone,clonning the image,compiling the kernel,downloading the kernel source,downloading the kernel,kernel source,source code|title=How to build NVIDIA Jetson Xavier NX kernel|description=This page provides the steps to compile,build and flashing the kernel,DTBs and to clone a sdcard image}}<br />
</noinclude><br />
<br />
<!--<br />
{{DISPLAYTITLE:Jetson Xavier NX - Building Kernel from Source|noerror}}<br />
--><br />
<br />
== Introduction to compiling NVIDIA Tegra Jetson Xavier NX source code L4T 32.4.3 ==<br />
<br />
This wiki page contains instructions to download and build the kernel source code for the Jetson Xavier NX. Several parts of this wiki are based on the document [https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3243/index.html NVIDIA Jetson Linux Developer Guide 32.4.3 release].<br />
<br />
L4T 32.4.3 is used by [https://developer.nvidia.com/embedded/jetpack JetPack 4.4].<br />
<br />
== Build NVIDIA Jetson Xavier NX kernel source code ==<br />
<br />
==={{Download}} Download and install the Toolchain===<br />
<br />
NVIDIA recommends using the Linaro 7.3.1 2018.05 toolchain for L4T 32.4.3 ([https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3243/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fxavier_toolchain.html Jetson Linux Driver Package Toolchain]).<br />
<br />
Download the pre-built toolchain binaries ([http://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz]) and place them under '''$HOME/l4t-gcc''', or alternatively execute the following in a console:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
mkdir -p $HOME/l4t-gcc<br />
cd $HOME/l4t-gcc<br />
wget http://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz<br />
</syntaxhighlight><br />
<br />
Execute the following commands to extract the toolchain:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
tar xf gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz<br />
</syntaxhighlight><br />
<br />
==={{Download}} Download the kernel sources for L4T 32.4.3===<br />
==== Manually downloading sources from NVIDIA Jetson Download Center ====<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
mkdir -p $HOME/l4t-sources/xavier-nx/<br />
cd $HOME/l4t-sources/xavier-nx<br />
wget https://developer.nvidia.com/embedded/L4T/r32_Release_v4.3/sources/T186/public_sources.tbz2 # make sure to download the correct sources; the packages differ between Nano/TX1 and TX2/Xavier NX<br />
</syntaxhighlight><br />
<br />
Execute the following commands to extract the kernel sources:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
tar -xvf public_sources.tbz2<br />
cd Linux_for_Tegra/source/public<br />
JETSON_XAVIER_NX_KERNEL_SOURCES=$(pwd)<br />
tar -xf kernel_src.tbz2<br />
</syntaxhighlight><br />
<br />
Apply the corresponding patches (if any) and proceed to the next section.<br />
<br />
==== Obtaining the Kernel Sources with Git ====<br />
You can use the "source_sync.sh" script under the NVIDIA JetPack 4.4 "Linux_for_Tegra" installation directory.<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
# $NVIDIA_JP_4.4_INSTALL_DIR is the installation directory defined when you installed NVIDIA Jetpack with the sdkmanager tool.<br />
cd $NVIDIA_JP_4.4_INSTALL_DIR/JetPack_4.4_Linux_JETSON_XAVIER_NX_DEVKIT/Linux_for_Tegra/<br />
./source_sync.sh<br />
JETSON_XAVIER_NX_KERNEL_SOURCES=$NVIDIA_JP_4.4_INSTALL_DIR/JetPack_4.4_Linux_JETSON_XAVIER_NX_DEVKIT/Linux_for_Tegra/sources/<br />
</syntaxhighlight><br />
<br />
This will download the bootloader and kernel.<br />
<br />
When syncing, you'll be asked for a tag; use '''tegra-l4t-r32.4.3''' for both the kernel and U-Boot.<br />
<br />
Apply the corresponding patches (if any) and proceed to the next section.<br />
<br />
===Compile Jetson Xavier NX kernel and dtb ('''d'''evice '''t'''ree '''b'''lob)===<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
cd $JETSON_XAVIER_NX_KERNEL_SOURCES<br />
TOOLCHAIN_PREFIX=$HOME/l4t-gcc/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-<br />
TEGRA_KERNEL_OUT=$JETSON_XAVIER_NX_KERNEL_SOURCES/build<br />
KERNEL_MODULES_OUT=$JETSON_XAVIER_NX_KERNEL_SOURCES/modules<br />
<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} tegra_defconfig<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} menuconfig<br />
</syntaxhighlight><br />
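Note that '''CROSS_COMPILE''' takes a tool ''prefix'', not the path to gcc itself: make appends each tool name (gcc, ld, objcopy, and so on) to the value. A quick sketch of what will be invoked, assuming the toolchain layout from the previous section:<br />

```shell
# CROSS_COMPILE is a prefix; make expands it into the full tool names.
TOOLCHAIN_PREFIX=$HOME/l4t-gcc/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
echo "${TOOLCHAIN_PREFIX}gcc"   # the compiler make will invoke
echo "${TOOLCHAIN_PREFIX}ld"    # the linker make will invoke
```

This is also why the value must end with the trailing dash.<br />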
<br />
In the menuconfig interface, select the modules you want to compile. For example, libguvc requires a configuration like the following:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
-> Device Drivers<br />
-> USB support (USB_SUPPORT [=y]) <br />
-> USB Gadget Support (USB_GADGET [=y]) <br />
-> USB Gadget Drivers (<choice> [=m]) <br />
</syntaxhighlight><br />
<br />
Now compile the kernel image, dtbs and modules:<br />
<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} -j$(nproc) Image<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} -j$(nproc) dtbs<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} -j$(nproc) modules<br />
make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra INSTALL_MOD_PATH=$KERNEL_MODULES_OUT modules_install<br />
</syntaxhighlight><br />
<br />
== Install L4T 32.4.3 kernel image on the Jetson Xavier NX ==<br />
<br />
This guide assumes that you already have JetPack 4.4 for the Jetson Xavier NX installed with the NVIDIA SDK Manager. These links contain details about how to [https://docs.nvidia.com/sdk-manager/download-run-sdkm/index.html install] the NVIDIA [https://developer.nvidia.com/nvidia-sdk-manager SDK Manager].<br />
<br />
'''$NVIDIA_JP_4.4_INSTALL_DIR''' is the installation directory defined when you installed NVIDIA Jetpack with the sdk-manager tool.<br />
<br />
Export the following environment variable:<br />
<syntaxhighlight lang="bash" style="background-color:cornsilk"><br />
export JETPACK_4_4_L4T=${NVIDIA_JP_4.4_INSTALL_DIR}/JetPack_4.4_Linux_JETSON_XAVIER_NX_DEVKIT/Linux_for_Tegra/<br />
</syntaxhighlight><br />
<br />
=== Copy kernel, device tree, and modules into JetPack 4.4 Jetson Xavier NX installation directory ===<br />
<br />
<source lang="bash"><br />
cd ${JETPACK_4_4_L4T}<br />
# OPTIONAL: Make a backup of the generic JP 4.4 kernel directory for easy "go-back" (execute this only once)<br />
cp -rfv kernel/ kernel-bk-orig<br />
# Copy kernel generated<br />
cp -rfv $JETSON_XAVIER_NX_KERNEL_SOURCES/build/arch/arm64/boot/Image kernel/<br />
# Copy device tree generated<br />
cp -rfv $JETSON_XAVIER_NX_KERNEL_SOURCES/build/arch/arm64/boot/dts/* kernel/dtb/<br />
# OPTIONAL: Make a backup of the generic JP 4.4 rootfs directory for easy "go-back" (execute this only once)<br />
cp -rf rootfs rootfs-bk-orig<br />
# Copy new modules<br />
sudo cp -arfv $JETSON_XAVIER_NX_KERNEL_SOURCES/modules/lib rootfs/<br />
</source><br />
<br />
=== Jetson Xavier NX Recovery Mode===<br />
* Ensure the device is powered off and the power adapter disconnected.<br />
* Verify the microSD Card is inserted in the Jetson Xavier NX [developer kit version] module.<br />
* Place a jumper across the Force Recovery Mode pins. These are pins 9 [GND] and 10 [FC REC] of the Button Header [J14].<br />
<br />
[[File:Jetson Xavier NX Recovery.jpeg|thumb|800px]]<br />
* Connect your host computer to the device's USB Micro-B connector.<br />
* Connect the power adapter to the Power Jack [J16].<br />
* The device will automatically power on in Force Recovery Mode.<br />
* Execute the "lsusb" command in a terminal on your host PC; the following output confirms that the Jetson is in recovery mode:<br />
** ''Bus 001 Device 010: ID 0955:7e19 NVidia Corp.''<br />
* Remove the jumper from the Force Recovery Mode pins.<br />
<br />
=== Flash full system (rootfs, kernel and dtb) ===<br />
Make sure your Jetson Xavier NX is in recovery mode.<br />
<br />
This requires NVIDIA's 'flash.sh' script:<br />
<br />
For '''Jetson Xavier NX P3668-0000''' (devkit for development; not for production use):<br />
<source lang="bash"><br />
# Flashes QSPI-NOR and microSD card<br />
cd ${JETPACK_4_4_L4T}<br />
sudo ./flash.sh jetson-xavier-nx-devkit mmcblk0p1<br />
</source><br />
<br />
For '''Jetson Xavier NX P3668-0001''' (For Development or production use):<br />
<source lang="bash"><br />
# Flashes QSPI-NOR and eMMC<br />
cd ${JETPACK_4_4_L4T}<br />
sudo ./flash.sh jetson-xavier-nx-devkit-emmc mmcblk0p1<br />
</source><br />
<br />
=== Flash DTB only ===<br />
Make sure your Jetson Xavier NX is in recovery mode.<br />
<br />
This requires NVIDIA's 'flash.sh' script:<br />
<br />
For '''Jetson Xavier NX P3668-0000''' (devkit for development; not for production use):<br />
<source lang="bash"><br />
# Flashes QSPI-NOR and microSD card<br />
cd ${JETPACK_4_4_L4T}<br />
sudo ./flash.sh -r -k kernel-dtb jetson-xavier-nx-devkit mmcblk0p1<br />
</source><br />
<br />
For '''Jetson Xavier NX P3668-0001''' (For Development or production use):<br />
<source lang="bash"><br />
# Flashes QSPI-NOR and eMMC<br />
cd ${JETPACK_4_4_L4T}<br />
sudo ./flash.sh -r -k kernel-dtb jetson-xavier-nx-devkit-emmc mmcblk0p1<br />
</source><br />
<br />
=== Flash Kernel Image only ===<br />
The image flashed to the kernel partition is actually a U‑Boot image. U‑Boot loads the Linux kernel from /boot/Image in the root file system.<br />
<br />
For this reason, you cannot update the Linux kernel image using the ‑k kernel switch. You may update /boot/Image by any of these means:<br />
* Modify /boot/extlinux/extlinux.conf to add a new boot entry.<br />
* Follow the instructions and example provided in /boot/extlinux/extlinux.conf. By this means you can always use cp or scp to replace /boot/Image with a custom-built kernel and launch it with U‑Boot.<br />
* Generate a backup of the actual "/boot/Image" file by renaming it (/boot/Image.BK_ORIG for example), and cp or scp your custom kernel Image file to your Jetson device "/boot" directory. When the board reboots, it will load the new Kernel Image. <br />
<br />
So, you just have to copy your custom kernel Image into the "/boot" directory of the Jetson Xavier NX rootfs partition.<br />
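The backup-and-replace sequence can be sketched as follows. It is rehearsed on a scratch directory created with mktemp so the commands are safe to run anywhere; on the Jetson, the directory would be /boot and the copied file your freshly built Image:<br />

```shell
# Rehearsal of the /boot/Image backup-and-replace step on a scratch directory.
BOOT=$(mktemp -d)                           # stands in for /boot on the Jetson
printf 'stock kernel' > "$BOOT/Image"       # stands in for the original Image
printf 'custom kernel' > /tmp/Image.custom  # stands in for your built Image

mv "$BOOT/Image" "$BOOT/Image.BK_ORIG"      # keep a way back to the stock kernel
cp /tmp/Image.custom "$BOOT/Image"          # install the custom kernel image
ls "$BOOT"
```

After a reboot, the board loads the replacement Image.<br />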
<br />
You can verify the change after the device boots by executing the "uname -a" command and checking the build date and time of the kernel image. You can also verify it with the "sha1sum" command.<br />
<br />
For example:<br />
<pre><br />
nvidia@nvidia-desktop:/boot$ uname -a<br />
Linux nvidia-desktop 4.9.140-tegra #1 SMP PREEMPT Fri Aug 7 19:48:20 CST 2020 aarch64 aarch64 aarch64 GNU/Linux<br />
<br />
nvidia@nvidia-desktop:/boot$ sha1sum Image<br />
4a27eaf20c2e6f8ac58d122027778fdcb8cbd3af Image<br />
</pre><br />
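The checksum comparison works because the Image built on the host and the copy on the target must hash to the same value. A minimal, self-contained illustration using a scratch file in place of the real Image:<br />

```shell
# Illustrate the sha1sum comparison with a scratch file standing in for Image.
printf 'kernel payload' > /tmp/Image.demo
HOST_SUM=$(sha1sum /tmp/Image.demo | awk '{print $1}')    # computed on the host
TARGET_SUM=$(sha1sum /tmp/Image.demo | awk '{print $1}')  # reported on the target
if [ "$HOST_SUM" = "$TARGET_SUM" ]; then
    echo "checksums match"
fi
```

If the sums differ, the copy did not complete or the wrong file was copied.<br />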
<br />
Remember to always copy the '''custom kernel modules''' build directory to the '''"/lib/modules/"''' directory on your Jetson device. Also, remember to flash your '''custom kernel DTB''' changes so that your custom kernel Image boots without problems.<br />
<br />
Please always verify that your custom kernel image suffix (4.9.140'''-tegra''') matches the modules directory name, as shown below:<br />
<pre><br />
nvidia@nvidia-desktop:~$ ls /lib/modules/<br />
4.9.140-tegra<br />
</pre><br />
<br />
<br />
<noinclude><br />
{{JetsonXavierNX/Foot|Introduction/Getting_Started|GStreamer/Accelerated_Elements}}<br />
</noinclude><br />
[[Category:JetsonXavierNX]][[Category:Jetson]]</div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:Jetson_Xavier_NX_Recovery.jpeg&diff=44171File:Jetson Xavier NX Recovery.jpeg2022-10-18T15:16:24Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GstKinesisWebRTC/Getting_the_code&diff=43940GstKinesisWebRTC/Getting the code2022-10-11T14:26:02Z<p>Mmontero: Created page with "<noinclude> {{GstKinesisWebRTC/Head|previous=|next=|}} </noinclude> You can purchase GstKinesisWebRTC, with full source code, from the [https://www.ridgerun.com/ RidgeRun Sto..."</p>
<hr />
<div><noinclude><br />
{{GstKinesisWebRTC/Head|previous=|next=|}}<br />
</noinclude><br />
<br />
You can purchase GstKinesisWebRTC, with full source code, from the [https://www.ridgerun.com/ RidgeRun Store] or using the Shopping Cart shown below.<br />
<br />
RidgeRun also makes a binary-only evaluation version available. Please refer to [[GstKinesisWebRTC/Contact_Us| Contact Us]] to get an evaluation binary.<br />
<br />
<br />
{{Review|add shopping cart|}}<br />
<br />
<noinclude><br />
{{GstKinesisWebRTC/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GstKinesisWebRTC/Getting_Started/AWS_Setup&diff=43939GstKinesisWebRTC/Getting Started/AWS Setup2022-10-11T14:12:48Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GstKinesisWebRTC/Head|previous=|next=|}}<br />
</noinclude><br />
<br />
== AWS Account ==<br />
Since the element uses Amazon Kinesis Video Streams with WebRTC, you need to set up an AWS account and obtain AWS security credentials before you can use it. The credentials are required to connect to the WebRTC signaling server. If you already have an account and credentials you can skip this step; otherwise, please check the following AWS documentation to set them up.<br />
*[https://docs.aws.amazon.com/kinesisvideostreams-webrtc-dg/latest/devguide/gs-account.html#gs-account-create| Sign Up for AWS ]<br />
*[https://docs.aws.amazon.com/kinesisvideostreams-webrtc-dg/latest/devguide/gs-account.html#gs-account-user| Create an Administrator IAM User ]<br />
*[https://docs.aws.amazon.com/kinesisvideostreams-webrtc-dg/latest/devguide/gs-account.html#gs-account-key| Create an AWS Account Key ]<br />
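Once you have credentials, one common way to expose them to applications built on the AWS SDKs (including the Kinesis WebRTC SDK samples) is through the standard AWS environment variables. The values below are placeholders; substitute the access key you created above:<br />

```shell
# Placeholder credentials; replace with the AWS Account Key created above.
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXEXAMPLE"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxEXAMPLE"
export AWS_DEFAULT_REGION="us-west-2"  # region where the signaling channel lives
env | grep '^AWS_'                     # confirm the variables are visible
```

Keep real keys out of scripts you commit to version control.<br />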
<br />
==Signaling Channel==<br />
You can use the Kinesis Video Streams console to manage and create your signaling channels prior to using kinesiswebrtcbin. However, if the configured signaling channel doesn't exist when kinesiswebrtcbin starts, it will create the required channel for you.<br />
<br />
To set up the signaling channel from the AWS Management Console, follow these steps:<br />
<br />
1. Open the AWS Management Console and navigate to the Kinesis Video Streams service (you can use the search box to find it):<br />
[[File:Aws-kinesis-video-stream-service.png|thumb|800px|center| Figure 1. AWS console, Kinesis Video Streams service ]]<br />
<br />
2. Select Signaling channels from the panel on the left of the screen.<br />
<br />
<br />
[[File:Aws-signaling-channels-option.png|thumb|800px|center| Figure 2. Signaling channels window ]]<br />
<br />
<br />
3. Verify that you are in the desired region, shown at the top right corner of the console. The channel will be created in the selected region.<br />
<br />
<br />
[[File:Aws-region.png|thumb|800px|center| Figure 3. AWS region selection ]]<br />
<br />
<br />
4. Select the top-right orange button Create signaling channel and you will get a setup window (see the figure below). Set the desired channel name and select the bottom-right orange button Create signaling channel to confirm the channel creation.<br />
<br />
<br />
[[File:Aws-new-signaling-channel.png|thumb|800px|center| Figure 4. AWS create a new signaling channel ]]<br />
<br />
<br />
<br />
5. You are ready; you now have a new signaling channel.<br />
<br />
<br />
[[File:Aws-signaling-channel-info.png|thumb|800px|center| Figure 5. AWS signaling channel information]]<br />
<br />
<noinclude><br />
{{GstKinesisWebRTC/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:Aws-signaling-channel-info.png&diff=43938File:Aws-signaling-channel-info.png2022-10-11T14:11:51Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:Aws-new-signaling-channel.png&diff=43937File:Aws-new-signaling-channel.png2022-10-11T14:10:43Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:Aws-region.png&diff=43936File:Aws-region.png2022-10-11T14:08:08Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:Aws-signaling-channels-option.png&diff=43935File:Aws-signaling-channels-option.png2022-10-11T14:07:01Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=File:Aws-kinesis-video-stream-service.png&diff=43934File:Aws-kinesis-video-stream-service.png2022-10-11T14:02:40Z<p>Mmontero: </p>
<hr />
<div></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GstKinesisWebRTC/Building_GstKinesisWebRTC/Verify&diff=43899GstKinesisWebRTC/Building GstKinesisWebRTC/Verify2022-10-10T12:48:43Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GstKinesisWebRTC/Head|previous=|next=|}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE:GstKinesisWebRTC: Verify|noerror}}<br />
<br />
The plugin installation can be verified by running:<br />
<br />
<syntaxhighlight lang='bash'><br />
gst-inspect-1.0 kinesiswebrtcbin<br />
</syntaxhighlight><br />
<br />
<br />
You should get an output like the following:<br />
<br />
<syntaxhighlight lang='bash'><br />
<br />
Factory Details:<br />
Rank none (0)<br />
Long-name Kinesis WebRTC Bin<br />
Klass Filter/Network/WebRTC<br />
Description Bin to handle Amazon Kinesis WebRTC connections<br />
Author Melissa Montero <melissa.montero@ridgerun.com><br />
<br />
Plugin Details:<br />
Name kinesiswebrtc<br />
Description Kinesis WebRTC plugin<br />
Filename /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstkinesiswebrtc.so<br />
Version 0.0.1<br />
License Proprietary<br />
Source module gst-kinesis-webrtc<br />
Binary package gst-kinesis-webrtc source release<br />
Origin URL www.ridgerun.com<br />
<br />
GObject<br />
+----GInitiallyUnowned<br />
+----GstObject<br />
+----GstElement<br />
+----GstBin<br />
+----GstKinesisWebrtc<br />
<br />
Implemented Interfaces:<br />
GstChildProxy<br />
<br />
Pad Templates:<br />
SRC template: 'audiosrc_%u'<br />
Availability: Sometimes<br />
Capabilities:<br />
audio/x-opus<br />
<br />
SRC template: 'videosrc_%u'<br />
Availability: Sometimes<br />
Capabilities:<br />
video/x-vp8<br />
profile: { (string)0, (string)1, (string)2, (string)3 }<br />
<br />
SINK template: 'audiosink'<br />
Availability: On request<br />
Capabilities:<br />
audio/x-opus<br />
<br />
SINK template: 'videosink'<br />
Availability: On request<br />
Capabilities:<br />
video/x-vp8<br />
profile: { (string)0, (string)1, (string)2, (string)3 }<br />
<br />
Element has no clocking capabilities.<br />
Element has no URI handling capabilities.<br />
<br />
Pads:<br />
none<br />
<br />
Element Properties:<br />
async-handling : The bin will handle Asynchronous state changes<br />
flags: readable, writable<br />
Boolean. Default: false<br />
ca-certificate : Path to SSL CA certificate<br />
flags: readable, writable, changeable only in NULL or READY state<br />
String. Default: null<br />
channel : Name of the signaling channel<br />
flags: readable, writable, changeable only in NULL or READY state<br />
String. Default: null<br />
message-forward : Forwards all children messages<br />
flags: readable, writable<br />
Boolean. Default: false<br />
name : The name of the object<br />
flags: readable, writable<br />
String. Default: "kinesiswebrtc0"<br />
parent : The parent of the object<br />
flags: readable, writable<br />
Object of type "GstObject"<br />
region : AWS region in which the signaling channel will be opened<br />
flags: readable, writable, changeable only in NULL or READY state<br />
String. Default: "us-west-2"<br />
<br />
Element Signals:<br />
"pad-added" : void user_function (GstElement* object,<br />
GstPad* arg0,<br />
gpointer user_data);<br />
"pad-removed" : void user_function (GstElement* object,<br />
GstPad* arg0,<br />
gpointer user_data);<br />
"no-more-pads" : void user_function (GstElement* object,<br />
gpointer user_data);<br />
"peer-connected" : void user_function (GstElement* object,<br />
gchararray arg0,<br />
gpointer user_data);<br />
"peer-disconnected" : void user_function (GstElement* object,<br />
gchararray arg0,<br />
gpointer user_data);<br />
<br />
</syntaxhighlight><br />
<br />
<noinclude><br />
{{GstKinesisWebRTC/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GstKinesisWebRTC/Building_GstKinesisWebRTC/Dependencies&diff=43898GstKinesisWebRTC/Building GstKinesisWebRTC/Dependencies2022-10-10T12:48:10Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GstKinesisWebRTC/Head|previous=|next=|}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE:GstKinesisWebRTC: Dependencies|noerror}}<br />
<br />
=== GStreamer ===<br />
The project needs GStreamer core, GStreamer plugins base, and GStreamer app. You can install them with:<br />
<syntaxhighlight lang=bash><br />
sudo apt-get install libgstreamer1.0-0 libgstreamer-plugins-base1.0-0 gstreamer1.0-plugins-base-apps<br />
</syntaxhighlight><br />
<br />
=== OpenSSL ===<br />
Can be installed with:<br />
<syntaxhighlight lang=bash><br />
sudo apt install libssl-dev<br />
</syntaxhighlight><br />
<br />
=== Quilt ===<br />
Can be installed with:<br />
<pre><br />
sudo apt install quilt<br />
</pre><br />
<br />
=== Cmake ===<br />
Can be installed with:<br />
<pre><br />
sudo apt install cmake<br />
</pre><br />
<br />
=== Websockets ===<br />
libwebsockets must be built from source, since the required version is newer than the one packaged by Ubuntu. Follow these steps:<br />
<br />
* Download the code:<br />
<syntaxhighlight lang=bash><br />
git clone https://github.com/warmcat/libwebsockets.git<br />
cd libwebsockets<br />
git checkout v4.2.2<br />
</syntaxhighlight><br />
<br />
* Apply the patches included in amazon-kinesis-video-streams-webrtc-sdk-c (this assumes the SDK repository, cloned in the Amazon Kinesis Video Streams C WebRTC SDK section below, is checked out next to libwebsockets):<br />
<syntaxhighlight lang=bash><br />
mkdir patches<br />
cp ../amazon-kinesis-video-streams-webrtc-sdk-c/CMake/Dependencies/libwebsockets-old-gcc-fix-cast-cmakelists.patch patches/<br />
cp ../amazon-kinesis-video-streams-webrtc-sdk-c/CMake/Dependencies/libwebsockets-leak-pipe-fix.patch patches/<br />
echo libwebsockets-old-gcc-fix-cast-cmakelists.patch >> patches/series<br />
echo libwebsockets-leak-pipe-fix.patch >> patches/series<br />
quilt push -a<br />
</syntaxhighlight><br />
<br />
* Configure<br />
<syntaxhighlight lang=bash><br />
mkdir build <br />
cd build<br />
cmake .. -DCMAKE_INSTALL_PREFIX=/usr<br />
</syntaxhighlight><br />
<br />
* Build<br />
<syntaxhighlight lang=bash><br />
make<br />
</syntaxhighlight><br />
<br />
* Install<br />
<syntaxhighlight lang=bash><br />
sudo make install<br />
</syntaxhighlight><br />
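To confirm that the source build took effect and that the build system is not still picking up an older packaged copy, you can query the installed version. This assumes pkg-config is available and that libwebsockets installed its pkg-config file under the /usr prefix chosen above:<br />
<syntaxhighlight lang=bash><br />
# Expect this to report 4.2.2, matching the tag checked out above.<br />
pkg-config --modversion libwebsockets<br />
</syntaxhighlight><br />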
<br />
=== SRTP ===<br />
Can be installed with:<br />
<syntaxhighlight lang=bash><br />
sudo apt install libsrtp2-dev<br />
</syntaxhighlight><br />
<br />
=== usrsctp ===<br />
For x86, it can be installed with:<br />
<syntaxhighlight lang=bash><br />
sudo apt install libusrsctp-dev<br />
</syntaxhighlight><br />
<br />
For Jetson Nano, however, it is not available in the repositories, so you need to build it from source as follows:<br />
<syntaxhighlight lang=bash><br />
git clone https://github.com/sctplab/usrsctp.git<br />
cd usrsctp/<br />
git checkout 1ade45cbadfd19298d2c47dc538962d4425ad2dd<br />
mkdir build<br />
cd build/<br />
cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_C_FLAGS=-fPIC -Dsctp_werror=0<br />
make<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
=== LibkvsCommonLws ===<br />
This is a custom Amazon library that is part of the Amazon Kinesis Video Streams C Producer project. To build it, follow these instructions:<br />
<br />
* Download the code <br />
<syntaxhighlight lang=bash><br />
git clone https://github.com/awslabs/amazon-kinesis-video-streams-producer-c.git<br />
cd amazon-kinesis-video-streams-producer-c<br />
git checkout c7fce9e06021452ff3c42dc70c8360606b22ad53<br />
</syntaxhighlight><br />
<br />
* Configure<br />
<syntaxhighlight lang=bash><br />
mkdir build<br />
cd build<br />
cmake -DCMAKE_INSTALL_PREFIX=/usr .. -DBUILD_DEPENDENCIES=OFF -DBUILD_COMMON_LWS=ON -DBUILD_COMMON_CURL=OFF<br />
</syntaxhighlight><br />
<br />
* Build<br />
<syntaxhighlight lang=bash><br />
make<br />
</syntaxhighlight><br />
<br />
* Install<br />
<syntaxhighlight lang=bash><br />
sudo make install<br />
</syntaxhighlight><br />
<br />
=== Amazon Kinesis Video Streams C WebRTC SDK ===<br />
<br />
The code is stored in a GitHub repository and can be downloaded with the following command:<br />
<br />
<syntaxhighlight lang=bash><br />
git clone https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c.git<br />
</syntaxhighlight><br />
<br />
These instructions were run for version v1.7.4. To build the code, run the following commands:<br />
<br />
<syntaxhighlight lang=bash><br />
cd amazon-kinesis-video-streams-webrtc-sdk-c<br />
mkdir build<br />
cd build<br />
cmake -DCMAKE_INSTALL_PREFIX=/usr .. -DBUILD_DEPENDENCIES=OFF<br />
make<br />
sudo make install<br />
</syntaxhighlight><br />
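Since several of the libraries above are built from source and installed into /usr, it is a good idea to refresh the dynamic linker cache after installing, so that the newly installed shared libraries are found at runtime:<br />
<syntaxhighlight lang=bash><br />
# Rebuild the shared library cache after installing into /usr.<br />
sudo ldconfig<br />
</syntaxhighlight><br />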
<br />
<noinclude><br />
{{GstKinesisWebRTC/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GstKinesisWebRTC/Building_GstKinesisWebRTC/Building_and_Installing&diff=43897GstKinesisWebRTC/Building GstKinesisWebRTC/Building and Installing2022-10-10T12:47:34Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GstKinesisWebRTC/Head|previous=|next=|}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE:GstKinesisWebRTC: Building and Installing|noerror}}<br />
<br />
You will need Meson; if you don't have it already, you can use the following commands to install it.<br />
<br />
Download meson via pip installer:<br />
<br />
<syntaxhighlight lang='bash'><br />
sudo apt install python3-pip ninja-build<br />
pip3 install --user meson<br />
#After installation you may need to export meson to the PATH variable<br />
export PATH="$HOME/.local/bin:$PATH" #Add this to the .bashrc file for permanent setup<br />
</syntaxhighlight><br />
<br />
Please refer to the Getting the Code page; RidgeRun will provide the full source version of the plugin once you place the order.<br />
Check out the latest tag and run the commands below:<br />
<br />
<syntaxhighlight lang='bash'><br />
cd gst-kinesis-webrtc<br />
meson build --prefix=/usr<br />
ninja -C build<br />
sudo ninja -C build install<br />
</syntaxhighlight><br />
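After installation you can quickly check that GStreamer registered the new plugin. The plugin name used below is an assumption based on the project name; see the Verify page for the exact plugin and element names:<br />
<syntaxhighlight lang=bash><br />
# List the plugin's elements; replace kinesiswebrtc with the actual plugin name if it differs.<br />
gst-inspect-1.0 kinesiswebrtc<br />
</syntaxhighlight><br />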
<br />
<br />
<noinclude><br />
{{GstKinesisWebRTC/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=Template:GstKinesisWebRTC/Head&diff=43896Template:GstKinesisWebRTC/Head2022-10-10T12:46:19Z<p>Mmontero: </p>
<hr />
<div>{{UnderConstruction}}<br />
<br />
<includeonly><br />
{{#seo:<br />
|title={{{title|GstKinesisWebRTC A GStreamer Amazon Kinesis WebRTC Plugin - {{SUBPAGENAME}}}}}<br />
|titlemode=replace<br />
|keywords=GStreamer, NVIDIA, Jetson, TX1, TX2, Jetson AGX Xavier, Xavier, AI, Deep Learning, Jetson, TX1, TX2, Jetson TX1, Jetson TX2, Jetson Xavier, NVIDIA Jetson Xavier, NVIDIA Jetson Orin, Jetson Orin, Orin, NVIDIA Orin, NVIDIA Jetson AGX Orin, Jetson AGX Orin, Deep Learning, i.MX8, I.MX8, iMX8, NXP, i.MX8 Deep Learning, i.MX 8M, i.MX 8M Plus, i.MX Machine Learning, i.MX6, i.MX, iMX, GstKinesisWebRTC, WebRTC, Amazon, Kinesis, Amazon Kinesis, GStreamer WebRTC Plugin, WebRTC Plugin,{{{keywords|}}}<br />
|description={{{description|This Wiki guide explains more in detail about the GstKinesisWebRTC A GStreamer Amazon Kinesis WebRTC Plugin.}}}<br />
}}<br />
</includeonly><br />
<br />
{| width=100% cellspacing=0 cellpadding=2 " class="noprint"<br />
| width=33% bgcolor=#86b3d1 | {{#if:{{{previous|}}}|[[GstKinesisWebRTC A GStreamer Amazon Kinesis WebRTC Plugin/{{{previous}}}|Previous: {{{previous}}}]]|&nbsp;}}<br />
| width=33% bgcolor=#86b3d1 align=center | [[GstKinesisWebRTC A GStreamer Amazon Kinesis WebRTC Plugin|Index]]<br />
| width=33% bgcolor=#86b3d1 align=right | {{#if:{{{next|}}}|[[GstKinesisWebRTC A GStreamer Amazon Kinesis WebRTC Plugin/{{{next}}}|Next: {{{next}}}]]|&nbsp;}}<br />
|}<br />
<br />
{{DISPLAYTITLE:{{{title|GstKinesisWebRTC: {{#replace:{{#titleparts:{{FULLPAGENAME}}||2}}|/|&#x20;-&#x20;}}}}}}}<br />
{{GstKinesisWebRTC/TOC}}<br />
<br />
<noinclude><br />
[[Category:GstKinesisWebRTC Templates]]<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GstKinesisWebRTC/Building_GstKinesisWebRTC&diff=43893GstKinesisWebRTC/Building GstKinesisWebRTC2022-10-10T12:42:18Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GstKinesisWebRTC/Head|previous=|next=|}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE:Building GstKinesisWebRTC|noerror}}<br />
<br />
This section includes the instructions to build the plugin and its dependencies.<br />
<br />
*[[GstKinesisWebRTC/Building GstKinesisWebRTC/Dependencies|Dependencies]] provides the list of the plugin's dependencies and how to install them.<br />
*[[GstKinesisWebRTC/Building GstKinesisWebRTC/Building and Installing|Building and Installing]] includes the steps to build and install the plugin in your system.<br />
*[[GstKinesisWebRTC/Building GstKinesisWebRTC/Verify|Verify]] shows you how to verify that the plugin was installed correctly.<br />
<br />
<noinclude><br />
{{GstKinesisWebRTC/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GstKinesisWebRTC/Description/Signals&diff=43892GstKinesisWebRTC/Description/Signals2022-10-10T12:41:50Z<p>Mmontero: </p>
<hr />
<div><br />
<noinclude><br />
{{GstKinesisWebRTC/Head|previous=|next=|}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE:GstKinesisWebRTC Signals|noerror}}<br />
<br />
*'''peer-connected''': triggered when a new peer connection is established.<br />
<pre><br />
void peer_connected (GstElement* object, gchar* peer_id, gpointer user_data);<br />
</pre><br />
<br />
<br />
*'''peer-disconnected''': triggered when a peer disconnects.<br />
<pre><br />
void user_function (GstElement* object, gchar * peer_id, gpointer user_data);<br />
</pre><br />
<br />
<br />
*'''access-key''': triggered when the element starts execution to obtain the AWS access key ID, if it was not provided by other means. The handler must return the string containing your access key.<br />
<pre><br />
gchararray user_function (GstElement* object, gpointer user_data);<br />
</pre><br />
<br />
<br />
*'''secret-key''': triggered when the element starts execution to obtain the AWS secret key, if it was not provided by other means. The handler must return the string containing your secret key.<br />
<pre><br />
gchararray user_function (GstElement* object, gpointer user_data);<br />
</pre><br />
<br />
<br />
<noinclude><br />
{{GstKinesisWebRTC/Foot||}}<br />
</noinclude></div>Mmonterohttps://developer.ridgerun.com/wiki/index.php?title=GstKinesisWebRTC/Description/Properties&diff=43891GstKinesisWebRTC/Description/Properties2022-10-10T12:41:16Z<p>Mmontero: </p>
<hr />
<div><noinclude><br />
{{GstKinesisWebRTC/Head|previous=|next=|}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE:GstKinesisWebRTC Properties|noerror}}<br />
<br />
<br />
*'''ca-certificate''': string with the path to the SSL CA certificate.<br />
<br />
<br />
*'''channel''': string with the name of the signaling channel. It must be set before playback starts; if the channel is not set by the READY to PAUSED state change, the element will fail.<br />
<br />
<br />
*'''region''': string with the AWS region in which the signaling channel will be opened. If set after the READY to PAUSED transition, it will be ignored and the default us-west-2 will be used.<br />
<br />
<br />
*'''access-key''': string with the AWS access key ID. This is one of the options for setting the AWS credentials; to take effect, it must be set before the READY to PAUSED transition.<br />
<br />
<br />
*'''secret-key''': string with the AWS secret key. This is one of the options for setting the AWS credentials; to take effect, it must be set before the READY to PAUSED transition.<br />
<br />
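As an illustration of how these properties fit together, the sketch below sets the channel, region, and credentials from the command line. The element name kinesiswebrtcbin and the upstream pipeline are placeholders, not the element's actual name or supported caps; check the plugin documentation for the real element name and formats:<br />
<syntaxhighlight lang=bash><br />
# Hypothetical example: stream a test source through the Kinesis WebRTC element.<br />
# Credentials are read from environment variables instead of being hard-coded.<br />
gst-launch-1.0 videotestsrc ! x264enc ! kinesiswebrtcbin \<br />
    channel=my-signaling-channel region=us-west-2 \<br />
    access-key="$AWS_ACCESS_KEY_ID" secret-key="$AWS_SECRET_ACCESS_KEY"<br />
</syntaxhighlight><br />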
<noinclude><br />
{{GstKinesisWebRTC/Foot||}}<br />
</noinclude></div>Mmontero