https://developer.ridgerun.com/wiki/api.php?action=feedcontributions&user=Spalli&feedformat=atomRidgeRun Developer Wiki - User contributions [en]2024-03-28T17:49:36ZUser contributionsMediaWiki 1.40.0https://developer.ridgerun.com/wiki/index.php?title=NVIDIA_GTC_2024_360_VR_Demo/GTC_Demo_Description&diff=53771NVIDIA GTC 2024 360 VR Demo/GTC Demo Description2024-03-28T16:42:05Z<p>Spalli: /* Components */</p>
<hr />
<div><noinclude><br />
{{NVIDIA GTC 2024 360 VR Demo/Head|previous=|next=Partnerships|metakeywords=}}<br />
</noinclude><br />
<br />
{{DISPLAYTITLE:NVIDIA GTC 2024 360 VR demo description|noerror}}<br />
<br />
Understanding this demo for GTC 2024 requires familiarity with the key components enabling 360-degree video control via a web-based graphical user interface (GUI) and viewing through a virtual reality (VR) headset.<br />
<br />
== Components ==<br />
<br />
This demo is composed of the elements shown in the chart below. <br />
<br><br />
<br><br />
[[File:GTC STRUCTURE.png.png|800x447px|thumb|center|Structure of the 360 video demo]]<br />
<br><br />
List of components:<br />
* 2 fisheye lenses with a field of view of 185 degrees or more<br />
* 2 camera modules from Framos<br />
* Carrier board from Connect Tech<br />
* Jetson AGX Orin module<br />
* Docker container<br />
* Web page<br />
* VR headset<br />
* RidgeRun software products<br />
* Python demo<br />
<br />
=== Fish Eye Lenses ===<br />
<br />
For the fisheye lenses, the demo uses the 185° HFOV lens from Framos. <br />
[https://www.framos.com/en/products/fsmgo-with-imx676-sensor-and-185-hfov-lens-27357 More information]<br />
<br />
=== Camera Modules ===<br />
<br />
Framos developed the FSM:GO camera modules with the IMX676 sensor, providing a resolution of 3552 x 3556 at up to 64 FPS. <br />
<br />
[https://www.framos.com/en/products/fsmgo-with-imx676-sensor-27363 More information]<br />
<br />
=== Carrier Board ===<br />
<br />
Connect Tech developed the Anvil carrier board, built to endure the demands of the most computationally intensive AI applications using the NVIDIA® Jetson AGX Orin™ and AGX Orin™ Industrial modules.<br />
<br />
[https://connecttech.com/product/anvil-embedded-system-with-nvidia-jetson-agx-orin/ More Information]<br />
<br />
=== Jetson AGX Orin Module ===<br />
<br />
The NVIDIA® Jetson AGX Orin™ module delivers up to 275 trillion operations per second (TOPS) for multiple concurrent AI inference pipelines, with 64 GB of memory and more than 8X the performance of Jetson AGX Xavier™.<br />
<br />
[https://www.arrow.com/en/products/900-13701-0050-000/nvidia More information]<br />
<br />
=== Driver for GMSL Interface ===<br />
<br />
GMSL (Gigabit Multimedia Serial Link) is an asymmetric full-duplex SerDes technology that simultaneously carries high-rate downlink data and lower-rate uplink data. It conveys power, bidirectional control data, Ethernet, audio, and multiple unidirectional video streams over a single coaxial or shielded twisted-pair cable, and supports both unprotected and HDCP-encrypted video transport.<br />
<br />
=== Docker Container === <br />
<br />
RidgeRun built this container to bundle the software dependencies, products, and applications required to run the demo. The demo uses GStreamer with Python to create and control the pipelines from the web page. You can find more information in the RidgeRun wiki guide on [[Docker_Images_with_Demos_and_Products_Evaluation_Versions | Docker Images with Demos and Products Evaluation Versions]]. <br />
<br />
For this demo, the container includes the demo repository along with its required dependencies, as shown in the image below. <br />
<br />
[[File:Docker dependencies.png|641x591px|thumb|center|Docker dependencies required for the demo.]]<br />
<br />
=== Generative AI ===<br />
<br />
Generative AI agent developed by [https://www.jetson-ai-lab.com/tutorial_llava.html NVIDIA]. This agent provides information about the current snapshot taken by the demo.<br />
<br />
'''Important''':<br />
* On the AGX Orin used in the demo, an NVMe SSD was installed to provide more space for the Jetson AI Lab containers.<br />
* The Gen AI agent must run inside the container provided by NVIDIA; the instructions below show how to run everything.<br />
* The demo repository must be placed inside the mount point of the container. On the demo's AGX Orin it is mounted at ''/orin_ssd/jetson_containers/data/''.<br />
* Instructions to download jetson containers can be found [https://www.jetson-ai-lab.com/tutorial_llava.html here].<br />
* Github repository with Gen AI reference samples [https://github.com/dusty-nv/jetson-containers here]. The Llava reference code we are using is [https://github.com/dusty-nv/jetson-containers/blob/master/packages/llm/llava/benchmark.py this one].<br />
<br />
Instructions to run:<br />
* Make sure your NVME is mounted in '''/orin_ssd'''<br />
* After the board boots up, it is important to change the Docker images directory as follows (on the GTC AGX Orin, the ''/orin_ssd'' directory contains a script with these commands called '''change_docker_mount.sh'''):<br />
<pre><br />
sudo mount --rbind /orin_ssd/docker /var/lib/docker<br />
sudo systemctl stop docker<br />
sudo systemctl start docker<br />
</pre><br />
* Go to the NVME mount point:<br />
<pre><br />
cd /orin_ssd<br />
</pre><br />
<br />
=== WebPage ===<br />
<br />
The web page provides the controls for the demo and a thumbnail showing a snapshot of the video sent to the VR headset. It can be opened in any browser. <br />
<br />
=== VR Headset ===<br />
<br />
The Meta Quest 2 displays 360° video at a resolution of 20 pixels per degree on a fast-switch LCD panel with 1832 x 1920 pixels per eye. <br />
<br />
[https://www.meta.com/quest/products/quest-2/ More information]<br />
<br />
<noinclude><br />
{{NVIDIA GTC 2024 360 VR Demo/Foot||Partnerships}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Contact_Us&diff=53760Spherical Video PTZ/Contact Us2024-03-27T18:58:08Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=Performance/Jetson AGX Xavier|next=|metakeywords=}}<br />
</noinclude><br />
<br><br />
<br><br />
<br />
{|<br />
{{WikiGuideConatctUs}}<br />
|}<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|Performance/Jetson AGX Xavier|}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Performance/Jetson_AGX_Xavier&diff=53759Spherical Video PTZ/Performance/Jetson AGX Xavier2024-03-27T18:57:02Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=Contact_Us|metakeywords=}}<br />
</noinclude><br />
<br />
<br />
== Benchmark environment ==<br />
<br />
The measurements are taken considering the following criteria:<br />
<br />
* Average behaviour: measurements taken using typical image processing pipelines.<br />
<br />
== Benchmarking ==<br />
<br />
'''Instruments:'''<br />
* ''GPU'': Jtop<br />
* ''CPU'': RidgeRun Profiler<br />
* ''RAM'': RidgeRun Profiler<br />
* ''Framerate'': GstShark<br />
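As a rough aid for post-processing these measurements, the GstShark <code>framerate</code> tracer emits periodic log lines containing the measured FPS. The Python sketch below averages those values; note that the regex and the sample lines are assumptions about the tracer's log format, which may vary between GstShark versions.

```python
import re
from statistics import mean

# Matches values such as "fps=(uint)59" in tracer output. Both this pattern
# and the sample lines below are assumptions, not verbatim GstShark output.
FPS_RE = re.compile(r"fps=\(uint\)(\d+)")

def average_fps(log_lines):
    """Return the mean FPS found across the given tracer log lines."""
    values = [int(m.group(1)) for line in log_lines for m in FPS_RE.finditer(line)]
    return mean(values) if values else 0.0

sample = [
    "framerate, pad=(string)fakesink0_sink, fps=(uint)59;",
    "framerate, pad=(string)fakesink0_sink, fps=(uint)61;",
]
print(average_fps(sample))  # mean of the sampled values
```
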
<br />
'''Pipelines:'''<br />
<br />
''Processing time without transformations (2000x1000):''<br />
<syntaxhighlight lang=bash><br />
GST_DEBUG="GST_TRACER:7" GST_TRACERS="proctime" gst-launch-1.0 videotestsrc is-live=true num-buffers=200 ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! rrpanoramaptz ! "video/x-raw,width=2000,height=1000" ! fakesink<br />
</syntaxhighlight><br />
[[File:1-perf.png|thumbnail|center|640px|Element performance]]<br />
<br />
''Processing time with transformations (2000x1000):''<br />
<syntaxhighlight lang=bash><br />
GST_DEBUG="GST_TRACER:7" GST_TRACERS="proctime" gst-launch-1.0 videotestsrc is-live=true num-buffers=200 ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! rrpanoramaptz zoom=2 pan=20 tilt=90 ! "video/x-raw,width=2000,height=1000" ! fakesink<br />
</syntaxhighlight><br />
[[File:1_5-perf.png|thumbnail|center|640px|Element performance]]<br />
<br />
''Framerate (2000x1000):''<br />
<syntaxhighlight lang=bash><br />
GST_DEBUG="GST_TRACER:7" GST_TRACERS="framerate" gst-launch-1.0 videotestsrc is-live=true num-buffers=300 ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! rrpanoramaptz ! "video/x-raw,width=2000,height=1000,framerate=60/1" ! fakesink<br />
</syntaxhighlight><br />
[[File:5-perf.png|thumbnail|center|640px|Element performance]]<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||Contact_Us}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Performance&diff=53758Spherical Video PTZ/Performance2024-03-27T18:54:51Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=Examples/GstD|next=Performance/Jetson AGX Xavier|metakeywords=}}<br />
</noinclude><br />
<br><br />
<br><br />
==Examples==<br />
This wiki provides performance information for the Spherical Video PTZ product on different platforms:<br />
<br />
*;[[Spherical_Video_PTZ/Performance/Jetson_AGX_Xavier|Jetson AGX Xavier]]<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|Examples/GstD|Performance/Jetson AGX Xavier}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples/GstD&diff=53757Spherical Video PTZ/Examples/GstD2024-03-27T18:53:15Z<p>Spalli: </p>
<hr />
<div><br />
<noinclude><br />
{{Spherical Video PTZ/Head|previous=Examples/Gst-launch|next=Performance|metakeywords=}}<br />
</noinclude><br />
<br />
<br />
==Using CPU==<br />
* With this pipeline, you can take an input image, dynamically adjust PTZ properties, and save the output to a file sink.<br />
Ensure that the <code>gstd</code> daemon is running in the background:<br />
<pre><br />
pipeline_create p1 filesrc location=sample.jpg ! jpegdec ! imagefreeze ! videoconvert ! videoscale ! video/x-raw,format=RGBA,width=1920,height=1080 ! rrpanoramaptz name=ptz ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=./sample.ts<br />
</pre><br />
<br />
* This script enables you to process an equirectangular image, utilizing gstd to perform a horizontal panning effect and encapsulate this into a 15-second video. To use this script, follow these steps:<br />
# Copy the provided code into a file named sample.sh.<br />
# Make sure the <code>gstd</code> daemon is running in the background on your system. This is necessary for the script to function correctly as it relies on gstd to manage the GStreamer pipeline.<br />
# Make the script executable with <code>chmod +x sample.sh</code>.<br />
# Execute the script by running <code>./sample.sh <path_to_your_image.jpg></code>.<br />
<br />
<pre><br />
./sample.sh sample.jpg <br />
</pre><br />
<br />
<syntaxhighlight lang="bash" line><br />
#!/bin/bash<br />
<br />
if [ "$#" -ne 1 ]; then<br />
echo "Usage: $0 <path_to_image>"<br />
exit 1<br />
fi<br />
<br />
image_path="$1"<br />
counter=0<br />
loops=0<br />
duration=15 # Video duration in seconds<br />
frame_interval=0.02 # Time between frames in seconds<br />
total_frames=$(echo "scale=0; $duration / $frame_interval" | bc)<br />
<br />
gst-client pipeline_create p1 "filesrc location=${image_path} ! jpegdec ! imagefreeze ! videoconvert ! videoscale ! video/x-raw,format=RGBA,width=1920,height=1080 ! rrpanoramaptz name=ptz ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=./sample.ts"<br />
<br />
gst-client pipeline_play p1<br />
<br />
while [ $loops -lt $total_frames ]; do<br />
gst-client --quiet element_set p1 ptz pan ${counter}<br />
((counter++))<br />
if [ $counter -ge 360 ]; then<br />
counter=0<br />
fi<br />
((loops++))<br />
sleep $frame_interval<br />
done<br />
<br />
gst-client pipeline_stop p1<br />
gst-client pipeline_delete p1<br />
</syntaxhighlight><br />
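To make the script's update schedule explicit, its pan-sweep logic (one pan increment per frame interval, wrapping at 360 degrees) can be sketched in Python; the defaults mirror the shell script's variables:

```python
def pan_schedule(duration=15, frame_interval=0.02):
    """Yield the pan angle sent at each update, wrapping at 360 degrees."""
    # round() avoids float truncation (15 / 0.02 is slightly below 750.0)
    total_frames = round(duration / frame_interval)  # 750 updates by default
    counter = 0
    for _ in range(total_frames):
        yield counter
        counter += 1
        if counter >= 360:  # same wrap-around as the shell script
            counter = 0

angles = list(pan_schedule())
print(len(angles), angles[0], angles[359], angles[360])  # → 750 0 359 0
```
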
<br />
You can play the video using:<br />
<pre><br />
gst-launch-1.0 filesrc location=./sample.ts ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! autovideosink<br />
</pre><br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|Examples/Gst-launch|Performance}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples/Gst-launch&diff=53756Spherical Video PTZ/Examples/Gst-launch2024-03-27T18:51:40Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=Examples|next=Examples/GstD|metakeywords=}}<br />
</noinclude><br />
<br />
==Setting up the environment==<br />
<br />
'''1.''' Download the sample image, if you haven't already.<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $SAMPLES<br />
wget "https://unsplash.com/photos/PYpkPbBCNFw/download?ixid=M3wxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNzExNTE1MTQxfA&force=true" -O example_image.jpg<br />
</syntaxhighlight><br />
<br />
'''2.''' Export the following variables. For these examples we'll use the settings below, but feel free to adjust them to your needs.<br />
<pre><br />
# OUTPUT_PATH: The file path where the processed video will be saved.<br />
export OUTPUT_PATH=example_image.ts<br />
# PORT: The port number used for UDP streaming.<br />
export PORT=1234<br />
# HOST_IP: The IP address of the host machine for receiving the stream.<br />
export HOST_IP=10.42.0.1<br />
</pre><br />
<br />
==Using System Memory==<br />
===Display===<br />
* '''Video Test Pattern Display:''' Generates a test pattern, applies PTZ (Pan, Tilt, Zoom) transformations, and adjusts the output size for display.<br />
<pre><br />
gst-launch-1.0 videotestsrc pattern=0 ! "video/x-raw,width=1920,height=1080" ! rrpanoramaptz zoom=2.1 ! "video/x-raw,width=1920,height=1080" ! queue ! videoconvert ! autovideosink<br />
</pre><br />
<br />
* '''Image Zoom and Display:''' Takes an equirectangular image, applies a zoom effect, and displays the result. Use this to showcase specific features of panoramic images.<br />
<pre><br />
gst-launch-1.0 filesrc location=$SAMPLES/example_image.jpg ! jpegdec ! imagefreeze ! videoconvert ! video/x-raw,format=RGBA ! rrpanoramaptz zoom=1.5 ! videoconvert ! autovideosink<br />
</pre><br />
<br />
===Recording===<br />
* '''Record PTZ-Transformed Video:''' Captures an input image, applies PTZ transformations, and encodes the output into a video file. This pipeline is useful for creating panoramic videos with dynamic perspectives.<br />
<pre><br />
gst-launch-1.0 filesrc location=$SAMPLES/example_image.jpg ! jpegdec ! imagefreeze ! videoconvert ! videoscale ! video/x-raw,format=RGBA,width=1920,height=1080 ! rrpanoramaptz zoom=1.5 ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=$OUTPUT_PATH<br />
</pre><br />
To decode and view the video, use:<br />
<pre><br />
gst-launch-1.0 filesrc location=$OUTPUT_PATH ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! autovideosink<br />
</pre><br />
<br />
===Streaming===<br />
* '''Live Streaming with PTZ Controls:''' Streams a live video feed with PTZ controls, encoding the content in H.264 format for UDP transmission. Useful for real-time broadcasting of panoramic content.<br />
<pre><br />
gst-launch-1.0 videotestsrc pattern=18 ! rrpanoramaptz ! nvvidconv ! nvv4l2h264enc idrinterval=30 insert-aud=true insert-sps-pps=true insert-vui=true ! h264parse ! mpegtsmux ! udpsink port=$PORT host=$HOST_IP<br />
</pre><br />
<br />
Client setup for receiving the stream:<br />
<pre><br />
gst-launch-1.0 udpsrc port=$PORT address=$HOST_IP ! queue ! tsdemux ! h264parse ! queue ! decodebin ! videoconvert ! fpsdisplaysink<br />
</pre><br />
<br />
==Using NVMM (GPU Acceleration)==<br />
<br />
===Recording===<br />
* '''GPU-Accelerated Video Recording:''' Applies PTZ transformations in NVMM memory, encodes the video in H.264 format, and saves it to a file. <br />
<pre><br />
gst-launch-1.0 videotestsrc pattern=0 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080" ! rrpanoramaptz zoom=1.5 ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=$OUTPUT_PATH<br />
</pre><br />
To decode and view the video, use:<br />
<pre><br />
gst-launch-1.0 filesrc location=$OUTPUT_PATH ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! autovideosink<br />
</pre><br />
<br />
===Streaming===<br />
* '''GPU-Accelerated Video Streaming:''' Streams a live video feed with PTZ controls, encoding the content in H.264 format for UDP transmission.<br />
<pre><br />
gst-launch-1.0 videotestsrc pattern=0 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080" ! rrpanoramaptz ! nvvidconv ! nvv4l2h264enc idrinterval=30 insert-aud=true insert-sps-pps=true insert-vui=true ! h264parse ! mpegtsmux ! udpsink port=$PORT host=$HOST_IP<br />
</pre><br />
<br />
Client setup for receiving the stream:<br />
<pre><br />
gst-launch-1.0 udpsrc port=$PORT address=$HOST_IP ! queue ! tsdemux ! h264parse ! queue ! decodebin ! videoconvert ! fpsdisplaysink<br />
</pre><br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|Examples|Examples/GstD}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples&diff=53755Spherical Video PTZ/Examples2024-03-27T18:49:38Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=User Guide/Quick Start Guide|next=Examples/Gst-launch|metakeywords=}}<br />
</noinclude><br />
<br><br />
<br><br />
==Examples==<br />
This wiki serves as a guide on how to evaluate the Spherical Video PTZ applications.<br />
<br />
*;[[Spherical_Video_PTZ/Examples/Gst-launch|Gst-launch examples]]<br />
<br />
*;[[Spherical_Video_PTZ/Examples/GstD|GstD examples]]<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|User Guide/Quick Start Guide|Examples/Gst-launch}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/User_Guide/Quick_Start_Guide&diff=53754Spherical Video PTZ/User Guide/Quick Start Guide2024-03-27T18:26:59Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=User Guide/Building and Installation|next=Examples|metakeywords=}}<br />
</noinclude><br />
<br />
==Libpanorama==<br />
<br />
This wiki introduces a basic use of the Spherical Video PTZ engine for converting equirectangular images to rectilinear format. It includes a simple example and instructions on how to adapt the engine to different needs. The engine makes it easy to turn panoramic images into a conventional rectilinear view, which is useful for many projects.<br />
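To illustrate the geometry involved, the sketch below maps a pixel of the rectilinear output back to equirectangular longitude/latitude using the standard inverse gnomonic projection. This is only a Python illustration of the math under assumed conventions (zoom acting as a focal-length scale, y pointing up), not libpanorama's actual implementation.

```python
import math

def rectilinear_to_equirect(x, y, width, height, pan=0.0, tilt=0.0, zoom=1.0):
    """Map a rectilinear output pixel back to equirectangular (lon, lat) in
    radians, via the inverse gnomonic projection centred on (pan, tilt)."""
    f = zoom * width / 2.0              # zoom acts as a focal-length scale
    nx = (x - width / 2.0) / f          # normalized image-plane coordinates
    ny = (height / 2.0 - y) / f         # y axis flipped so "up" is positive
    rho = math.hypot(nx, ny)
    if rho == 0.0:
        return pan, tilt                # the image centre looks straight ahead
    c = math.atan(rho)                  # angular distance from the view centre
    lat = math.asin(math.cos(c) * math.sin(tilt)
                    + ny * math.sin(c) * math.cos(tilt) / rho)
    lon = pan + math.atan2(nx * math.sin(c),
                           rho * math.cos(tilt) * math.cos(c)
                           - ny * math.sin(tilt) * math.sin(c))
    return lon, lat

# The centre pixel maps exactly onto the current viewpoint.
print(rectilinear_to_equirect(960, 540, 1920, 1080, pan=0.3, tilt=0.1))  # → (0.3, 0.1)
```
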
<br />
=== Minimal Application ===<br />
<br />
After [[Spherical Video PTZ/User Guide/Building and Installation|Building and Installation]], follow these steps:<br />
<br />
'''1.''' Download the sample image, if you haven't already.<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $SAMPLES<br />
wget "https://unsplash.com/photos/PYpkPbBCNFw/download?ixid=M3wxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNzExNTE1MTQxfA&force=true" -O example_image.jpg<br />
</syntaxhighlight><br />
<br />
'''2.''' This example demonstrates the use of the Spherical Video PTZ engine to convert equirectangular images into rectilinear format. The command below processes example_image.jpg, converting it from an equirectangular format to a rectilinear view; you can use any other reference image as long as it is equirectangular. Run the example as:<br />
<syntaxhighlight lang=bash line><br />
cd $LIBPANORAMA_PATH/libpanorama<br />
./builddir/examples/equirectangular_to_rectilinear $SAMPLES/example_image.jpg <br />
</syntaxhighlight><br />
<br />
'''3.''' While the example is running, you can use the interactive Pan-Tilt-Zoom (PTZ) controls to dynamically explore the panoramic image. Press the indicated keys:<br />
* Zoom In/Out: Adjust the zoom level to get a closer view or a wider perspective of the image.<br />
** In: <code>i</code><br />
** Out: <code>o</code><br />
* Pan Left/Right: Rotate the view horizontally to explore the left or right sides of the panoramic image.<br />
** Left: <code>4</code><br />
** Right: <code>6</code><br />
* Tilt Up/Down: Adjust the vertical angle of the camera to look up or down within the panoramic image.<br />
** Up: <code>8</code><br />
** Down: <code>2</code><br />
<br />
'''4.''' Press the <code>Esc</code> key to exit the program.<br />
<br />
=== Spherical Video PTZ Engine ===<br />
<br />
The Video PTZ Engine simplifies the use of PTZ controls, making it easier to integrate into your code. To get started, add the following ''includes'' and ''namespaces'' to your code:<br />
<br />
<syntaxhighlight lang=cpp><br />
#include <iostream><br />
#include <lp/allocators/cudaimage.hpp><br />
#include <lp/engines/equirectangular_to_rectilinear.hpp><br />
#include <lp/image.hpp><br />
#include <lp/io/opencv.hpp><br />
#include <lp/rgba.hpp><br />
<br />
using namespace lp;<br />
using namespace lp::io;<br />
</syntaxhighlight><br />
<br />
<br />
Once this has been done, create the engine. To do so, instantiate the class as demonstrated:<br />
<syntaxhighlight lang=cpp><br />
lp::engines::EquirectangularToRectilinear<RGBA<uint8_t>> engine;<br />
</syntaxhighlight><br />
<br />
<br />
Next, configure the parameters to be manipulated through the Spherical Video PTZ properties, instantiating them as shown in the code below. The initial parameters, represented as <code>{{0.0f, 0.0f}, 2.0f}</code>, control the panning, tilting, and zooming of the image in the format <code>{{pan, tilt}, zoom}</code>. The panning and tilting values are valid in the range <code>[-π, π]</code>, while the zoom is valid in the range <code>[0.1, 10]</code>. Additionally, <code>{dst.GetSize()}</code> defines the output size in <code>{width, height}</code> format, while <code>{io.GetSize()}</code> sets the input size, also in <code>{width, height}</code> format.<br />
<br />
<syntaxhighlight lang=cpp><br />
engines::EquirectangularToRectilinearParams params{{<br />
{0.0f, 0.0f},<br />
2.0f,<br />
},<br />
{<br />
dst.GetSize(),<br />
},<br />
{io.GetSize()}};<br />
</syntaxhighlight><br />
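Since pan and tilt are only valid in <code>[-π, π]</code> and zoom in <code>[0.1, 10]</code>, it can help to clamp user input before filling the parameter struct. The following Python sketch is illustrative only and not part of the libpanorama API:

```python
import math

# Valid ranges quoted in the text: pan/tilt in [-pi, pi], zoom in [0.1, 10].
PAN_TILT_RANGE = (-math.pi, math.pi)
ZOOM_RANGE = (0.1, 10.0)

def clamp(value, lo, hi):
    """Force value into the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def make_ptz_params(pan, tilt, zoom):
    """Return ((pan, tilt), zoom) with every value forced into its range,
    mirroring the {{pan, tilt}, zoom} layout of the C++ parameter struct."""
    return ((clamp(pan, *PAN_TILT_RANGE), clamp(tilt, *PAN_TILT_RANGE)),
            clamp(zoom, *ZOOM_RANGE))

print(make_ptz_params(4.0, -5.0, 0.0))  # out-of-range inputs are clamped
```
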
<br />
<br />
Then, set the initial parameters with the '''SetParameters''' method:<br />
<br />
<syntaxhighlight lang=cpp><br />
engine.SetParameters(params);<br />
</syntaxhighlight><br />
<br />
<br />
Finally, use the '''Process''' method with the input image as the first parameter. The second parameter will contain the result after applying the equirectangular to rectilinear projection transformation. Please note that the Engine supports both '''CudaImages''' and '''Images''' for processing. However, if you choose to use an Image, the Engine will internally allocate a Cuda buffer and copy the Image content into it, potentially affecting the application's performance. <br />
<br />
<syntaxhighlight lang=cpp><br />
lp::Image<RGBA<uint8_t>> img;<br />
lp::allocators::CudaImage<RGBA<uint8_t>> dst;<br />
<br />
engine.Process(img, dst);<br />
</syntaxhighlight><br />
<br />
<br />
Consider the following pseudo-code snippet as an example of how to use the Engine in a loop:<br />
<br />
<syntaxhighlight lang=cpp line><br />
#include <iostream><br />
#include <lp/engines/equirectangular_to_rectilinear.hpp><br />
#include <lp/image.hpp><br />
#include <lp/io/opencv.hpp><br />
#include <lp/rgba.hpp><br />
<br />
using namespace lp;<br />
using namespace lp::io;<br />
<br />
int main(int argc, char **argv) {<br />
ImageSize size{500, 500}; /* size of the output image */<br />
const size_t rawsize = size.PixelCount();<br />
<br />
  OpenCV<RGBA<uint8_t>> io;<br />
  io.Open(image_path); /* path to the input equirectangular image, e.g. argv[1] */<br />
Image<RGBA<uint8_t>> img = io.ReadImage();<br />
Image<RGBA<uint8_t>> dst =<br />
Image(size, std::shared_ptr<RGBA<uint8_t>[]>(new RGBA<uint8_t>[rawsize]));<br />
<br />
engines::EquirectangularToRectilinear<RGBA<uint8_t>> engine;<br />
engines::EquirectangularToRectilinearParams params{{<br />
{0.0f, 0.0f},<br />
2.0f,<br />
},<br />
{<br />
dst.GetSize(),<br />
},<br />
{io.GetSize()}};<br />
<br />
engine.SetParameters(params);<br />
<br />
for (int i = 1; i <= 100; i++) {<br />
/* Process image */<br />
engine.Process(img, dst);<br />
<br />
/* Do anything you want with dst (display, stream, save, ...) */<br />
<br />
/* Update parameters */<br />
params.equirectangular.r -= 0.05; /* zoom out */<br />
    params.equirectangular.viewpoint += Point2D{0.05f, 0.00f}; /* pan right */<br />
    params.equirectangular.viewpoint += Point2D{0.00f, 0.05f}; /* tilt up */<br />
engine.SetParameters(params);<br />
}<br />
}<br />
</syntaxhighlight><br />
<br />
==GstRrPanoramaptz==<br />
<br />
After [[Spherical Video PTZ/User Guide/Building and Installation|Building and Installation]], follow these steps:<br />
<br />
The GstRrPanoramaptz plugin allows for real-time PTZ adjustments on panoramic video feeds, enabling users to explore video scenes in greater detail or from different perspectives.<br />
<br />
===Overview===<br />
====Features====<br />
* '''CUDA-accelerated PTZ transformations:''' Leverages NVIDIA CUDA technology for smooth, high-performance video processing.<br />
* '''Support for RGBA video format.'''<br />
* '''Dynamic parameter adjustments:''' Users can dynamically adjust PTZ parameters such as pan, tilt, and zoom during playback, providing a versatile and interactive video experience.<br />
<br />
====Properties====<br />
The GstRrPanoramaptz plugin introduces three primary properties for real-time video manipulation:<br />
<br />
* Pan (Horizontal Rotation): Adjusts the video feed's horizontal orientation. Pan adjustments allow viewers to rotate the video around its vertical axis, simulating a left or right looking direction.<br />
** Syntax: <code>pan=<value></code>.<br />
** Range: <code>-360 to 360</code> degrees.<br />
** Default: <code>0</code>.<br />
<br />
* Tilt (Vertical Rotation): This property adjusts the vertical viewing angle of the video feed. It simulates a vertical rotation of the camera view.<br />
** Syntax: <code>tilt=<value></code>.<br />
** Range: <code>-360 to 360</code> degrees.<br />
** Default: <code>0</code>.<br />
<br />
* Zoom: This property adjusts the zoom level of the video feed. It simulates moving the camera closer or further away from the scene.<br />
** Syntax: <code>zoom=<value></code>.<br />
** Range: <code>0.1 to 10</code>.<br />
** Behavior: '''Zoom out''' for <code>zoom < 1</code>, '''Zoom in''' for <code>zoom > 1</code>.<br />
** Default: <code>1</code>.<br />
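As a small illustration of these ranges, the hypothetical Python helper below validates property values and builds the <code>rrpanoramaptz</code> element description for a <code>gst-launch-1.0</code> command line. It is a sketch, not part of the plugin:

```python
# Property ranges documented above for rrpanoramaptz.
RANGES = {"pan": (-360.0, 360.0), "tilt": (-360.0, 360.0), "zoom": (0.1, 10.0)}

def ptz_element(**props):
    """Return an element description string, rejecting out-of-range values."""
    parts = ["rrpanoramaptz"]
    for name, value in props.items():
        lo, hi = RANGES[name]            # KeyError for unknown property names
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} is outside [{lo}, {hi}]")
        parts.append(f"{name}={value}")
    return " ".join(parts)

print(ptz_element(pan=0.5, tilt=0.5, zoom=2))  # → rrpanoramaptz pan=0.5 tilt=0.5 zoom=2
```
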
<br />
====Caps and Formats====<br />
* The plugin accepts and outputs video in the <code>video/x-raw</code> format, utilising the RGBA color space. This support ensures compatibility with a wide range of video processing scenarios.<br />
* Enhanced performance on NVIDIA hardware is achieved through support for both system memory and NVMM (NVIDIA Multi-Media) memory inputs. This flexibility allows users to optimise their video processing pipelines based on the available hardware resources.<br />
<br />
====Basic use example====<br />
This pipeline creates a test video, then applies a 0.5-degree rotation to the right, tilts it upwards by 0.5 degrees, and enhances the view with a zoom level of 2.<br />
<br />
<pre><br />
gst-launch-1.0 videotestsrc ! "video/x-raw,width=1920,height=1080" ! rrpanoramaptz pan=0.5 tilt=0.5 zoom=2 ! videoconvert ! autovideosink<br />
</pre><br />
<br />
You should see an output as the one below:<br />
[[File:panoramaptz-example.png|thumbnail|center|840px|Libpanorama example]]<br />
The example uses a standard video, not a panoramic one, causing some distortion, but we'll explore distortion-free examples with equirectangular images soon.<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|User Guide/Building and Installation|Examples}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/User_Guide/Building_and_Installation&diff=53753Spherical Video PTZ/User Guide/Building and Installation2024-03-27T18:24:23Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=User Guide|next=User Guide/Quick Start Guide|metakeywords=}}<br />
</noinclude><br />
<br />
==Libpanorama==<br />
<br />
This wiki shows how to build the source code. It assumes you have already purchased a license and received access to the source code. If not, head to [[Birds Eye View/Getting Started/How to Get the Code|How to Get the Code]] for instructions on how to proceed.<br />
<br />
=== Install the Dependencies ===<br />
<br />
Before anything, ensure you have installed the following dependencies:<br />
<br />
* '''Git''': To clone the repository.<br />
* '''Meson''': To configure the project.<br />
* '''Ninja''': To build the project.<br />
* '''JsonCPP dev files''': For the parameter loading.<br />
* '''OpenCV dev files''': For the rrpanoramaptz GStreamer element.<br />
* '''GstCUDA''': For enabling GPU-accelerated video and graphics processing using CUDA in GStreamer pipelines.<br />
* '''GStreamer dev files and plugins''': ''(optional)'' for image loading.<br />
* '''QT5 dev files''': ''(optional)'' for image displaying.<br />
* '''CppUTest dev files''': ''(optional)'' for unit testing.<br />
* '''Doxygen, Graphviz''': ''(optional)'' for documentation generation.<br />
<br />
In Debian-based systems (like Ubuntu) you can run:<br />
<syntaxhighlight line lang=bash><br />
sudo apt update<br />
sudo apt install -y \<br />
libjsoncpp-dev \<br />
libopencv-dev libopencv-core-dev \<br />
libopencv-video-dev libopencv-highgui-dev libopencv-videoio-dev \<br />
libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \<br />
gstreamer1.0-plugins-bad gstreamer1.0-plugins-good gstreamer1.0-plugins-base \<br />
gstreamer1.0-libav gstreamer1.0-plugins-ugly \<br />
qtbase5-dev qtmultimedia5-dev libqt5multimedia5-plugins \<br />
git wget unzip libcpputest-dev doxygen graphviz \<br />
python3-pip ninja-build<br />
sudo -H pip3 install meson<br />
</syntaxhighlight><br />
<br />
'''For the GstCUDA dependency, see [[GstCUDA]].'''<br />
<br />
=== Set up the environment ===<br />
<syntaxhighlight lang=bash><br />
export SAMPLES=/path_where_the_example_image_is_downloaded/<br />
export LIBPANORAMA_PATH=/path_where_libpanorama_is_installed/<br />
<br />
</syntaxhighlight><br />
<br />
=== Building the Project ===<br />
<br />
'''1.''' Start by cloning the project from the repository you have been given:<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $LIBPANORAMA_PATH<br />
git clone git@gitlab.ridgerun.com:$YOUR_REPO_LIBPANORAMA/libpanorama<br />
cd libpanorama<br />
</syntaxhighlight><br />
<br />
{{ambox|type=info|text=Replace `$YOUR_REPO_LIBPANORAMA` with the actual repository path you were given by RidgeRun}}<br />
<br />
'''2.''' Configure the project by running the following:<br />
<syntaxhighlight lang=bash line><br />
meson builddir<br />
ninja -C builddir<br />
sudo ninja -C builddir install<br />
</syntaxhighlight><br />
<br />
{{ambox|type=content|text='''If anything fails, please provide the output log of the configuration step to [mailto:support@ridgerun.com support@ridgerun.com]'''}}<br />
<br />
<br />
There are some configuration options you can use in case you want to fine-tune your build. They are not required, and we recommend leaving them at their defaults unless you have a specific reason not to.<br />
<br />
<center><br />
{| class="wikitable"<br />
|+ Advanced configuration options<br />
|-<br />
! Option name !! Possible values !! Description !! Default<br />
|-<br />
| examples || enabled/disabled || Whether or not to build the examples. || enabled<br />
|-<br />
| tests || enabled/disabled || Whether or not to build the tests. || enabled<br />
|-<br />
| docs || enabled/disabled || Whether or not to build the API docs. || enabled<br />
|-<br />
| npp || enabled/disabled || Whether or not to use CUDA (NPP) acceleration. || enabled<br />
|-<br />
| opencv || enabled/disabled || Whether or not to build the OpenCV IO classes. || enabled<br />
|-<br />
| gstreamer || enabled/disabled || Whether or not to build the GStreamer IO classes. || enabled<br />
|-<br />
| qt || enabled/disabled || Whether or not to build the Qt IO classes. || enabled<br />
|}<br />
</center><br />
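If you do want to change one of these options, Meson's standard `-Doption=value` syntax applies. The snippet below is an illustrative sketch only: the option names come from the table above, but confirm the exact set your libpanorama revision supports by inspecting `meson configure builddir` first.

```shell
# Reconfigure an existing build directory with docs and tests disabled.
# Option names are taken from the table above; verify them first with:
#   meson configure builddir
meson configure builddir -Ddocs=disabled -Dtests=disabled
ninja -C builddir
```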
<br />
=== Validating the Build ===<br />
<br />
To ensure the build was successful, run the default example with the provided samples.<br />
<br />
'''1.''' Download the sample image, if you haven't already.<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $SAMPLES<br />
wget "https://unsplash.com/photos/PYpkPbBCNFw/download?ixid=M3wxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNzExNTE1MTQxfA&force=true" -O example_image.jpg<br />
<br />
</syntaxhighlight><br />
<br />
'''2.''' Run the example as:<br />
<syntaxhighlight lang=bash line><br />
cd $LIBPANORAMA_PATH/libpanorama<br />
./builddir/examples/equirectangular_to_rectilinear $SAMPLES/example_image.jpg <br />
</syntaxhighlight><br />
<br />
You should see an output like the one below:<br />
[[File:equirectangular-mountain-libpanorama-example.png|thumbnail|center|640px|Libpanorama example]]<br />
<br />
==GstRrPanoramaptz==<br />
<br />
This section introduces the GstRrPanoramaptz plugin, a GStreamer element designed to apply Pan-Tilt-Zoom (PTZ) transformations to panoramic video using CUDA. Built for high-performance video processing, the plugin supports real-time adjustment of panoramic video feeds, enabling dynamic viewpoint changes through pan, tilt, and zoom operations. It is well suited to applications requiring interactive video navigation or automated surveillance. Here you will find setup instructions, usage examples, and guidance on integrating the plugin into your video processing pipeline.<br />
<br />
=== Set up the environment ===<br />
<syntaxhighlight lang=bash><br />
export PANORAMA_PTZ_PATH=/path_where_gstrrpanoramaptz_is_installed/<br />
</syntaxhighlight><br />
<br />
===Building the project===<br />
<br />
After completing the [[Spherical_Video_PTZ/User_Guide/Building_and_Installation|Building and Installation]] section of the Spherical Video PTZ, follow these steps to build the rrpanoramaptz plugin for GStreamer.<br />
<br />
'''1.''' Start by cloning the project using the repository you have been given:<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $PANORAMA_PTZ_PATH<br />
git clone git@gitlab.ridgerun.com:$YOUR_REPO/gst-rr-panoramaptz<br />
cd gst-rr-panoramaptz<br />
</syntaxhighlight><br />
<br />
{{ambox|type=info|text=Replace `$YOUR_REPO` with the actual repository path you were given by RidgeRun}}<br />
<br />
'''2.''' Configure, build, and install the project by running the following:<br />
<br />
<syntaxhighlight lang=bash line><br />
meson builddir<br />
ninja -C builddir<br />
sudo ninja -C builddir install<br />
</syntaxhighlight><br />
<br />
{{ambox|type=content|text='''If anything fails, please provide the output log of the configuration step to [mailto:support@ridgerun.com support@ridgerun.com]'''}}<br />
<br />
=== Validating the Build ===<br />
<syntaxhighlight lang=bash line><br />
gst-inspect-1.0 rrpanoramaptz<br />
</syntaxhighlight><br />
Upon successful build validation with <code>gst-inspect-1.0 rrpanoramaptz</code>, the output will detail the plugin's configuration, including its capabilities and properties. You should see an output like the one below:<br />
<syntaxhighlight lang=bash line><br />
...<br />
Pad Templates:<br />
SINK template: 'sink'<br />
Availability: Always<br />
Capabilities:<br />
video/x-raw<br />
format: RGBA<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
video/x-raw(memory:NVMM)<br />
format: RGBA<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
...<br />
</syntaxhighlight><br />
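As a quick smoke test, you can drop the element into a simple pipeline. The pipeline below is a sketch, not taken from the official documentation: the `pan`, `tilt`, and `zoom` property names and values are assumptions based on the plugin's description, so confirm the actual property names and value ranges in the `gst-inspect-1.0 rrpanoramaptz` output before running it.

```shell
# Hypothetical pipeline: push an RGBA test pattern through rrpanoramaptz.
# The pan/tilt/zoom property names below are assumptions; check them with
# gst-inspect-1.0 rrpanoramaptz.
gst-launch-1.0 videotestsrc is-live=true \
    ! videoconvert ! "video/x-raw,format=RGBA,width=1920,height=960" \
    ! rrpanoramaptz pan=90 tilt=0 zoom=2.0 \
    ! videoconvert ! autovideosink
```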
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|User Guide|User Guide/Quick Start Guide}}<br />
</noinclude></div>

Spherical Video PTZ/User Guide (revision of 2024-03-27 by Spalli)
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=Getting the code|next=User Guide/Building and Installation|metakeywords=}}<br />
</noinclude><br />
<br><br />
<br><br />
<br />
<br />
This section provides a series of instructions that show how to configure and use the Spherical Video PTZ. The following wiki pages contain information about:<br />
<br />
*;[[Spherical_Video_PTZ/User_Guide/Building_and_Installation|Building and Installation]]<br />
<br />
*;[[Spherical_Video_PTZ/User_Guide/Quick_Start_Guide|Quick Start Guide]]<br />
<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|Getting the code|User Guide/Building and Installation}}<br />
</noinclude></div>

Spherical Video PTZ/Getting Started/Projections Used (revision of 2024-03-27 by Spalli)
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=Getting Started/Spherical Video PTZ|next=Getting the code|metakeywords=}}<br />
</noinclude><br />
<br />
==Overview==<br />
This section provides the theoretical explanations of the Equirectangular and Rectilinear projections, which are used in the Spherical Video PTZ.<br />
<br />
==Equirectangular Projection==<br />
Equirectangular images, also referred to as 360° images, capture a panoramic view from the fixed point where the imaging system is positioned. These images encapsulate a complete 360° perspective, allowing all surrounding information to be displayed within a single flat image. To illustrate this concept, consider visualizing the Earth as a sphere and then "unfolding" it along the central meridian (shown by the red lines in the accompanying image). This "unfolding" process transforms the spherical surface into a plane image. Note that the resulting plane has an aspect ratio of 2:1, because after the unfolding procedure the horizontal range covers 360° while the vertical range covers only 180°.<br />
<br />
<gallery widths="300px" heights="200px" mode="packed-hover"><br />
File:Meridians.png|Meridians on earth globe.<br />
File:Map with meridians.png|Meridians on earth map.<br />
</gallery><br />
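The "unfolding" above can be written down concretely. Under one common convention (x to the right, y downward, with the image spanning the full sphere), each pixel (x, y) of a W × H equirectangular image maps linearly to a longitude/latitude pair:

```latex
\lambda = \left(\frac{2x}{W} - 1\right)\cdot 180^{\circ},
\qquad
\varphi = \left(1 - \frac{2y}{H}\right)\cdot 90^{\circ}
```

With W = 2H both axes get the same angular resolution (360°/W = 180°/H), which is exactly why the aspect ratio is 2:1.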
<br />
==Rectilinear Projection==<br />
<br />
The Rectilinear projection, also referred to as the Gnomonic projection, is a method used to project the surface of a sphere (or a 360° image) onto a plane. Typically, the plane onto which the surface points are mapped is tangent to the sphere at a single point. The projection is accomplished by using the center of the sphere as the projection point. Note that the resulting plane does not intersect the center of the sphere. The diagram below provides a visual example of this process:<br />
<br><br />
<br />
<br />
[[File:Rectilinear-projection.png|thumbnail|center|640px|Rectilinear projection: great circle projection example]]<br />
<br />
<br />
The term "rectilinear" in the Rectilinear projection refers to its use of straight lines for the projection. This means that lines that are parallel in the real world remain parallel in the projection. Additionally, it is worth noting that every great circle (which is the largest circle that can be drawn on any given sphere) is transformed into a straight line in the resulting plane during this projection process.<br />
<br><br />
<br />
==Equirectangular to Rectilinear Projection==<br />
Now that both projections' workings are clear, let's delve into the crucial details: why is this transformation necessary? <br />
<br />
<br />
If we wrap an Equirectangular image back around a sphere, we can construct a spherical image where the data is accurately positioned on the surface of the sphere. However, if a user crops the Equirectangular image directly, the resulting image exhibits distortion caused by the curvature of the sphere, especially away from the equator. Projecting the cropped region into a Rectilinear image instead removes this distortion from the desired output.<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|Getting Started/Spherical Video PTZ|Getting the code}}<br />
</noinclude></div>

Spherical Video PTZ/Getting Started/Spherical Video PTZ (revision of 2024-03-27 by Spalli)
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=Getting Started|next=Getting Started/Projections Used|metakeywords=}}<br />
</noinclude><br />
<br />
==What is the Spherical Video PTZ?==<br />
<br />
This application, developed by [https://www.ridgerun.com/ RidgeRun], allows users to easily pan, tilt, and zoom over an image. It is designed to run accelerated on an NVIDIA GPU, ensuring optimal performance. You can pan and tilt the image from 0° to 360°, and zoom in or out from 0.1x to 10x. Use the following picture as a guide:<br />
<br />
[[File:Equirectangular-to-rectilinear-ptz.png|thumbnail|center|640px|Pan-tilt-zoom explanation example]]<br />
<br />
==How does it work?==<br />
It utilizes both the Equirectangular and Rectilinear projections to create its output. If you're unfamiliar with these terms, it's recommended to visit the [[Spherical_Video_PTZ/Getting_Started/Projections_Used|Projections]] section beforehand. The process involves taking an Equirectangular Image (a 360° image) as input and converting it into a Rectilinear Image based on the user's interaction with pan, tilt, and zoom properties. Refer to the following diagram for a more detailed explanation.<br />
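As a hedged illustration (the exact model used by the Spherical Video PTZ is not spelled out here), a rectilinear zoom factor z typically scales the virtual focal length, which narrows the output field of view as:

```latex
\mathrm{FOV}_{\mathrm{out}} = 2\,\arctan\!\left(\frac{\tan(\mathrm{FOV}_0/2)}{z}\right)
```

Under this model, z = 2 halves the tangent of the half-angle rather than the angle itself, and the 0.1x–10x range mentioned above spans very wide through very narrow views.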
<br />
<br />
<gallery widths="300px" heights="280px" mode="packed-hover"><br />
File:Equirectangular-mountain.jpg|thumb|Equirectangular input image (taken from [https://pixabay.com/photos/winter-panorama-mountains-snow-2383930/ here]).<br />
File:Picture-on-globe.png|thumb|Desired output section. <br />
File:Mountain-side-view.png|thumb|Rectilinear output image. <br />
</gallery><br />
<br />
==Features==<br />
Spherical Video PTZ uses 360° videos and generates an output projection depending on where the user is located in the image. The user interacts with the video through pan, tilt, and zoom properties to position themselves as desired, creating an effect where the user feels physically present at the capture location. The following subsections show examples of the application for each of the properties:<br />
<br />
===Pan===<br />
[[File:Panning.gif]]<br />
<br />
<br />
===Tilt===<br />
<br />
[[File:Tilting.gif]]<br />
<br />
===Zoom===<br />
<br />
[[File:Zoom.gif]]<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot|Getting Started|Getting Started/Projections Used}}<br />
</noinclude></div>

Spherical Video PTZ/Getting Started (revision of 2024-03-27 by Spalli)
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=Getting Started/Spherical Video PTZ|metakeywords=}}<br />
</noinclude><br />
<br><br />
<br><br />
<br />
<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||Getting Started/Spherical Video PTZ}}<br />
</noinclude></div>

Spherical Video PTZ (revision of 2024-03-27 by Spalli)
<hr />
<div><seo title="Spherical Video PTZ | PTZ | RidgeRun" titlemode="replace" metakeywords="GStreamer, NVIDIA, RidgeRun, Jetson, TX1, TX2, Jetson AGX Xavier, Xavier, AI, Deep Learning, Jetson, TX1, TX2, Jetson TX1, Jetson TX2, Jetson Xavier, NVIDIA Jetson Xavier, NVIDIA Jetson Orin, Jetson Orin, Orin, NVIDIA Orin, NVIDIA Jetson AGX Orin, Jetson AGX Orin, Deep Learning, Spherical Video PTZ, PTZ, Spherical Video" metadescription="This Wiki guide explains more in detail about the Spherical Video PTZ"></seo><br />
<br />
<br><br />
{{UnderConstruction}} <br />
<br><br />
<br />
<noinclude>{{Spherical Video PTZ/Foot||Getting Started}}</noinclude><br />
<noinclude>{{DISPLAYTITLE:Spherical Video PTZ|noerror}}</noinclude><br />
{{Spherical Video PTZ/Main_contents}}<br />
<noinclude>{{Spherical Video PTZ/Foot||Getting Started}}</noinclude></div>

Qualcomm Robotics RB5/Image Processing Software/GPU Profiling (revision of 2024-03-27 by Spalli)
<hr />
<div>{{Qualcomm Robotics RB5/Head|previous=Image_Processing_Software|next=Image_Processing_Software/Fast_CV|keywords=gpu,profiler,opengl}}<br />
</noinclude><br />
__TOC__<br />
{{DISPLAYTITLE:Qualcomm Robotics RB5/RB6 - GPU Profiling|noerror}}<br />
<br />
In this section, we will see how to work with the Qualcomm Robotics RB5's GPU. The RB5/RB6 has a Qualcomm Adreno 650 GPU as one of its hardware processing units. We will see how to measure its usage percentage and how to run some examples on the GPU. For the latter, we will use [https://www.opengl.org/ OpenGL] with GStreamer and two compiled examples. OpenGL is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU) to achieve hardware-accelerated rendering <ref name="OpenGL">OpenGL Official Page. Retrieved March 28, 2023, from [https://en.wikipedia.org/wiki/OpenGL]</ref>.<br />
<br><br />
<br />
== Profile GPU ==<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=The SnapDragon profiler tool from Qualcomm only works for Android systems. It does not give GPU information from the RB5/RB6.<br />
|style=width:unset;<br />
}}<br />
<br />
Profiling the GPU means we will see how much of the GPU is being used by our applications. This allows for a better analysis of how our applications are working. The method we are using works for RB5/RB6 boards flashed with Ubuntu Linux.<br />
<br />
To measure the GPU usage percentage, we are going to read a node from the system, located at the following path: <code>/sys/class/kgsl/kgsl-3d0/gpu_busy_percentage</code>. To check the value of the GPU, you can use the following command:<br />
<br />
<syntaxhighlight lang=bash><br />
cat /sys/class/kgsl/kgsl-3d0/gpu_busy_percentage<br />
</syntaxhighlight><br />
<br />
If you want to measure the GPU constantly, you can use the <code>watch</code> command, as follows:<br />
<syntaxhighlight lang=bash><br />
watch -n 1 cat /sys/class/kgsl/kgsl-3d0/gpu_busy_percentage<br />
</syntaxhighlight><br />
<br />
The value is updated every second. Now, let's try some examples and measure the GPU!<br />
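The one-shot <code>cat</code> and <code>watch</code> readings above can also be scripted. The sketch below (assuming a POSIX shell and the kgsl node path shown above) samples the node once per second and prints the average; the helper name `avg_gpu_usage` is ours, not part of any Qualcomm tooling.

```shell
#!/bin/sh
# Average the GPU busy percentage over a number of one-second samples.
# The default node path is the RB5/RB6 kgsl node; it differs on other
# systems, so it can be overridden through the NODE variable.
NODE="${NODE:-/sys/class/kgsl/kgsl-3d0/gpu_busy_percentage}"

avg_gpu_usage() {
    samples=$1
    total=0
    i=0
    while [ "$i" -lt "$samples" ]; do
        if [ -r "$NODE" ]; then
            # The node reports a value such as "8 %"; keep only the digits.
            v=$(tr -dc '0-9' < "$NODE")
        else
            v=0
        fi
        total=$((total + ${v:-0}))
        i=$((i + 1))
        sleep 1
    done
    echo $((total / samples))
}

avg_gpu_usage 5
```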
<br><br />
<br />
== GStreamer pipeline ==<br />
<br />
The first example we are using is a GStreamer pipeline. To run it on the GPU, we are using OpenGL plugins. The OpenGL plugins are part of GStreamer's plugins-base, so you only need to install GStreamer. The RB5/RB6 board flashed with an LU image from Thundercomm already comes with GStreamer and the OpenGL plugins preinstalled. The pipeline we are using is the following:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true ! "video/x-raw,width=1920,height=1080,framerate=30/1" ! videoconvert ! glupload ! gleffects_fisheye ! glvideoflip video-direction=1 ! gleffects_sepia ! gldownload ! queue ! videoconvert ! waylandsink sync=false<br />
</syntaxhighlight><br />
<br />
In the pipeline above, we are using <code>videotestsrc</code> as our source element, which generates a test video. Then, we define the caps of the video and use videoconvert to transform the data into a format that the OpenGL environment supports. To enter the OpenGL environment, we need to use the <code>glupload</code> element. Inside this environment, we can use any of the available OpenGL plugins, which you can list with the following command:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-inspect-1.0 | grep opengl<br />
</syntaxhighlight><br />
<br />
In our case, we are testing three elements: gleffects_fisheye, glvideoflip, and gleffects_sepia. The first one applies a fisheye effect to the video, then the glvideoflip element rotates the video 90 degrees clockwise. Finally, the gleffects_sepia element applies a sepia toning effect. We then use the <code>gldownload</code> element to come back to the GStreamer environment, and finally display the output on a monitor with <code>waylandsink</code>. In Figure 1, you can see the expected output.<br />
<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:Gstreamer_opengl_pipeline.jpg|thumb| center | 400px | Figure 1: GStreamer pipeline output with OpenGL Plugins transforming Videotestsrc.]]<br />
|}<br />
<br><br />
<br />
If we measure the GPU while running the pipeline, the usage percentage is 8%. This shows that our pipeline is in fact running on the GPU.<br />
<br><br />
<br />
== Draw a square ==<br />
In this next example, we are creating and displaying a colored window on our monitor. This example uses EGL and the Wayland protocol. The example we are using was created by [https://gist.github.com/Miouyouyou Miouyouyou], and you can get the source files from the following [https://gist.github.com/Miouyouyou/ca15af1c7f2696f66b0e013058f110b4 link]. The first steps are done on your host computer.<br />
<br />
'''1'''. Enter the [https://gist.github.com/Miouyouyou/ca15af1c7f2696f66b0e013058f110b4 link] above, where you will find three files: <code>init_window.c</code>, <code>init_window.h</code>, and <code>log.h</code>.<br />
<br><br />
<br />
'''2'''. Now, you need to download the files. You can click the '''Download ZIP''' button, which will download a zip file with the source code. On your host computer, you can open a terminal and check the downloaded file.<br />
<br />
<syntaxhighlight lang=bash><br />
user@desktop:~/Downloads/OpenGL_draw_square$ ls<br />
ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip<br />
</syntaxhighlight><br />
<br />
The downloaded file should have a name similar to the one above.<br />
<br><br />
<br />
'''3'''. Move the zip file to the board using [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. If you are using ADB, use the following command:<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=Change the <code>/work/directory/path/</code> path to the one you use for developing in your board.<br />
|style=width:unset;<br />
}}<br />
<br />
<syntaxhighlight lang=bash><br />
adb push ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip /work/directory/path/<br />
</syntaxhighlight><br />
<br />
If you are using SSH, you can use the following command:<br />
<syntaxhighlight lang=bash><br />
scp ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip root@192.168.1.1:/work/directory/path/<br />
</syntaxhighlight><br />
<br><br />
<br />
'''4'''. Now, the next steps will be using your Qualcomm Robotics RB5/RB6 board. You can access it with [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. Unzip the file and change the directory name. Use the following commands:<br />
<br />
<syntaxhighlight lang=bash><br />
unzip ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip<br />
mv ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f window_example<br />
cd window_example<br />
</syntaxhighlight><br />
<br><br />
<br />
Inside the directory, you should have the following files:<br />
<syntaxhighlight lang=bash><br />
user@desktop:~/work/OpenGL/window_example$ ls<br />
init_window.c init_window.h log.h<br />
</syntaxhighlight><br />
<br><br />
<br />
'''5'''. Compile the program using the following command:<br />
<syntaxhighlight lang=bash><br />
gcc -o test_window init_window.c -I. -lwayland-client -lwayland-server -lwayland-client -lwayland-egl -lEGL -lGLESv2<br />
</syntaxhighlight> <br />
<br />
In the above command, we are generating an output executable file named <code>test_window</code> and linking against all the libraries needed for compilation.<br />
<br><br />
<br />
'''6'''. Now, we need to define some environment variables needed to display on our monitor. Use the following commands:<br />
<syntaxhighlight lang=bash><br />
export LD_LIBRARY_PATH=/usr/lib:/usr/lib/aarch64-linux-gnu/<br />
export XDG_RUNTIME_DIR=/usr/bin/weston_socket<br />
</syntaxhighlight> <br />
<br><br />
<br />
'''7'''. Finally, we can execute the program! Use the following command:<br />
<syntaxhighlight lang=bash><br />
./test_window<br />
</syntaxhighlight><br />
<br />
You should see a brown window in your monitor, like the one in Figure 2.<br />
<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:EGL_brown_square.jpg|thumb| center | 650px | Figure 2: Brown square created with EGL.]]<br />
|}<br />
<br><br />
<br />
When running the example, the GPU usage percentage is 6%.<br />
<br><br />
<br />
== Draw a rotating triangle ==<br />
<br />
In this next and final example, we will create a multi-color triangle rotating on its own axis. For this example, we are using the code created by [https://github.com/krh KRH] called <code>simple-egl.c</code>. You can access the code at the following [https://github.com/krh/weston/blob/master/clients/simple-egl.c link]. The first steps are done on your host computer.<br />
<br />
'''1'''. Enter the following [https://github.com/krh/weston/blob/master/clients/simple-egl.c link] to see the source code. Copy it and paste it into a new file on your computer. Name this file with the same name: <code>simple-egl.c</code>.<br />
<br><br />
<br />
'''2'''. Move the file to the board using [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. If you are using ADB, use the following command:<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=Change the <code>/work/directory/path/</code> path to the one you use for developing in your board.<br />
|style=width:unset;<br />
}}<br />
<br />
<syntaxhighlight lang=bash><br />
adb push simple-egl.c /work/directory/path/<br />
</syntaxhighlight><br />
<br />
If you are using SSH, you can use the following command:<br />
<syntaxhighlight lang=bash><br />
scp simple-egl.c root@192.168.1.1:/work/directory/path/<br />
</syntaxhighlight><br />
<br><br />
<br />
'''3'''. The remaining steps are done on the RB5/RB6 board. Compile the file using the following command:<br />
<syntaxhighlight lang=bash><br />
gcc -o test_triangle simple-egl.c -I. -lwayland-client -lwayland-server -lwayland-client -lwayland-egl -lEGL -lGLESv2 -lm -lwayland-cursor<br />
</syntaxhighlight><br />
<br />
In the above command, we are generating an output executable file named <code>test_triangle</code> and linking against all the libraries needed for compilation.<br />
<br />
'''4'''. Finally, we can execute the program! Use the following command:<br />
<syntaxhighlight lang=bash><br />
./test_triangle<br />
</syntaxhighlight><br />
<br />
You should see a multi-color triangle rotating in your monitor, like the one in Figure 3.<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:EGL_color_triangle.png|thumb| center | 650px | Figure 3: Multi-color rotating triangle created with EGL.]]<br />
|}<br />
<br><br />
<br />
When running the example, the GPU usage percentage is 5%.<br />
<br><br />
<br />
== Results ==<br />
<br />
We have seen how to measure the GPU utilization percentage and run three examples on it. Table 1 summarizes each example and the percentage of GPU it uses.<br />
<br><br />
<br><br />
{| class="wikitable" style="margin: auto;"<br />
|+ Table 1: GPU utilization percentage for OpenGL examples.<br />
|-<br />
! Test case<br />
! GPU (%)<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#GStreamer_pipeline|GStreamer pipeline]]<br />
| 8<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#Draw_a_square|Color window]]<br />
| 6<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#Draw_a_rotating_triangle|Multi-color rotating triangle]]<br />
| 5<br />
|}<br />
<br />
==References==<br />
<references/><br />
<br />
<noinclude><br />
{{Qualcomm Robotics RB5/Foot|Image_Processing_Software|Image_Processing_Software/Fast_CV}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling&diff=53746Qualcomm Robotics RB5/Image Processing Software/GPU Profiling2024-03-27T18:06:46Z<p>Spalli: /* Results */</p>
<hr />
<div>{{Qualcomm Robotics RB5/Head|previous=Image_Processing_Software|next=Image_Processing_Software/Fast_CV|keywords=gpu,profiler,opengl}}<br />
</noinclude><br />
__TOC__<br />
{{DISPLAYTITLE:Qualcomm Robotics RB5/RB6 - GPU Profiling|noerror}}<br />
<br />
In this section we will see how to work with the Qualcomm Robotics RB5's GPU. The RB5/RB6 has a Qualcomm Adreno 650 GPU as one of its hardware processing units. We will see how to measure it usage percentage and how to run some examples in the GPU. For the latter, we will use [https://www.opengl.org/ OpenGL] with GStreamer and two compiled examples. OpenGL is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering <ref name="OpenGL">OpenGL Official Page. Retrieved March 28, 2023, from [https://en.wikipedia.org/wiki/OpenGL]</ref>.<br />
<br><br />
<br />
== Profile GPU ==<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=The SnapDragon profiler tool from Qualcomm only works for Android systems. It does not give GPU information from the RB5/RB6.<br />
|style=width:unset;<br />
}}<br />
<br />
Profiling the GPU means we will see how much of the GPU is being used by our applications. This allows for a better analysis of how our applications is working. The method we are using works for RB5/RB6 boards flashed with Ubuntu Linux.<br />
<br />
To measure the GPU usage percentage, we are going to check a node from the system. The node is in the following direction: <code>/sys/class/kgsl/kgsl-3d0/gpu_busy_percentage</code>. To check the value of the GPU, you can use the following command:<br />
<br />
<syntaxhighlight lang=bash><br />
cat /sys/class/kgsl/kgsl-3d0/gpu_busy_percentage<br />
</syntaxhighlight><br />
<br />
If you want to measure constantly the GPU, you can use the <code>watch</code> command, like the following:<br />
<syntaxhighlight lang=bash><br />
watch -n 1 cat /sys/class/kgsl/kgsl-3d0/gpu_busy_percentage<br />
</syntaxhighlight><br />
<br />
Where the value is getting updated every second. Now, lets try some examples and measure the GPU!<br />
<br><br />
<br />
== GStreamer pipeline ==<br />
<br />
The first example we are using is a GStreamer pipeline. To run it in the GPU, we are using OpenGL plugins. The OpenGL plugins are from the Plugins base of GStreamer, so you only need to install GStreamer. The RB5/RB6 board flashed with a LU image from Thundercomm, already comes with GStreamer and the OpenGL plugins preinstalled. The pipeline we are using is the next one:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true ! "video/x-raw,width=1920,height=1080,framerate=30/1" ! videoconvert ! glupload ! gleffects_fisheye ! glvideoflip video-direction=1 ! gleffects_sepia ! gldownload ! queue ! videoconvert ! waylandsink sync=false<br />
</syntaxhighlight><br />
<br />
In the pipeline above, we are using as our source element <code>videotestsrc</code>, that will generate a video. Then, we define the caps of the video and use videoconvert to transform the format of the data to one that the OpenGL environment supports. To enter the OpenGL environment, we need to use the <code>glupload</code> element. Inside this environment, we can later use all of the available OpenGL Plugins, that you can check with the following command:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-inspect-1.0 | grep opengl<br />
</syntaxhighlight><br />
<br />
In our case, we are testing three elements: gleffects_fisheye, glvideoflip, and gleffects_sepia. The first one, applies a fisheye effect to the video, then the glvideoflip element is rotates the video 90 degrees clockwise. Finally, the gleffects_sepia element applies a Sepia Toning effect. We then use the <code>gldownload</code> element to come back to the GStreamer environment, and finally display the output to a monitor with <code>waylandsink</code>. In Figure 1, you can see the expected output.<br />
<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:Gstreamer_opengl_pipeline.jpg|thumb| center | 400px | Figure 1: GStreamer pipeline output with OpenGL Plugins transforming Videotestsrc.]]<br />
|}<br />
<br><br />
<br />
If we measure the GPU while running the pipeline the usage percentage is of 8%. This shows that out pipeline is in fact running on th GPU.<br />
<br><br />
<br />
== Draw a square ==<br />
In this next example, we are creating and displaying a colored window in our monitor. This example uses EGL and wayland protocol. The example we are using was created by [https://gist.github.com/Miouyouyou Miouyouyou], and you can get the source files from the following [https://gist.github.com/Miouyouyou/ca15af1c7f2696f66b0e013058f110b4 link]. The firsts steps are done in your host computer.<br />
<br />
'''1'''. Enter the [https://gist.github.com/Miouyouyou/ca15af1c7f2696f66b0e013058f110b4 link] above, where you will find three files: <code>init_window.c</code>, <code>init_window.h</code>, and <code>log.h</code>.<br />
<br><br />
<br />
'''2'''. Now, you need to download the files. You can click the '''Download ZIP''' button, it will download a zip file with the the source code. In your host computer, you can open a terminal and check the downloaded file.<br />
<br />
<syntaxhighlight lang=bash><br />
user@desktop:~/Downloads/OpenGL_draw_square$ ls<br />
ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip<br />
</syntaxhighlight><br />
<br />
The downloaded file should have name similar to the above.<br />
<br><br />
<br />
'''3'''. Move the zip file to the board using [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. If you are using ADB, use the following command:<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=Change the <code>/work/directory/path/</code> path to the one you use for developing in your board.<br />
|style=width:unset;<br />
}}<br />
<br />
<syntaxhighlight lang=bash><br />
adb push ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip /work/directory/path/<br />
</syntaxhighlight><br />
<br />
If you are using SSH, you can use the following command:<br />
<syntaxhighlight lang=bash><br />
scp ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip root@192.168.1.1:/work/directory/path/<br />
</syntaxhighlight><br />
<br><br />
<br />
'''4'''. Now, the next steps will be using your Qualcomm Robotics RB5/RB6 board. You can access it with [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. Unzip the file and change the directory name. Use the following commands:<br />
<br />
<syntaxhighlight lang=bash><br />
unzip ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip<br />
mv ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f window_example<br />
cd window_example<br />
</syntaxhighlight><br />
<br><br />
<br />
Inside the directory, you should have the following files:<br />
<syntaxhighlight lang=bash><br />
user@desktop:~/work/OpenGL/window_example$ ls<br />
init_window.c init_window.h log.h<br />
</syntaxhighlight><br />
<br><br />
<br />
'''5'''. Compile the program using the following command:<br />
<syntaxhighlight lang=bash><br />
gcc -o test_window init_window.c -I. -lwayland-client -lwayland-server -lwayland-egl -lEGL -lGLESv2<br />
</syntaxhighlight> <br />
<br />
In the above command, we generate an output executable named <code>test_window</code> and link all the libraries needed for compilation.<br />
<br><br />
<br />
'''6'''. Now, we need to define some environment variables needed to display in our monitor. Use the following commands:<br />
<syntaxhighlight lang=bash><br />
export LD_LIBRARY_PATH=/usr/lib:/usr/lib/aarch64-linux-gnu/<br />
export XDG_RUNTIME_DIR=/usr/bin/weston_socket<br />
</syntaxhighlight> <br />
<br><br />
<br />
'''7'''. Finally, we can execute the program! Use the following command:<br />
<syntaxhighlight lang=bash><br />
./test_window<br />
</syntaxhighlight><br />
<br />
You should see a brown window in your monitor, like the one in Figure 2.<br />
<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:EGL_brown_square.jpg|thumb| center | 650px | Figure 2: Brown square created with EGL.]]<br />
|}<br />
<br><br />
<br />
When running this example, the GPU usage percentage is about 6%.<br />
<br><br />
<br />
== Draw a rotating triangle ==<br />
<br />
In this next and final example, we will create a multi-color triangle rotating on its own axis. For this example, we use the <code>simple-egl.c</code> code created by [https://github.com/krh KRH]. You can access the code at the following [https://github.com/krh/weston/blob/master/clients/simple-egl.c link]. The first steps are done on your host computer.<br />
<br />
'''1'''. Open the [https://github.com/krh/weston/blob/master/clients/simple-egl.c link] above to see the source code. Copy it and paste it into a new file on your computer, named <code>simple-egl.c</code>.<br />
<br><br />
<br />
'''2'''. Move the file to the board using [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. If you are using ADB, use the following command:<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=Change the <code>/work/directory/path/</code> path to the one you use for developing in your board.<br />
|style=width:unset;<br />
}}<br />
<br />
<syntaxhighlight lang=bash><br />
adb push simple-egl.c /work/directory/path/<br />
</syntaxhighlight><br />
<br />
If you are using SSH, you can use the following command:<br />
<syntaxhighlight lang=bash><br />
scp simple-egl.c root@192.168.1.1:/work/directory/path/<br />
</syntaxhighlight><br />
<br><br />
<br />
'''3'''. Now, we need to continue working in the RB5/RB6 board. We will now compile the file using the following command:<br />
<syntaxhighlight lang=bash><br />
gcc -o test_triangle simple-egl.c -I. -lwayland-client -lwayland-server -lwayland-egl -lEGL -lGLESv2 -lm -lwayland-cursor<br />
</syntaxhighlight><br />
<br />
In the above command, we generate an output executable named <code>test_triangle</code> and link all the libraries needed for compilation.<br />
<br />
'''4'''. Finally, we can execute the program! Use the following command:<br />
<syntaxhighlight lang=bash><br />
./test_triangle<br />
</syntaxhighlight><br />
<br />
You should see a multi-color triangle rotating in your monitor, like the one in Figure 3.<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:EGL_color_triangle.png|thumb| center | 650px | Figure 3: Multi-color rotating triangle created with EGL.]]<br />
|}<br />
<br><br />
<br />
When running this example, the GPU usage percentage is about 5%.<br />
<br><br />
<br />
== Results ==<br />
<br />
We have seen how to measure the GPU utilization percentage and ran three examples on the GPU. Table 1 summarizes each example and its GPU usage.<br />
<br><br />
<br />
{| class="wikitable" style="margin: auto;"<br />
|+ Table 1: GPU utilization percentage for OpenGL examples.<br />
|-<br />
! Test case<br />
! GPU (%)<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#GStreamer_pipeline|GStreamer pipeline]]<br />
| 8<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#Draw_a_square|Color window]]<br />
| 6<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#Draw_a_rotating_triangle|Multi-color rotating triangle]]<br />
| 5<br />
|}<br />
<br />
==References==<br />
<references/><br />
<br />
<noinclude><br />
{{Qualcomm Robotics RB5/Foot|Image_Processing_Software|Image_Processing_Software/Fast_CV}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling&diff=53745Qualcomm Robotics RB5/Image Processing Software/GPU Profiling2024-03-27T18:01:53Z<p>Spalli: /* Draw a rotating triangle */</p>
<hr />
<div>{{Qualcomm Robotics RB5/Head|previous=Image_Processing_Software|next=Image_Processing_Software/Fast_CV|keywords=gpu,profiler,opengl}}<br />
</noinclude><br />
__TOC__<br />
{{DISPLAYTITLE:Qualcomm Robotics RB5/RB6 - GPU Profiling|noerror}}<br />
<br />
In this section, we will see how to work with the Qualcomm Robotics RB5's GPU. The RB5/RB6 has a Qualcomm Adreno 650 GPU as one of its hardware processing units. We will see how to measure its usage percentage and how to run some examples on the GPU. For the latter, we will use [https://www.opengl.org/ OpenGL] with GStreamer and two compiled examples. OpenGL is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU) to achieve hardware-accelerated rendering <ref name="OpenGL">OpenGL. Wikipedia. Retrieved March 28, 2023, from [https://en.wikipedia.org/wiki/OpenGL]</ref>.<br />
<br><br />
<br />
== Profile GPU ==<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=The SnapDragon profiler tool from Qualcomm only works for Android systems. It does not give GPU information from the RB5/RB6.<br />
|style=width:unset;<br />
}}<br />
<br />
Profiling the GPU means measuring how much of the GPU our applications are using. This allows for a better analysis of how our application is working. The method we describe works for RB5/RB6 boards flashed with Ubuntu Linux.<br />
<br />
To measure the GPU usage percentage, we read a system node located at the following path: <code>/sys/class/kgsl/kgsl-3d0/gpu_busy_percentage</code>. To read its current value, use the following command:<br />
<br />
<syntaxhighlight lang=bash><br />
cat /sys/class/kgsl/kgsl-3d0/gpu_busy_percentage<br />
</syntaxhighlight><br />
<br />
If you want to monitor the GPU continuously, you can use the <code>watch</code> command, like the following:<br />
<syntaxhighlight lang=bash><br />
watch -n 1 cat /sys/class/kgsl/kgsl-3d0/gpu_busy_percentage<br />
</syntaxhighlight><br />
<br />
With this command, the value is updated every second. Now, let's try some examples and measure the GPU!<br />
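The polling above can also be scripted. The following Python sketch reads the same sysfs node a few times and reports the average; the helper name is ours, and the node path is parameterized so the function can be pointed at a test file on machines without an Adreno GPU:<br />

```python
import time

# Path of the sysfs node described above; override `node` for testing
# on systems that do not expose the kgsl interface.
GPU_NODE = "/sys/class/kgsl/kgsl-3d0/gpu_busy_percentage"

def average_gpu_busy(samples=5, interval=1.0, node=GPU_NODE):
    """Read the GPU busy percentage `samples` times and return the average."""
    readings = []
    for _ in range(samples):
        with open(node) as f:
            # Keep only the digits in case the node appends units like '%'.
            digits = "".join(ch for ch in f.read() if ch.isdigit())
            readings.append(int(digits))
        time.sleep(interval)
    return sum(readings) / len(readings)
```

Running <code>average_gpu_busy()</code> on the board while a pipeline is active should report a value close to what <code>watch</code> shows.<br />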
<br><br />
<br />
== GStreamer pipeline ==<br />
<br />
The first example is a GStreamer pipeline. To run it on the GPU, we use the OpenGL plugins, which are part of GStreamer's base plugins, so you only need GStreamer installed. An RB5/RB6 board flashed with an LU image from Thundercomm already comes with GStreamer and the OpenGL plugins preinstalled. The pipeline we are using is the following:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-launch-1.0 videotestsrc is-live=true ! "video/x-raw,width=1920,height=1080,framerate=30/1" ! videoconvert ! glupload ! gleffects_fisheye ! glvideoflip video-direction=1 ! gleffects_sepia ! gldownload ! queue ! videoconvert ! waylandsink sync=false<br />
</syntaxhighlight><br />
<br />
In the pipeline above, we use <code>videotestsrc</code> as the source element, which generates a test video. Then, we define the caps of the video and use <code>videoconvert</code> to convert the data to a format that the OpenGL environment supports. To enter the OpenGL environment, we use the <code>glupload</code> element. Inside this environment, we can use any of the available OpenGL plugins, which you can list with the following command:<br />
<br />
<syntaxhighlight lang=bash><br />
gst-inspect-1.0 | grep opengl<br />
</syntaxhighlight><br />
<br />
In our case, we are testing three elements: <code>gleffects_fisheye</code>, <code>glvideoflip</code>, and <code>gleffects_sepia</code>. The first applies a fisheye effect to the video, <code>glvideoflip</code> rotates the video 90 degrees clockwise, and <code>gleffects_sepia</code> applies a sepia toning effect. We then use the <code>gldownload</code> element to come back from the OpenGL environment, and finally display the output on a monitor with <code>waylandsink</code>. In Figure 1, you can see the expected output.<br />
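When scripting tests, it can help to assemble the launch description programmatically instead of hard-coding it. Below is a minimal sketch; the function name is ours, while the element names and properties are exactly those of the pipeline above:<br />

```python
def build_gl_pipeline(width=1920, height=1080, fps=30, direction=1):
    """Assemble the gst-launch description shown above as a string."""
    caps = f"video/x-raw,width={width},height={height},framerate={fps}/1"
    elements = [
        "videotestsrc is-live=true",
        caps,
        "videoconvert",
        "glupload",                                   # enter the OpenGL context
        "gleffects_fisheye",                          # fisheye distortion
        f"glvideoflip video-direction={direction}",   # rotate 90 degrees
        "gleffects_sepia",                            # sepia toning
        "gldownload",                                 # back to system memory
        "queue",
        "videoconvert",
        "waylandsink sync=false",
    ]
    return " ! ".join(elements)
```

The returned string can be passed to <code>gst-launch-1.0</code> or to <code>Gst.parse_launch()</code> from an application.<br />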
<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:Gstreamer_opengl_pipeline.jpg|thumb| center | 400px | Figure 1: GStreamer pipeline output with OpenGL Plugins transforming Videotestsrc.]]<br />
|}<br />
<br><br />
<br />
If we measure the GPU while running the pipeline, the usage percentage is about 8%. This shows that our pipeline is in fact running on the GPU.<br />
<br><br />
<br />
== Draw a square ==<br />
In this next example, we create and display a colored window on the monitor. This example uses EGL and the Wayland protocol. The example was created by [https://gist.github.com/Miouyouyou Miouyouyou], and you can get the source files from the following [https://gist.github.com/Miouyouyou/ca15af1c7f2696f66b0e013058f110b4 link]. The first steps are done on your host computer.<br />
<br />
'''1'''. Enter the [https://gist.github.com/Miouyouyou/ca15af1c7f2696f66b0e013058f110b4 link] above, where you will find three files: <code>init_window.c</code>, <code>init_window.h</code>, and <code>log.h</code>.<br />
<br><br />
<br />
'''2'''. Now, download the files. Clicking the '''Download ZIP''' button will download a zip file with the source code. On your host computer, open a terminal and check the downloaded file.<br />
<br />
<syntaxhighlight lang=bash><br />
user@desktop:~/Downloads/OpenGL_draw_square$ ls<br />
ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip<br />
</syntaxhighlight><br />
<br />
The downloaded file should have a name similar to the one above.<br />
<br><br />
<br />
'''3'''. Move the zip file to the board using [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. If you are using ADB, use the following command:<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=Change the <code>/work/directory/path/</code> path to the one you use for developing in your board.<br />
|style=width:unset;<br />
}}<br />
<br />
<syntaxhighlight lang=bash><br />
adb push ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip /work/directory/path/<br />
</syntaxhighlight><br />
<br />
If you are using SSH, you can use the following command:<br />
<syntaxhighlight lang=bash><br />
scp ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip root@192.168.1.1:/work/directory/path/<br />
</syntaxhighlight><br />
<br><br />
<br />
'''4'''. Now, the next steps will be using your Qualcomm Robotics RB5/RB6 board. You can access it with [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. Unzip the file and change the directory name. Use the following commands:<br />
<br />
<syntaxhighlight lang=bash><br />
unzip ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f.zip<br />
mv ca15af1c7f2696f66b0e013058f110b4-af380e5baabecc70d890d41b81ce87b5951dc48f window_example<br />
cd window_example<br />
</syntaxhighlight><br />
<br><br />
<br />
Inside the directory, you should have the following files:<br />
<syntaxhighlight lang=bash><br />
user@desktop:~/work/OpenGL/window_example$ ls<br />
init_window.c init_window.h log.h<br />
</syntaxhighlight><br />
<br><br />
<br />
'''5'''. Compile the program using the following command:<br />
<syntaxhighlight lang=bash><br />
gcc -o test_window init_window.c -I. -lwayland-client -lwayland-server -lwayland-egl -lEGL -lGLESv2<br />
</syntaxhighlight> <br />
<br />
In the above command, we generate an output executable named <code>test_window</code> and link all the libraries needed for compilation.<br />
<br><br />
<br />
'''6'''. Now, we need to define some environment variables needed to display in our monitor. Use the following commands:<br />
<syntaxhighlight lang=bash><br />
export LD_LIBRARY_PATH=/usr/lib:/usr/lib/aarch64-linux-gnu/<br />
export XDG_RUNTIME_DIR=/usr/bin/weston_socket<br />
</syntaxhighlight> <br />
<br><br />
<br />
'''7'''. Finally, we can execute the program! Use the following command:<br />
<syntaxhighlight lang=bash><br />
./test_window<br />
</syntaxhighlight><br />
<br />
You should see a brown window in your monitor, like the one in Figure 2.<br />
<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:EGL_brown_square.jpg|thumb| center | 650px | Figure 2: Brown square created with EGL.]]<br />
|}<br />
<br><br />
<br />
When running this example, the GPU usage percentage is about 6%.<br />
<br><br />
<br />
== Draw a rotating triangle ==<br />
<br />
In this next and final example, we will create a multi-color triangle rotating on its own axis. For this example, we use the <code>simple-egl.c</code> code created by [https://github.com/krh KRH]. You can access the code at the following [https://github.com/krh/weston/blob/master/clients/simple-egl.c link]. The first steps are done on your host computer.<br />
<br />
'''1'''. Open the [https://github.com/krh/weston/blob/master/clients/simple-egl.c link] above to see the source code. Copy it and paste it into a new file on your computer, named <code>simple-egl.c</code>.<br />
<br><br />
<br />
'''2'''. Move the file to the board using [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|ADB]] or [[Qualcomm_Robotics_RB5/Development_in_the_Board/Getting_into_the_Board/Using_adb|SSH]]. If you are using ADB, use the following command:<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue=Change the <code>/work/directory/path/</code> path to the one you use for developing in your board.<br />
|style=width:unset;<br />
}}<br />
<br />
<syntaxhighlight lang=bash><br />
adb push simple-egl.c /work/directory/path/<br />
</syntaxhighlight><br />
<br />
If you are using SSH, you can use the following command:<br />
<syntaxhighlight lang=bash><br />
scp simple-egl.c root@192.168.1.1:/work/directory/path/<br />
</syntaxhighlight><br />
<br><br />
<br />
'''3'''. Now, we need to continue working in the RB5/RB6 board. We will now compile the file using the following command:<br />
<syntaxhighlight lang=bash><br />
gcc -o test_triangle simple-egl.c -I. -lwayland-client -lwayland-server -lwayland-egl -lEGL -lGLESv2 -lm -lwayland-cursor<br />
</syntaxhighlight><br />
<br />
In the above command, we generate an output executable named <code>test_triangle</code> and link all the libraries needed for compilation.<br />
<br />
'''4'''. Finally, we can execute the program! Use the following command:<br />
<syntaxhighlight lang=bash><br />
./test_triangle<br />
</syntaxhighlight><br />
<br />
You should see a multi-color triangle rotating in your monitor, like the one in Figure 3.<br />
{|class="wikitable" style="margin: auto;"<br />
|-<br />
| [[File:EGL_color_triangle.png|thumb| center | 650px | Figure 3: Multi-color rotating triangle created with EGL.]]<br />
|}<br />
<br><br />
<br />
When running this example, the GPU usage percentage is about 5%.<br />
<br><br />
<br />
== Results ==<br />
<br />
We have seen how to measure the GPU utilization percentage and ran three examples on the GPU. Table 1 summarizes each example and its GPU usage.<br />
<br />
{| class="wikitable" style="margin: auto;"<br />
|+ Table 1: GPU utilization percentage for OpenGL examples.<br />
|-<br />
! Test case<br />
! GPU (%)<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#GStreamer_pipeline|GStreamer pipeline]]<br />
| 8<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#Draw_a_square|Color window]]<br />
| 6<br />
|-<br />
| [[Qualcomm_Robotics_RB5/Image_Processing_Software/GPU_Profiling#Draw_a_rotating_triangle|Multi-color rotating triangle]]<br />
| 5<br />
|}<br />
<br />
==References==<br />
<references/><br />
<br />
<noinclude><br />
{{Qualcomm Robotics RB5/Foot|Image_Processing_Software|Image_Processing_Software/Fast_CV}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Qualcomm_Robotics_RB5/Demos/Drone_Demo&diff=53744Qualcomm Robotics RB5/Demos/Drone Demo2024-03-27T17:57:12Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Qualcomm Robotics RB5/Head|previous=Demos|next=Demos/Smart_Camera|metakeywords=ttflite}}<br />
</noinclude><br />
<br />
== Introduction ==<br />
<br />
This demo consists of a simulated drone system on a Qualcomm Robotics RB5 that performs live capture, image processing, sensor readings, metadata, encoding and network transmission. Some of the key components it explores include:<br />
<br />
* PTZR for dynamic control of the imager<br />
* Accelerometer readings embedded as metadata in the stream<br />
* H265 encoding<br />
* RTSP streaming<br />
* A media server for media management<br />
<br />
<br />
The following image summarizes the overall functionality.<br />
<br />
[[File:Drone-demo-diagram.png|frame|center]]<br />
<br />
{{Review|Please, add a section for: Getting the Full Demo Code, and add a link to the Contact Us section bringing some details like: platform, OS, brief description of the use case. Please, also add that the demo is only available for Qualcomm RB5 and Linux Ubuntu 20.04|lleon}}<br />
<br />
== Dependencies ==<br />
In order to accomplish the desired behaviour, we use some of our products with out-of-the-box functionalities that significantly reduce development time. For this demo, we will be using:<br />
<br />
* [[GStreamer_Pan_Tilt_Zoom_and_Rotate_Element | GstPTZR]]: for digital control of the imager<br />
* [[GstSEIMetadata | GstSEIMetadata]]: for sensor reading and metadata<br />
* [[GStreamer_Daemon | Gstd]]: for media server management<br />
* [[GstRtspSink | RTSPSink]]: for media transmission.<br />
<br />
<br />
{{Review|Add a brief text indicating that the demo code includes evaluation version of these dependencies. If a professional version is required, ask to contact us|lleon}}<br />
<br />
{{Review|Add a installation part. You can find it in the project's readme|lleon}}<br />
<br />
== Run the demo ==<br />
<br />
Once you get the code, you should see the following files:<br />
<pre><br />
├── requirements.txt<br />
├── drone_demo.py <br />
├── rrdronedemo<br />
│ ├── __init__.py<br />
│ ├── argumenthandler.py<br />
│ ├── imu.py<br />
│ ├── pipelineentity.py<br />
│ └── tcp_server.py<br />
</pre><br />
<br />
The demo can be easily run with the following command:<br />
<pre><br />
python3 drone_demo.py<br />
</pre><br />
<br />
{{Review|Can it be executed out of the box?|lleon}}<br />
<br />
=== Application parameters ===<br />
<br />
For flexible configuration, the application exposes the following arguments, which can be used to tune it:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Option<br />
! Type<br />
! Description<br />
|-<br />
| --rotate_level <br />
| float<br />
| Rotation (in degrees) about the depth axis with origin on the center of the capture region in the range [-180.0, 180.0]<br />
|-<br />
| --tilt_level <br />
| float<br />
| Translation about the y axis starting from the center of the input image in the range [-image height, image height]<br />
|-<br />
| --pan_level <br />
| float<br />
| Translation about the x axis starting from the center of the input image in the range [-image width, image width]<br />
|-<br />
| --zoom_level <br />
| float<br />
| Zoom level<br />
|-<br />
| --resolution <br />
| string<br />
| Camera input resolution {720p,1080p,4k}<br />
|-<br />
| --control_rate <br />
| int<br />
| Bitrate control method: (0) Disable (1) Constant (2) CBR-VFR (3) VBR-CFR (4) VBR-VFR (5) CQ<br />
|-<br />
| --bitrate <br />
| unsigned int<br />
| Target bitrate in bits per second (0 is no specific bitrate)<br />
|-<br />
| --idr_interval <br />
| unsigned int<br />
| IDR frame interval (0 means no specific IDR)<br />
|-<br />
| --rtsp-port <br />
| unsigned int<br />
| RTSP port<br />
|-<br />
| --mapping<br />
| string<br />
| RTSP mappings<br />
|-<br />
| --auth <br />
| string<br />
| Authentication and authorization. Format: user1:password1<br />
|-<br />
| --tcp-port<br />
| unsigned int<br />
| TCP server port for receiving commands<br />
|}<br />
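The table above maps naturally onto Python's <code>argparse</code>. The following is a hypothetical sketch covering a subset of the options; the defaults shown are illustrative assumptions, and the demo's real <code>argumenthandler.py</code> may differ:<br />

```python
import argparse

def build_parser():
    """Declare a subset of the options from the table above (sketch only)."""
    parser = argparse.ArgumentParser(description="Drone demo (sketch)")
    parser.add_argument("--rotate_level", type=float, default=0.0,
                        help="Rotation in degrees, range [-180.0, 180.0]")
    parser.add_argument("--zoom_level", type=float, default=1.0,
                        help="Zoom level")
    parser.add_argument("--resolution", choices=["720p", "1080p", "4k"],
                        default="1080p", help="Camera input resolution")
    parser.add_argument("--rtsp-port", type=int, default=5000,
                        help="RTSP port")
    parser.add_argument("--tcp-port", type=int, default=9999,
                        help="TCP server port for receiving commands")
    return parser
```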
<br />
{{Review|An example would be nice|lleon}}<br />
<br />
=== Control through TCP ===<br />
This demo allows the user to control the media server through a TCP connection, which can be established using the following command:<br />
<pre><br />
nc <IP_ADDRESS> <TCP_PORT><br />
</pre><br />
<br />
{{Review|Where should I execute this? It would be nice to put a background colour on the boxes to indicate whether the command is executed on the RB5 or on another machine|lleon}}<br />
<br />
The supported commands include:<br />
<br />
'''For PTZR control:<br />
'''<br />
<br />
* zoom<br />
* rotate<br />
* tilt<br />
* pan_level<br />
<br />
'''For the media server:<br />
'''<br />
<br />
* play<br />
* pause<br />
* stop<br />
<br />
'''Sample usage:<br />
'''<br />
<pre><br />
nc 192.168.23.180 9999<br />
zoom=5<br />
rotate=1<br />
tilt=3<br />
pan_level=4<br />
play<br />
pause<br />
stop<br />
</pre><br />
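The same commands can be sent from a script instead of typing them into <code>nc</code>. Below is a minimal sketch, assuming the newline-terminated command format of the sample session above:<br />

```python
import socket

def send_command(host, port, command):
    """Send one control command (e.g. 'zoom=5' or 'play') to the demo's
    TCP server, terminating it with a newline as when typing into nc."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((command + "\n").encode())
```

For example, <code>send_command("192.168.23.180", 9999, "zoom=5")</code> mirrors the first line of the sample session.<br />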
<br />
=== Capture metadata ===<br />
Finally, if you want to inspect the received metadata, you can use the seiextract element as follows:<br />
<br />
<pre><br />
GST_DEBUG=*seiextract*:MEMDUMP gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:5000/stream1 ! rtph265depay ! video/x-h265 ! seiextract ! h265parse ! avdec_h265 ! videoconvert ! autovideosink<br />
</pre><br />
<br />
{{Review|Indicate where I should execute this code|lleon}}<br />
<br />
{{Review|Add a section: Interested in this and more? Contact us ... And summarise a bit our services and a link to contact us|lleon}}<br />
<br />
<noinclude><br />
{{Qualcomm Robotics RB5/Foot|Demos|Demos/Smart_Camera}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Qualcomm_Robotics_RB5/Demos/Drone_Demo&diff=53743Qualcomm Robotics RB5/Demos/Drone Demo2024-03-27T17:55:45Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Qualcomm Robotics RB5/Head|previous=Demos|next=Demos/Smart_Camera|metakeywords=ttflite}}<br />
</noinclude><br />
<br />
== Introduction ==<br />
<br />
This demo consists of a simulated drone system on a Qualcomm Robotics RB5 that performs live capture, image processing, sensor readings, metadata, encoding and network transmission. Some of the key components it explores include:<br />
<br />
* PTZR for dynamic control of the imager<br />
* Accelerometer readings embedded as metadata in the stream<br />
* H265 encoding<br />
* RTSP streaming<br />
* A media server for media management<br />
<br />
<br />
The following image summarizes the overall functionality.<br />
<br />
[[File:Drone-demo-diagram.png|frame|center]]<br />
<br />
{{Review|Please, add a section for: Getting the Full Demo Code, and add a link to the Contact Us section bringing some details like: platform, OS, brief description of the use case. Please, also add that the demo is only available for Qualcomm RB5 and Linux Ubuntu 20.04|lleon}}<br />
<br />
= Dependencies =<br />
In order to accomplish the desired behaviour, we use some of our products with out-of-the-box functionalities that significantly reduce development time. For this demo, we will be using:<br />
<br />
* [[GStreamer_Pan_Tilt_Zoom_and_Rotate_Element | GstPTZR]]: for digital control of the imager<br />
* [[GstSEIMetadata | GstSEIMetadata]]: for sensor reading and metadata<br />
* [[GStreamer_Daemon | Gstd]]: for media server management<br />
* [[GstRtspSink | RTSPSink]]: for media transmission.<br />
<br />
<br />
{{Review|Add a brief text indicating that the demo code includes evaluation version of these dependencies. If a professional version is required, ask to contact us|lleon}}<br />
<br />
{{Review|Add a installation part. You can find it in the project's readme|lleon}}<br />
<br />
= Run the demo =<br />
<br />
Once you get the code, you should see the following files:<br />
<pre><br />
├── requirements.txt<br />
├── drone_demo.py <br />
├── rrdronedemo<br />
│ ├── __init__.py<br />
│ ├── argumenthandler.py<br />
│ ├── imu.py<br />
│ ├── pipelineentity.py<br />
│ └── tcp_server.py<br />
</pre><br />
<br />
The demo can be easily run with the following command:<br />
<pre><br />
python3 drone_demo.py<br />
</pre><br />
<br />
{{Review|Can it be executed out of the box?|lleon}}<br />
<br />
== Application parameters ==<br />
<br />
For flexible configuration, the application exposes the following arguments, which can be used to tune it:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Option<br />
! Type<br />
! Description<br />
|-<br />
| --rotate_level <br />
| float<br />
| Rotation (in degrees) about the depth axis with origin on the center of the capture region in the range [-180.0, 180.0]<br />
|-<br />
| --tilt_level <br />
| float<br />
| Translation about the y axis starting from the center of the input image in the range [-image height, image height]<br />
|-<br />
| --pan_level <br />
| float<br />
| Translation about the x axis starting from the center of the input image in the range [-image width, image width]<br />
|-<br />
| --zoom_level <br />
| float<br />
| Zoom level<br />
|-<br />
| --resolution <br />
| string<br />
| Camera input resolution {720p,1080p,4k}<br />
|-<br />
| --control_rate <br />
| int<br />
| Bitrate control method: (0) Disable (1) Constant (2) CBR-VFR (3) VBR-CFR (4) VBR-VFR (5) CQ<br />
|-<br />
| --bitrate <br />
| unsigned int<br />
| Target bitrate in bits per second (0 is no specific bitrate)<br />
|-<br />
| --idr_interval <br />
| unsigned int<br />
| IDR frame interval (0 means no specific IDR)<br />
|-<br />
| --rtsp-port <br />
| unsigned int<br />
| RTSP port<br />
|-<br />
| --mapping<br />
| string<br />
| RTSP mappings<br />
|-<br />
| --auth <br />
| string<br />
| Authentication and authorization. Format: user1:password1<br />
|-<br />
| --tcp-port<br />
| unsigned int<br />
| TCP server port for receiving commands<br />
|}<br />
<br />
{{Review|An example would be nice|lleon}}<br />
<br />
== Control through TCP ==<br />
This demo allows the user to control the media server through a TCP connection, which can be established using the following command:<br />
<pre><br />
nc <IP_ADDRESS> <TCP_PORT><br />
</pre><br />
<br />
{{Review|Where should I execute this? It would be nice to put a background colour on the boxes to indicate whether the command is executed on the RB5 or on another machine|lleon}}<br />
<br />
The supported commands include:<br />
<br />
'''For PTZR control:<br />
'''<br />
<br />
* zoom<br />
* rotate<br />
* tilt<br />
* pan_level<br />
<br />
'''For the media server:<br />
'''<br />
<br />
* play<br />
* pause<br />
* stop<br />
<br />
'''Sample usage:<br />
'''<br />
<pre><br />
nc 192.168.23.180 9999<br />
zoom=5<br />
rotate=1<br />
tilt=3<br />
pan_level=4<br />
play<br />
pause<br />
stop<br />
</pre><br />
<br />
== Capture metadata ==<br />
Finally, if you want to inspect the received metadata, you can use the seiextract element as follows:<br />
<br />
<pre><br />
GST_DEBUG=*seiextract*:MEMDUMP gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:5000/stream1 ! rtph265depay ! video/x-h265 ! seiextract ! h265parse ! avdec_h265 ! videoconvert ! autovideosink<br />
</pre><br />
<br />
{{Review|Indicate where I should execute this code|lleon}}<br />
<br />
{{Review|Add a section: Interested in this and more? Contact us ... And summarise a bit our services and a link to contact us|lleon}}<br />
<br />
<noinclude><br />
{{Qualcomm Robotics RB5/Foot|Demos|Demos/Smart_Camera}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Qualcomm_Robotics_RB5/Sensors/IMU&diff=53742Qualcomm Robotics RB5/Sensors/IMU2024-03-27T17:50:46Z<p>Spalli: /* Accelerometer + Gyroscope */</p>
<hr />
<div><noinclude><br />
{{Qualcomm Robotics RB5/Head|previous=Sensors|next=Demos|metakeywords=ttflite}}<br />
</noinclude><br />
<br />
__TOC__<br />
== IMU ==<br />
The Qualcomm Robotics RB5 Platform comes with the ICM-42688-P, a 6-axis IMU that combines a 3-axis gyroscope and a 3-axis accelerometer. In this section, we will learn how to extract information on angular velocity, linear acceleration, and timestamp metadata. <br />
<br />
=== Hardware configuration ===<br />
Before moving forward with the software configuration, ensure you have the proper hardware configuration for your setup. More specifically, check the DIP switch settings for your specific board.<br />
<br />
You can verify if the accelerometer is working by using the following binary:<br />
<pre><br />
ssc_drva_test -sensor=accel -duration=5 -sample_rate=50<br />
</pre><br />
You should see an output similar to the following:<br />
<pre><br />
39 ssc_drva_test version 1.13<br />
39 ssc_drva_test -sensor=accel -duration=5 -sample_rate=50 <br />
39 handle_event <br />
39 event_cb attribute event for da_test<br />
39 handle_event <br />
39 event_cb attribute event for da_test<br />
39 using da_test name=da_test, suid = [high addeaddeaddeadde, low addeaddeaddeadde<br />
39 enter send_memory_log_req cookie: 39<br />
39 exit send_memory_log_req<br />
39 enter da_test runner<br />
39 handle_event <br />
39 -time_to_first_event=2260387<br />
39 -time_to_last_event=-147433<br />
39 -sample_ts=179117069042<br />
39 -total_samples=248<br />
39 -avg_delta=377490<br />
39 -recvd_phy_config_sample_rate=50<br />
39 -random_seed_used=2927366375<br />
39 -num_request_sent=2<br />
39 -first_sample_timestamp=179023325647<br />
39 handle_event <br />
39 received event: PASS<br />
39 enter send_memory_log_req cookie: 39<br />
39 exit send_memory_log_req<br />
39 PASS<br />
</pre><br />
<br />
=== Accelerometer + Gyroscope ===<br />
<br />
{{Ambox<br />
|type=notice<br />
|small=left<br />
|issue='''Note:''' You can refer to our Drone Demo for information on how to get a sample code that implements the steps described in this section.<br />
|style=width:unset;<br />
}}<br />
<br />
If you are running the Qualcomm RB5 software environment, you will already have existing IMU interfaces. First, you will need to check if the imud service is running. You can do so by running the following command:<br />
<br />
<pre><br />
$ systemctl status imud<br />
</pre><br />
The output should look similar to this:<br />
<pre><br />
● imud.service - imud Service<br />
Loaded: loaded (/etc/imud.sh; static; vendor preset: enabled)<br />
Active: active (running) since Wed 2024-03-20 16:35:42 UTC; 2h 50min ago<br />
Process: 876 ExecStart=/etc/imud.sh start (code=exited, status=0/SUCCESS)<br />
Main PID: 891 (imud)<br />
Tasks: 4 (limit: 6291)<br />
Memory: 4.9M<br />
CGroup: /system.slice/imud.service<br />
└─891 /sbin/imud<br />
</pre><br />
<br />
This service provides an interface for sending commands to the IMU through sockets, specifically the '''/run/imud_socket'''. <br />
<br />
''' Initialization '''<br />
<br />
Once you establish the connection to the socket, you can configure your IMU. Each sensor has a specific ID and a configurable sample rate. <br />
<br />
{| class="wikitable"<br />
|-<br />
! Sensor<br />
! Sensor ID<br />
! Sample rate<br />
|-<br />
| Accelerometer<br />
| 0<br />
| 1-1000 Hz<br />
|-<br />
| Gyroscope<br />
| 1<br />
| 1-1000 Hz<br />
|}<br />
<br />
Each message also has a specific ID. The initialization process requires the following commands:<br />
{| class="wikitable"<br />
|-<br />
! Command<br />
! Command ID<br />
|-<br />
| START<br />
| 2<br />
|-<br />
| STOP<br />
| 3<br />
|-<br />
| CONFIG RATE<br />
| 12<br />
|-<br />
| CONFIG DATATYPE<br />
| 13<br />
|}<br />
<br />
* To configure the accelerometer<br />
** Send configuration type message with accelerometer ID (0)<br />
** Send configuration message to configure sample rate (1-1000 Hz)<br />
<br />
* To configure the gyroscope<br />
** Send configuration type message with gyroscope ID (1)<br />
** Send configuration message to configure sample rate (1-1000 Hz)<br />
<br />
* To start streaming data<br />
** Send start command with accelerometer ID<br />
** Send start command with gyroscope ID<br />
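As a rough illustration, the configuration and start sequence above could look like the following Python sketch. The actual wire format of the imud command messages is not documented on this page, so the packing used here (three little-endian 32-bit unsigned integers: command ID, sensor ID, value) and the helper names are purely hypothetical — check the imud reference for the real message layout.

```python
import socket
import struct

# Command and sensor IDs from the tables above
START, STOP = 2, 3
CONFIG_RATE, CONFIG_DATATYPE = 12, 13
ACCEL_ID, GYRO_ID = 0, 1

IMUD_SOCKET = "/run/imud_socket"


def make_cmd(command_id, sensor_id, value=0):
    # HYPOTHETICAL packing: 3 x uint32, little-endian. The real imud
    # message format may differ; this only illustrates the sequence.
    return struct.pack("<III", command_id, sensor_id, value)


def init_imu(sample_rate_hz=200):
    # Connect to the imud Unix socket and configure both sensors
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(IMUD_SOCKET)
    for sensor in (ACCEL_ID, GYRO_ID):
        # Configuration type message, then sample rate (1-1000 Hz)
        sock.sendall(make_cmd(CONFIG_DATATYPE, sensor))
        sock.sendall(make_cmd(CONFIG_RATE, sensor, sample_rate_hz))
    # Start streaming on both sensors
    for sensor in (ACCEL_ID, GYRO_ID):
        sock.sendall(make_cmd(START, sensor))
    return sock
```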
<br />
Now that we are producing data, the second interface we need is the memory-mapped IMU output: /run/imu_map. By memory mapping this file, we can read the sensor's output data directly. The incoming data is structured as follows:<br />
<br />
<br />
{| class="wikitable"<br />
|-<br />
! IMU Output data<br />
|-<br />
| <br />
Acceleration X (float)<br />
Acceleration Y (float)<br />
Acceleration Z (float)<br />
Accel timestamp (unsigned int 64)<br />
<br />
Angular velocity X (float)<br />
Angular velocity Y (float)<br />
Angular velocity Z (float)<br />
Gyro timestamp (unsigned int 64)<br />
|}<br />
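Reading the memory-mapped output could then be sketched as below. The field order follows the table above; the byte-level layout is an assumption (little-endian, three 32-bit floats followed by a 64-bit timestamp, with no padding) — the real structure may include alignment padding before each timestamp.

```python
import mmap
import struct

IMU_MAP = "/run/imu_map"

# 3 floats + uint64 for the accelerometer, then the same for the gyro.
# '<' = little-endian with no padding; this is an ASSUMPTION -- the real
# C struct may pad each 8-byte timestamp to an 8-byte boundary.
SAMPLE_FMT = "<3fQ3fQ"
SAMPLE_SIZE = struct.calcsize(SAMPLE_FMT)


def parse_sample(buf):
    # Unpack one IMU sample following the field order in the table above
    ax, ay, az, accel_ts, gx, gy, gz, gyro_ts = struct.unpack_from(SAMPLE_FMT, buf)
    return {"accel": (ax, ay, az), "accel_ts": accel_ts,
            "gyro": (gx, gy, gz), "gyro_ts": gyro_ts}


def read_imu_sample(path=IMU_MAP):
    # Map the shared file read-only and parse the first sample
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), SAMPLE_SIZE, prot=mmap.PROT_READ) as m:
            return parse_sample(m[:SAMPLE_SIZE])
```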
<br />
{{Review|Can we add a link to the sample application you were using and how to compile it? I think it can be useful for some people|lleon}}<br />
<br />
{{Review|I would also add a section to market us: Looking for help in sensor's bring up? and put the link to Contact Us|lleon}}<br />
<br />
<noinclude><br />
{{Qualcomm Robotics RB5/Foot|Sensors|Demos}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Parameters&diff=53708Getting Started with ROS on Embedded Systems/User Guide/C++/Parameters2024-03-26T19:24:13Z<p>Spalli: /* Reading from C++ */</p>
<hr />
<div>{{Getting Started with ROS on Embedded Systems/Head|previous=User Guide/C++/Launch_files|next=Examples|metakeywords=ROS}}<br />
<br />
== Introduction ==<br />
This wiki is based on: http://wiki.ros.org/Parameter%20Server<br />
<br />
The ROS parameter server is a shared dictionary that nodes use to retrieve parameters at runtime; it is normally used for static configuration data. It is meant to be global so that tools can read and modify it. <br />
<br />
== Parameters ==<br />
Parameters follow the standard ROS naming convention and are organized hierarchically, so they are accessed as a tree. As an example from the ROS website, you can have the following parameters:<br />
<br />
<syntaxhighlight lang="json"><br />
/camera/left/name: leftcamera<br />
/camera/left/exposure: 1<br />
/camera/right/name: rightcamera<br />
/camera/right/exposure: 1.1<br />
</syntaxhighlight><br />
<br />
You can also get one dictionary with depth 1 from /camera/left:<br />
<br />
<syntaxhighlight lang="json"><br />
name: leftcamera<br />
exposure: 1<br />
</syntaxhighlight><br />
<br />
Or depth 2 from the whole cameras:<br />
<br />
<syntaxhighlight lang="json"><br />
left: { name: leftcamera, exposure: 1 }<br />
right: { name: rightcamera, exposure: 1.1 }<br />
</syntaxhighlight><br />
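The hierarchical access shown above can be mimicked with a small Python sketch that groups flat parameter keys into nested dictionaries. This only illustrates the naming scheme; it is not how the actual parameter server is implemented.

```python
def get_subtree(params, prefix):
    """Return the nested dictionary rooted at `prefix` from a
    flat mapping of '/a/b/c' -> value."""
    prefix = prefix.rstrip("/") + "/"
    tree = {}
    for key, value in params.items():
        if not key.startswith(prefix):
            continue
        parts = key[len(prefix):].split("/")
        node = tree
        for part in parts[:-1]:
            # Create intermediate levels of the tree as needed
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return tree


params = {
    "/camera/left/name": "leftcamera",
    "/camera/left/exposure": 1,
    "/camera/right/name": "rightcamera",
    "/camera/right/exposure": 1.1,
}
```

For example, `get_subtree(params, "/camera/left")` yields the depth-1 dictionary shown above, and `get_subtree(params, "/camera")` yields the depth-2 one.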
<br />
== Loading parameters ==<br />
<br />
The roslaunch example from the launch section ([https://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Launch_files#Examples here]) loads all of the YAML files specified into the parameter server.<br />
<br />
== Reading from C++ ==<br />
<br />
To read the parameters from C++, you can use the following code:<br />
<br />
<syntaxhighlight lang="cpp" line><br />
template <typename T><br />
bool GetParamWithWarning(const ros::NodeHandle& node_handle, const std::string& param, T& dest) {<br />
bool res = false;<br />
std::string actual_param;<br />
if (node_handle.searchParam(param, actual_param)) {<br />
dest = node_handle.param<T>(actual_param, T());<br />
res = true;<br />
} else {<br />
std::cout << "[Warning] Missing parameter: " << param.c_str() << std::endl;<br />
}<br />
return res;<br />
}<br />
</syntaxhighlight><br />
<br />
This searches for the parameter using the node handle and, if found, stores its value in the destination variable.<br />
<br><br />
{{Getting Started with ROS on Embedded Systems/Foot|User Guide/C++/Launch_files|Examples}}</div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Launch_files&diff=53707Getting Started with ROS on Embedded Systems/User Guide/C++/Launch files2024-03-26T19:23:44Z<p>Spalli: </p>
<hr />
<div>{{Getting Started with ROS on Embedded Systems/Head|previous=User Guide/C++/Publishers_and_subscribers|next=User Guide/C++/Parameters|metakeywords=ROS}}<br />
<br />
== Introduction ==<br />
This wiki is based on the following pages: <br />
<br />
# http://wiki.ros.org/roslaunch<br />
# http://wiki.ros.org/roslaunch/XML<br />
<br />
ROS launch files are XML files that, used with the roslaunch tool, ease the launching of multiple nodes with multiple configurations. This makes it easier to run several nodes at once: you don't need to start roscore manually, and the tool simplifies node grouping, topic renaming, node naming, parameter loading, and many other tasks. <br />
<br />
== Examples ==<br />
<br />
Simple launch file with parameter loading<br />
<br />
<syntaxhighlight lang="xml" line><br />
<br />
<launch><br />
<group ns="group_namespace"><br />
<rosparam command="load" file="$(find node_package)/configuration/node_parameters.yaml" ns="node_parameters_namespace"/><br />
<node name="$(arg node_name)" pkg="node_package" type="listener" required="true" output="screen"/><br />
</group><br />
</launch><br />
<br />
</syntaxhighlight><br />
<br />
# This launch file receives one argument: the node name. <br />
# It creates a node under group_namespace/node_name, and all the logs are output to the caller's terminal.<br />
# It also loads the parameters YAML file under the node_parameters_namespace/ namespace. <br />
# You can add more nodes to this launch file too; the only requirement is that they have different names. <br />
<br />
The launch files should be located inside a launch subfolder inside each package. <br />
<br />
To launch it, you can use <br />
<br />
<syntaxhighlight lang="bash"><br />
roslaunch <package name> <launch file> node_name:=my_node<br />
</syntaxhighlight><br />
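As a rough sketch of what roslaunch does with the `$(arg node_name)` substitution seen above (a simplified illustration only — the real roslaunch also handles `$(find)`, `$(env)`, and other substitutions):

```python
import re


def resolve_args(text, args):
    """Replace every $(arg name) occurrence in `text` with its value
    from the `args` dictionary, as roslaunch does for launch files."""
    def repl(match):
        name = match.group(1)
        if name not in args:
            raise KeyError("undefined launch argument: " + name)
        return args[name]
    return re.sub(r"\$\(arg (\w+)\)", repl, text)
```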
<br />
{{Getting Started with ROS on Embedded Systems/Foot|User Guide/C++/Publishers_and_subscribers|User Guide/C++/Parameters}}</div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Messages&diff=53706Getting Started with ROS on Embedded Systems/User Guide/C++/Messages2024-03-26T19:18:45Z<p>Spalli: </p>
<hr />
<div>{{Getting Started with ROS on Embedded Systems/Head|previous=User Guide/C++/Topics|next=User Guide/C++/Names|metakeywords=ROS}}<br />
<br />
== Introduction ==<br />
This wiki is based on the following ROS page: http://wiki.ros.org/msg<br />
<br />
Messages are the main form of communication over a topic: the publisher pushes a message of some defined type, and the subscriber receives messages of that same type. <br />
<br />
Messages are normally described in a simplified definition file that ROS then uses to generate code in C++ (and other languages), which ends up being a simple struct. Message descriptions are located inside the msg/ folder in the package and end with the .msg extension. <br />
<br />
Message types are referred to using package resource names. For example, the file rr_msgs/msg/RRMessage.msg is commonly referred to as rr_msgs/RRMessage.<br />
<br />
== Message description ==<br />
<br />
Messages have two parts in the .msg file: fields and constants<br />
<br />
=== Message fields ===<br />
<br />
Fields are the data sent inside the message; they are defined as type/name pairs:<br />
<br />
<pre><br />
type1 name1<br />
type2 name2<br />
type3 name3<br />
</pre><br />
<br />
==== Types ====<br />
<br />
The types can be primitive: <br />
<br />
<pre><br />
bool, int8, uint8, int16, uint16, int32, uint32, int64, uint64, float32, float64, string, time, duration<br />
</pre><br />
<br />
Or they can be fixed-length or variable-length arrays of those primitives (and of other message types), for example <code>float32[9]</code> (fixed length) or <code>int32[]</code> (variable length).<br />
<br />
==== Names ====<br />
<br />
The names of the fields determine how to access the data in the target language, just as one would do with structs. Because field names must be translated to multiple languages, they are restricted to the following pattern: [a-zA-Z][a-zA-Z1-9_]*.<br />
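The naming rule can be checked mechanically; the sketch below applies the exact pattern quoted above (note that, as written on the upstream ROS wiki, the pattern does not list the digit 0):

```python
import re

# Exact field-name pattern quoted above: [a-zA-Z][a-zA-Z1-9_]*
FIELD_NAME_RE = re.compile(r"[a-zA-Z][a-zA-Z1-9_]*")


def is_valid_field_name(name):
    # fullmatch ensures the whole name matches the pattern
    return FIELD_NAME_RE.fullmatch(name) is not None
```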
<br />
== Creating a message ==<br />
<br />
To create a message package, create a package as described in the package set up section. <br />
<br />
Then you will need to add the message generation and message runtime in the package configuration: <br />
<syntaxhighlight lang="xml"><br />
<build_depend>message_generation</build_depend><br />
<exec_depend>message_runtime</exec_depend><br />
</syntaxhighlight><br />
<br />
And you will also need to add to the message CMakeLists.txt:<br />
<br />
<syntaxhighlight lang="cmake" line><br />
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs genmsg message_generation)<br />
add_message_files(FILES RRMessage.msg RRMessage2.msg)<br />
generate_messages(DEPENDENCIES std_msgs)<br />
<br />
###################################<br />
## catkin specific configuration ##<br />
###################################<br />
<br />
include_directories(${catkin_INCLUDE_DIRS})<br />
catkin_package(<br />
CATKIN_DEPENDS message_runtime roscpp std_msgs<br />
)<br />
</syntaxhighlight><br />
<br />
This assumes that we have a RRMessage.msg and RRMessage2.msg on the package msg/ folder.<br />
<br />
== Using messages from other nodes ==<br />
<br />
See current topics:<br />
<br />
<syntaxhighlight lang="bash"><br />
rostopic list<br />
</syntaxhighlight><br />
<br />
{{Getting Started with ROS on Embedded Systems/Foot|User Guide/C++/Topics|User Guide/C++/Names}}</div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Initialization&diff=53705Getting Started with ROS on Embedded Systems/User Guide/C++/Initialization2024-03-26T19:12:39Z<p>Spalli: </p>
<hr />
<div>{{Getting Started with ROS on Embedded Systems/Head|previous=User Guide/C++/Package set up|next=User Guide/C++/Topics|metakeywords=ROS}}<br />
<br />
== Introduction ==<br />
Based on the previous example, this section covers the initialization part of the code.<br />
<br />
== Initialization ==<br />
Every ROS 2 program must first initialize the ROS client library:<br />
<br />
<syntaxhighlight lang="c++" line><br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
...<br />
</syntaxhighlight><br />
<br />
== Starting ==<br />
<br />
After initialization, we need to create a node. A basic node looks like this:<br />
<br />
<syntaxhighlight lang="c++" line><br />
class MinimalPublisher : public rclcpp::Node<br />
{<br />
public:<br />
  MinimalPublisher() : Node(NODE_NAME)<br />
{<br />
publisher_ = this->create_publisher<MSG_TYPE>(TOPIC, PUB_QUEUE);<br />
}<br />
<br />
private:<br />
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;<br />
};<br />
</syntaxhighlight><br />
<br />
* NODE_NAME: the name of the node; it helps to locate the node in the ROS logger output and with the ROS command-line tools.<br />
* TOPIC: the topic name where the node will publish its messages.<br />
* PUB_QUEUE: the length of the publisher queue.<br />
* MSG_TYPE: the data type of the ROS messages.<br />
<br />
In this case we have a simple timer-based publisher that publishes an increasing counter every 500 ms. Most importantly, this is the line needed to publish data:<br />
<br />
<syntaxhighlight lang="c++"><br />
publisher_->publish(message);<br />
</syntaxhighlight><br />
<br />
== Running the node ==<br />
ROS executables have a main loop, and to actually run anything we need to attach a node to it. The following APIs are available:<br />
<br />
* rclcpp::spin(std::make_shared<MinimalPublisher>()): runs the node until shutdown.<br />
* rclcpp::spin_some(std::make_shared<MinimalPublisher>()): processes any work currently available and then returns.<br />
* rclcpp::spin_until_future_complete: spins until the given future completes, then stops and returns.<br />
<br />
== Running multiple nodes ==<br />
<br />
It's possible to run multiple nodes in the same executable. To do so, you can use a MultiThreadedExecutor and add N nodes to it:<br />
<br />
<syntaxhighlight lang="c++"><br />
rclcpp::executors::MultiThreadedExecutor executor;<br />
</syntaxhighlight><br />
<br />
The following code shows a dual-node publisher example:<br />
<br />
<syntaxhighlight lang="c++" line><br />
class MinimalPublisher : public rclcpp::Node<br />
{<br />
public:<br />
MinimalPublisher(std::string node_name, std::string topic_name) : Node(node_name)<br />
{<br />
publisher_ = this->create_publisher<std_msgs::msg::String>(topic_name, 10);<br />
timer_ = this->create_wall_timer(500ms, std::bind(&MinimalPublisher::timer_callback, this));<br />
count_ = 0;<br />
}<br />
<br />
private:<br />
void timer_callback()<br />
{<br />
auto message = std_msgs::msg::String();<br />
message.data = "Hello, world! " + std::to_string(count_++);<br />
RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());<br />
publisher_->publish(message);<br />
}<br />
<br />
rclcpp::TimerBase::SharedPtr timer_;<br />
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;<br />
size_t count_;<br />
};<br />
<br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
rclcpp::executors::MultiThreadedExecutor executor;<br />
auto node1 = std::make_shared<MinimalPublisher>("PUB_1", "TOPIC_1");<br />
auto node2 = std::make_shared<MinimalPublisher>("PUB_2", "TOPIC_2");<br />
executor.add_node(node1);<br />
executor.add_node(node2);<br />
<br />
executor.spin();<br />
rclcpp::shutdown();<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
And when executing we can see both nodes sending data:<br />
<br />
<syntaxhighlight lang="c++" line><br />
^C[INFO] [1710200815.973955444] [rclcpp]: signal_handler(signum=2)<br />
root@vision:/test/build/test# ./talker <br />
[INFO] [1710200998.614902468] [PUB_1]: Publishing: 'Hello, world! 0'<br />
[INFO] [1710200998.623128763] [PUB_2]: Publishing: 'Hello, world! 0'<br />
[INFO] [1710200999.114898206] [PUB_1]: Publishing: 'Hello, world! 1'<br />
[INFO] [1710200999.123280536] [PUB_2]: Publishing: 'Hello, world! 1'<br />
[INFO] [1710200999.614873945] [PUB_1]: Publishing: 'Hello, world! 2'<br />
[INFO] [1710200999.623183985] [PUB_2]: Publishing: 'Hello, world! 2'<br />
[INFO] [1710201000.114919412] [PUB_1]: Publishing: 'Hello, world! 3'<br />
</syntaxhighlight><br />
<br />
If needed, we can mix publisher and subscriber nodes in the same executable using the same approach.<br />
<br />
{{Getting Started with ROS on Embedded Systems/Foot|User Guide/C++/Package set up|User Guide/C++/Topics}}</div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Package_set_up&diff=53704Getting Started with ROS on Embedded Systems/User Guide/C++/Package set up2024-03-26T19:09:03Z<p>Spalli: </p>
<hr />
<div>{{Getting Started with ROS on Embedded Systems/Head|previous=C++ User Guide|next=User Guide/C++/Initialization|metakeywords=ROS}}<br />
<br />
__TOC__<br />
<br />
== Introduction ==<br />
This serves as an introduction to creating and building simple packages using ROS with the colcon build system. <br />
<br />
A ROS package is simply a folder located under a workspace that can be built with a build tool such as colcon or catkin. This guide uses the colcon build system.<br />
<br />
A package can be created using the ros2 command, like:<br />
<br />
<syntaxhighlight lang="bash"><br />
ros2 pkg create --license Apache-2.0 <pkg-name> --dependencies [deps]<br />
</syntaxhighlight><br />
<br />
A project setup for colcon generally will look like:<br />
<br />
<pre><br />
root@vision:/test# tree .<br />
.<br />
├── CMakeLists.txt<br />
├── include<br />
│ └── test<br />
├── LICENSE<br />
├── package.xml<br />
└── src<br />
<br />
3 directories, 3 files<br />
</pre><br />
<br />
To check that everything is in place, we can build the test package:<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /test<br />
colcon build<br />
</syntaxhighlight><br />
<br />
If all goes well, you should see:<br />
<br />
<syntaxhighlight lang="bash"><br />
root@vision:/test# colcon build<br />
Starting >>> test <br />
Finished <<< test [5.37s] <br />
<br />
Summary: 1 package finished [5.74s]<br />
</syntaxhighlight><br />
<br />
=== Sample publisher ===<br />
We are going to make a simple text publisher and receiver, following this [https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Writing-A-Simple-Cpp-Publisher-And-Subscriber.html guide].<br />
The first step in configuring the package is to modify the package.xml; you will have entries similar to the following:<br />
<br />
<syntaxhighlight lang="xml" line><br />
<?xml version="1.0"?><br />
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?><br />
<package format="3"><br />
<name>test</name><br />
<version>0.0.0</version><br />
<description>TODO: Package description</description><br />
<maintainer email="root@todo.todo">root</maintainer><br />
<license>Apache-2.0</license><br />
<br />
<buildtool_depend>ament_cmake</buildtool_depend><br />
<br />
  <test_depend>ament_lint_auto</test_depend><br />
<test_depend>ament_lint_common</test_depend><br />
<br />
<export><br />
<build_type>ament_cmake</build_type><br />
</export><br />
</package><br />
</syntaxhighlight><br />
<br />
Now replace the TODOs with your information. Then add the following dependencies:<br />
<br />
<syntaxhighlight lang="xml" line><br />
<depend>rclcpp</depend><br />
<depend>std_msgs</depend><br />
</syntaxhighlight><br />
<br />
Now let's add the publisher source. Create the file "src/sample_pub.cpp" and add the following:<br />
<br />
<syntaxhighlight lang="c++" line><br />
#include <chrono><br />
#include <functional><br />
#include <memory><br />
#include <string><br />
<br />
#include "rclcpp/rclcpp.hpp"<br />
#include "std_msgs/msg/string.hpp"<br />
<br />
using namespace std::chrono_literals;<br />
<br />
/* This example creates a subclass of Node and uses std::bind() to register a<br />
* member function as a callback from the timer. */<br />
<br />
class MinimalPublisher : public rclcpp::Node<br />
{<br />
public:<br />
MinimalPublisher()<br />
: Node("minimal_publisher"), count_(0)<br />
{<br />
publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);<br />
timer_ = this->create_wall_timer(<br />
500ms, std::bind(&MinimalPublisher::timer_callback, this));<br />
}<br />
<br />
private:<br />
void timer_callback()<br />
{<br />
auto message = std_msgs::msg::String();<br />
message.data = "Hello, world! " + std::to_string(count_++);<br />
RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());<br />
publisher_->publish(message);<br />
}<br />
rclcpp::TimerBase::SharedPtr timer_;<br />
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;<br />
size_t count_;<br />
};<br />
<br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
rclcpp::spin(std::make_shared<MinimalPublisher>());<br />
rclcpp::shutdown();<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
The code has 3 main sections:<br />
* ROS initialization:<br />
<syntaxhighlight lang="c++"><br />
rclcpp::init(argc, argv);<br />
</syntaxhighlight><br />
<br />
* Node declaration, where we declare our ros node, that extends from the base rclcpp::Node class.<br />
<br />
* Main thread initialization:<br />
<syntaxhighlight lang="c++" line><br />
rclcpp::spin(std::make_shared<MinimalPublisher>());<br />
</syntaxhighlight><br />
Now we can add our listener, create the file "src/sample_listener.cpp" and add the following:<br />
<syntaxhighlight lang="c++" line><br />
#include <memory><br />
<br />
#include "rclcpp/rclcpp.hpp"<br />
#include "std_msgs/msg/string.hpp"<br />
using std::placeholders::_1;<br />
<br />
class MinimalSubscriber : public rclcpp::Node<br />
{<br />
public:<br />
MinimalSubscriber()<br />
: Node("minimal_subscriber")<br />
{<br />
subscription_ = this->create_subscription<std_msgs::msg::String>(<br />
"topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));<br />
}<br />
<br />
private:<br />
void topic_callback(const std_msgs::msg::String & msg) const<br />
{<br />
RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg.data.c_str());<br />
}<br />
rclcpp::Subscription<std_msgs::msg::String>::SharedPtr subscription_;<br />
};<br />
<br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
rclcpp::spin(std::make_shared<MinimalSubscriber>());<br />
rclcpp::shutdown();<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
The structure is similar to the publisher code, with the same three main sections, but this node listens to a topic instead of writing to it, using:<br />
<br />
<syntaxhighlight lang="c++"><br />
create_subscription<std_msgs::msg::String>("topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));<br />
</syntaxhighlight><br />
<br />
Where the node will listen to "topic".<br />
<br><br />
After that, we can take a look at the CMakeLists.txt file:<br />
<br />
<syntaxhighlight lang="cmake" line><br />
cmake_minimum_required(VERSION 3.8)<br />
project(test)<br />
<br />
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")<br />
add_compile_options(-Wall -Wextra -Wpedantic)<br />
endif()<br />
<br />
# find dependencies<br />
find_package(ament_cmake REQUIRED)<br />
# uncomment the following section in order to fill in<br />
# further dependencies manually.<br />
# find_package(<dependency> REQUIRED)<br />
<br />
if(BUILD_TESTING)<br />
find_package(ament_lint_auto REQUIRED)<br />
# the following line skips the linter which checks for copyrights<br />
# comment the line when a copyright and license is added to all source files<br />
set(ament_cmake_copyright_FOUND TRUE)<br />
# the following line skips cpplint (only works in a git repo)<br />
# comment the line when this package is in a git repo and when<br />
# a copyright and license is added to all source files<br />
set(ament_cmake_cpplint_FOUND TRUE)<br />
ament_lint_auto_find_test_dependencies()<br />
endif()<br />
<br />
ament_package()<br />
<br />
</syntaxhighlight><br />
<br />
First, add the dependencies in the dependencies section:<br />
<br />
<syntaxhighlight lang="cmake"><br />
find_package(rclcpp REQUIRED)<br />
find_package(std_msgs REQUIRED)<br />
</syntaxhighlight><br />
<br />
Now we can add our sources and set the executable targets after the find_package statements:<br />
<br />
<syntaxhighlight lang="cmake" line><br />
# publisher code<br />
add_executable(talker src/sample_pub.cpp)<br />
ament_target_dependencies(talker rclcpp std_msgs)<br />
# listener code<br />
add_executable(listener src/sample_listener.cpp)<br />
ament_target_dependencies(listener rclcpp std_msgs)<br />
</syntaxhighlight><br />
<br />
Now we can fetch the dependencies:<br />
<br />
<syntaxhighlight lang="bash"><br />
rosdep install -i --from-path src --rosdistro humble -y<br />
</syntaxhighlight><br />
<br />
After that finishes, we can compile it:<br />
<br />
<syntaxhighlight lang="bash"><br />
colcon build<br />
</syntaxhighlight><br />
<br />
Now we can either use the executables directly from the build/test folder or install them by sourcing the install script: ". install/setup.bash".<br />
<br><br />
After that, we can open two terminals and run one executable in each; remember to run "source /ros_entrypoint.sh" in every new terminal. If we run the talker in one and the listener in the other, we will see something like:<br />
<br />
<br><br />
[[File:Ros2 simple example.png|thumbnail|center|780px|alt=Simple listener and publisher example]]<br />
<br><br />
<br />
{{Getting Started with ROS on Embedded Systems/Foot|C++ User Guide|User Guide/C++/Initialization}}</div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Performance/Jetson_AGX_Xavier&diff=53703Spherical Video PTZ/Performance/Jetson AGX Xavier2024-03-26T19:01:35Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
<br />
== Benchmark environment ==<br />
<br />
The measurements are taken considering the following criteria:<br />
<br />
* Average behaviour: measurements considering typical image processing pipelines.<br />
<br />
== Benchmarking ==<br />
<br />
'''Instruments:'''<br />
<br />
* ''CPU'': RidgeRun Profiler<br />
* ''RAM'': RidgeRun Profiler<br />
* ''GPU'': Jtop<br />
<br />
'''Pipelines:'''<br />
<br />
''Average Behaviour: CPU and RAM (640x480):''<br />
<br />
<pre><br />
<br />
# Without the element <br />
gst-launch-1.0 <br />
<br />
# With the element<br />
gst-launch-1.0<br />
<br />
# Client pipeline<br />
gst-launch-1.0 playbin <br />
<br />
# RidgeRun Profiler<br />
rr-profiler-headless -a -p [process_number] -T 100 > data.txt<br />
<br />
</pre><br />
<br />
<br />
<br />
''Average Behaviour: CPU and RAM (1280x720):''<br />
<br />
<br />
''Average Behaviour: CPU and RAM (1920x1080):''<br />
<br />
<br />
''Average Behaviour: CPU and RAM (3840x2160/4K):''<br />
<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples/GstD&diff=53702Spherical Video PTZ/Examples/GstD2024-03-26T18:59:12Z<p>Spalli: </p>
<hr />
<div><br />
<noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
<br />
==Using CPU==<br />
* With this pipeline, you can take an input image, dynamically adjust PTZ properties, and save the output to a file sink.<br />
Ensure that the gstd daemon is running in the background:<br />
<pre><br />
pipeline_create p1 filesrc location=sample.jpg ! jpegdec ! imagefreeze ! videoconvert ! videoscale ! video/x-raw,format=RGBA,width=1920,height=1080 ! rrpanoramaptz name=ptz ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=./sample.ts<br />
</pre><br />
<br />
* This script enables you to process an equirectangular image, utilizing gstd to perform a horizontal panning effect and encapsulate this into a 15-second video. To use this script, follow these steps:<br />
# Copy the provided code into a file named sample.sh.<br />
# Make sure the <code>gstd</code> daemon is running in the background on your system. This is necessary for the script to function correctly as it relies on gstd to manage the GStreamer pipeline.<br />
# Make the script executable with <code>chmod +x sample.sh</code>.<br />
# Execute the script by running <code>./sample.sh <path_to_your_image.jpg></code>.<br />
<br />
<pre><br />
./sample.sh sample.jpg <br />
</pre><br />
<br />
<syntaxhighlight lang="bash" line><br />
#!/bin/bash<br />
<br />
if [ "$#" -ne 1 ]; then<br />
echo "Usage: $0 <path_to_image>"<br />
exit 1<br />
fi<br />
<br />
image_path="$1"<br />
counter=0<br />
loops=0<br />
duration=15 # Video duration in seconds<br />
frame_interval=0.02 # Time between frames in seconds<br />
total_frames=$(echo "scale=0; $duration / $frame_interval" | bc)<br />
<br />
gst-client pipeline_create p1 "filesrc location=${image_path} ! jpegdec ! imagefreeze ! videoconvert ! videoscale ! video/x-raw,format=RGBA,width=1920,height=1080 ! rrpanoramaptz name=ptz ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=./sample.ts"<br />
<br />
gst-client pipeline_play p1<br />
<br />
while [ $loops -lt $total_frames ]; do<br />
gst-client --quiet element_set p1 ptz pan ${counter}<br />
((counter++))<br />
if [ $counter -ge 360 ]; then<br />
counter=0<br />
fi<br />
((loops++))<br />
sleep $frame_interval<br />
done<br />
<br />
gst-client pipeline_stop p1<br />
gst-client pipeline_delete p1<br />
</syntaxhighlight><br />
<br />
You can play the video using:<br />
<pre><br />
gst-launch-1.0 filesrc location=./sample.ts ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! autovideosink<br />
</pre><br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples/Gst-launch&diff=53700Spherical Video PTZ/Examples/Gst-launch2024-03-26T18:55:14Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==Using System Memory==<br />
===Display===<br />
* '''Video Test Pattern Display:''' Generates a test pattern, applies PTZ (Pan, Tilt, Zoom) transformations, and adjusts the output size for display.<br />
<pre><br />
gst-launch-1.0 videotestsrc pattern=0 ! "video/x-raw,width=1920,height=1080" ! rrpanoramaptz zoom=2.1 ! "video/x-raw,width=1920,height=1080" ! queue ! videoconvert ! autovideosink<br />
</pre><br />
<br />
* '''Image Zoom and Display:''' Takes an equirectangular image, applies a zoom effect, and displays the result. Use this to showcase specific features of panoramic images.<br />
<pre><br />
gst-launch-1.0 filesrc location=$IMAGE_PATH ! jpegdec ! imagefreeze ! videoconvert ! video/x-raw,format=RGBA ! rrpanoramaptz zoom=1.5 ! videoconvert ! autovideosink<br />
</pre><br />
<br />
===Recording===<br />
* '''Record PTZ-Transformed Video:''' Captures an input image, applies PTZ transformations, and encodes the output into a video file. This pipeline is useful for creating panoramic videos with dynamic perspectives.<br />
<pre><br />
gst-launch-1.0 filesrc location=$IMAGE_PATH ! jpegdec ! imagefreeze ! videoconvert ! videoscale ! video/x-raw,format=RGBA,width=1920,height=1080 ! rrpanoramaptz zoom=1.5 ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=$OUTPUT_PATH<br />
</pre><br />
To decode and view the video, use:<br />
<pre><br />
gst-launch-1.0 filesrc location=$OUTPUT_PATH ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! autovideosink<br />
</pre><br />
<br />
===Streaming===<br />
* '''Live Streaming with PTZ Controls:''' Streams a live video feed with PTZ controls, encoding the content in H.264 format for UDP transmission. Useful for real-time broadcasting of panoramic content.<br />
<pre><br />
gst-launch-1.0 videotestsrc pattern=18 ! rrpanoramaptz ! nvvidconv ! nvv4l2h264enc idrinterval=30 insert-aud=true insert-sps-pps=true insert-vui=true ! h264parse ! mpegtsmux ! udpsink port=$PORT host=$HOST_IP<br />
</pre><br />
<br />
Client setup for receiving the stream:<br />
<pre><br />
gst-launch-1.0 udpsrc port=$PORT address=$HOST_IP ! queue ! tsdemux ! h264parse ! queue ! decodebin ! videoconvert ! fpsdisplaysink<br />
</pre><br />
<br />
==Using NVMM (GPU Acceleration)==<br />
<br />
===Recording===<br />
* '''GPU-Accelerated Video Recording:''' Encodes the video in H.264 format and saves it to a file. <br />
<pre><br />
gst-launch-1.0 videotestsrc pattern=0 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080" ! rrpanoramaptz zoom=1.5 ! nvvidconv ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=$OUTPUT_PATH<br />
</pre><br />
To decode and view the video, use:<br />
<pre><br />
gst-launch-1.0 filesrc location=$OUTPUT_PATH ! tsdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! autovideosink<br />
</pre><br />
<br />
===Streaming===<br />
* '''GPU-Accelerated Video Streaming:''' Streams a live video feed with PTZ controls, encoding the content in H.264 format for UDP transmission.<br />
<pre><br />
gst-launch-1.0 videotestsrc pattern=0 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080" ! rrpanoramaptz ! nvvidconv ! nvv4l2h264enc idrinterval=30 insert-aud=true insert-sps-pps=true insert-vui=true ! h264parse ! mpegtsmux ! udpsink port=$PORT host=$HOST_IP<br />
</pre><br />
<br />
Client setup for receiving the stream:<br />
<pre><br />
gst-launch-1.0 udpsrc port=$PORT address=$HOST_IP ! queue ! tsdemux ! h264parse ! queue ! decodebin ! videoconvert ! fpsdisplaysink<br />
</pre><br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/User_Guide/Quick_Start_Guide&diff=53699Spherical Video PTZ/User Guide/Quick Start Guide2024-03-26T18:51:18Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==Libpanorama==<br />
<br />
This wiki introduces basic use of the Spherical Video PTZ engine for converting equirectangular images to rectilinear format. It includes a simple example and instructions on adapting the engine to different needs. The engine makes it easy to turn panoramic images into a rectilinear view, which is useful for many projects.<br />
<br />
=== Minimal Application ===<br />
<br />
After [[Spherical Video PTZ/User Guide/Building and Installation|Building and Installation]], follow these steps:<br />
<br />
'''1.''' Download the sample images, if you haven't already.<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $SAMPLES<br />
./download_samples.sh<br />
</syntaxhighlight><br />
<br />
'''2.''' This example demonstrates the use of the Spherical Video PTZ engine to convert equirectangular images into rectilinear format. The command processes example_image.jpg, converting it from an equirectangular format to a rectilinear view; you can use any other reference image as long as it is equirectangular. Run the example as:<br />
<syntaxhighlight lang=bash line><br />
cd $LIBPANORAMA_PATH/libpanorama<br />
./builddir/examples/equirectangular_to_rectilinear $SAMPLES/example_image.jpg <br />
</syntaxhighlight><br />
<br />
'''3.''' For this example you can use the interactive Spherical Video PTZ (Pan-Tilt-Zoom) controls for dynamic exploration of panoramic images. Press the specified keys while the example is running:<br />
* Zoom In/Out: Adjust the zoom level to get a closer view or a wider perspective of the image.<br />
** In: <code>i</code><br />
** Out: <code>o</code><br />
* Pan Left/Right: Rotate the view horizontally to explore the left or right sides of the panoramic image.<br />
** Left: <code>4</code><br />
** Right: <code>6</code><br />
* Tilt Up/Down: Adjust the vertical angle of the camera to look up or down within the panoramic image.<br />
** Up: <code>8</code><br />
** Down: <code>2</code><br />
<br />
'''4.''' Press the <code>Esc</code> key to exit the program.<br />
<br />
=== Spherical Video PTZ Engine ===<br />
<br />
The Video PTZ Engine simplifies the use of PTZ controls, making it easier to integrate into your code. To get started, add the following ''includes'' and ''namespaces'' to your code:<br />
<br />
<syntaxhighlight lang=cpp><br />
#include <iostream><br />
#include <lp/allocators/cudaimage.hpp><br />
#include <lp/engines/equirectangular_to_rectilinear.hpp><br />
#include <lp/image.hpp><br />
#include <lp/io/opencv.hpp><br />
#include <lp/rgba.hpp><br />
<br />
using namespace lp;<br />
using namespace lp::io;<br />
</syntaxhighlight><br />
<br />
<br />
Once this has been done, create the engine. To do so, instantiate the class as demonstrated:<br />
<syntaxhighlight lang=cpp><br />
lp::engines::EquirectangularToRectilinear<RGBA<uint8_t>> engine;<br />
</syntaxhighlight><br />
<br />
<br />
Next, configure the parameters to be manipulated using the properties of the Spherical Video PTZ. Instantiate them as shown in the code below. The initial parameters, represented as <code>{{0.0f, 0.0f}, 2.0f}</code>, control the panning, tilting, and zooming of the image in the format <code>{{pan, tilt}, zoom}</code>. The panning and tilting values are valid in the range <code>[-π,π]</code>, while the zoom value is valid in the range <code>[0.1, 10]</code>. Additionally, <code>{dst.GetSize()}</code> is used to define the output size in <code>{width, height}</code> format, while <code>{io.GetSize()}</code> is utilized to set the input size, also in <code>{width, height}</code> format.<br />
<br />
<syntaxhighlight lang=cpp><br />
engines::EquirectangularToRectilinearParams params{{<br />
{0.0f, 0.0f},<br />
2.0f,<br />
},<br />
{<br />
dst.GetSize(),<br />
},<br />
{io.GetSize()}};<br />
</syntaxhighlight><br />
<br />
<br />
Then, set the initial parameters with the '''SetParameters''' method:<br />
<br />
<syntaxhighlight lang=cpp><br />
engine.SetParameters(params);<br />
</syntaxhighlight><br />
<br />
<br />
Finally, use the '''Process''' method with the input image as the first parameter. The second parameter will contain the result after applying the equirectangular to rectilinear projection transformation. Please note that the Engine supports both '''CudaImages''' and '''Images''' for processing. However, if you choose to use an Image, the Engine will internally allocate a Cuda buffer and copy the Image content into it, potentially affecting the application's performance. <br />
<br />
<syntaxhighlight lang=cpp><br />
lp::Image<RGBA<uint8_t>> img;<br />
lp::allocators::CudaImage<RGBA<uint8_t>> dst;<br />
<br />
engine.Process(img, dst);<br />
</syntaxhighlight><br />
<br />
<br />
Consider the following pseudo-code snippet as an example of how to use the Engine in a loop:<br />
<br />
<syntaxhighlight lang=cpp line><br />
#include <iostream><br />
#include <lp/engines/equirectangular_to_rectilinear.hpp><br />
#include <lp/image.hpp><br />
#include <lp/io/opencv.hpp><br />
#include <lp/rgba.hpp><br />
<br />
using namespace lp;<br />
using namespace lp::io;<br />
<br />
int main(int argc, char **argv) {<br />
ImageSize size{500, 500}; /* size of the output image */<br />
const size_t rawsize = size.PixelCount();<br />
<br />
  OpenCV<RGBA<uint8_t>> io;<br />
  io.Open(argv[1]); /* path to the input equirectangular image */<br />
Image<RGBA<uint8_t>> img = io.ReadImage();<br />
Image<RGBA<uint8_t>> dst =<br />
Image(size, std::shared_ptr<RGBA<uint8_t>[]>(new RGBA<uint8_t>[rawsize]));<br />
<br />
engines::EquirectangularToRectilinear<RGBA<uint8_t>> engine;<br />
engines::EquirectangularToRectilinearParams params{{<br />
{0.0f, 0.0f},<br />
2.0f,<br />
},<br />
{<br />
dst.GetSize(),<br />
},<br />
{io.GetSize()}};<br />
<br />
engine.SetParameters(params);<br />
<br />
for (int i = 1; i <= 100; i++) {<br />
/* Process image */<br />
engine.Process(img, dst);<br />
<br />
/* Do anything you want with dst (display, stream, save, ...) */<br />
<br />
/* Update parameters */<br />
    params.equirectangular.r -= 0.05; /* zoom out */<br />
    params.equirectangular.viewpoint += Point2D{0.05f, 0.00f}; /* pan right */<br />
    params.equirectangular.viewpoint += Point2D{0.00f, 0.05f}; /* tilt up */<br />
engine.SetParameters(params);<br />
}<br />
}<br />
</syntaxhighlight><br />
<br />
==GstRrPanoramaptz==<br />
<br />
After [[Spherical Video PTZ/User Guide/Building and Installation|Building and Installation]], follow these steps:<br />
<br />
The GstRrPanoramaptz plugin allows for real-time PTZ adjustments on panoramic video feeds, enabling users to explore video scenes in greater detail or from different perspectives.<br />
<br />
===Overview===<br />
====Features====<br />
* '''CUDA-accelerated PTZ transformations:''' Leverages the power of NVIDIA CUDA technology, enabling smooth, high-performance video processing.<br />
* '''Support for RGBA video format.'''<br />
* '''Dynamic parameter adjustments:''' Users can dynamically adjust PTZ parameters such as pan, tilt, and zoom during playback, providing a versatile and interactive video experience.<br />
<br />
====Properties====<br />
The GstRrPanoramaptz plugin introduces three primary properties for real-time video manipulation:<br />
<br />
* Pan (Horizontal Rotation): Adjusts the video feed's horizontal orientation. Pan adjustments allow viewers to rotate the video around its vertical axis, simulating a left or right looking direction.<br />
** Syntax: <code>pan=<value></code>.<br />
** Range: <code>-360 to 360</code> degrees.<br />
** Default: <code>0</code>.<br />
<br />
* Tilt (Vertical Rotation): This property adjusts the vertical viewing angle of the video feed. It simulates a vertical rotation of the camera view.<br />
** Syntax: <code>tilt=<value></code>.<br />
** Range: <code>-360 to 360</code> degrees.<br />
** Default: <code>0</code>.<br />
<br />
* Zoom: This property adjusts the zoom level of the video feed. It simulates moving the camera closer or further away from the scene.<br />
** Syntax: <code>zoom=<value></code>.<br />
** Range: <code>0.1 to 10</code>.<br />
** Behavior: '''Zoom out''' for <code>zoom < 1</code>, '''Zoom in''' for <code>zoom > 1</code>.<br />
** Default: <code>1</code>.<br />
<br />
====Caps and Formats====<br />
* The plugin accepts and outputs video in the <code>video/x-raw</code> format, utilizing the RGBA color space. This support ensures compatibility with a wide range of video processing scenarios.<br />
* Enhanced performance on NVIDIA hardware is achieved through support for both system memory and NVMM (NVIDIA Multi-Media) memory inputs. This flexibility allows users to optimize their video processing pipelines based on the available hardware resources.<br />
<br />
====Basic use example====<br />
This pipeline creates a test video, then applies a 0.5-degree rotation to the right, tilts it upwards by 0.5 degrees, and enhances the view with a zoom level of 2.<br />
<br />
<pre><br />
gst-launch-1.0 videotestsrc ! "video/x-raw,width=1920,height=1080" ! rrpanoramaptz pan=0.5 tilt=0.5 zoom=2 ! videoconvert ! autovideosink<br />
</pre><br />
<br />
You should see an output like the one below:<br />
[[File:panoramaptz-example.png|thumbnail|center|840px|Libpanorama example]]<br />
The example uses a standard video, not a panoramic one, causing some distortion, but we'll explore distortion-free examples with equirectangular images soon.<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Package_set_up&diff=53660Getting Started with ROS on Embedded Systems/User Guide/C++/Package set up2024-03-23T16:55:15Z<p>Spalli: </p>
<hr />
<div>{{Getting Started with ROS on Embedded Systems/Head|previous=C++ User Guide|next=User Guide/C++/Initialization|metakeywords=ROS}}<br />
<br />
__TOC__<br />
<br />
== Introduction ==<br />
This serves as an introduction to creating and building simple packages using ROS with the colcon build system. <br />
<br />
A ROS package is simply a folder located under a workspace that can be built using a build tool such as colcon or catkin. This guide will use the colcon build system.<br />
<br />
A package can be created using the ros2 command, like:<br />
<br />
<syntaxhighlight lang="bash"><br />
ros2 pkg create --license Apache-2.0 <pkg-name> --dependencies [deps]<br />
</syntaxhighlight><br />
<br />
A project setup for colcon generally looks like:<br />
<br />
<pre><br />
root@vision:/test# tree .<br />
.<br />
├── CMakeLists.txt<br />
├── include<br />
│ └── test<br />
├── LICENSE<br />
├── package.xml<br />
└── src<br />
<br />
3 directories, 3 files<br />
</pre><br />
<br />
To test our test package we can build it:<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /test<br />
colcon build<br />
</syntaxhighlight><br />
<br />
If all goes well, you should see:<br />
<br />
<syntaxhighlight lang="bash"><br />
root@vision:/test# colcon build<br />
Starting >>> test <br />
Finished <<< test [5.37s] <br />
<br />
Summary: 1 package finished [5.74s]<br />
</syntaxhighlight><br />
<br />
=== Sample publisher ===<br />
We are going to make a simple text publisher and receiver, following this [https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Writing-A-Simple-Cpp-Publisher-And-Subscriber.html guide].<br />
The first step in configuring the package is to modify the package.xml. You will have entries similar to the following:<br />
<br />
<syntaxhighlight lang="xml"><br />
<?xml version="1.0"?><br />
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?><br />
<package format="3"><br />
<name>test</name><br />
<version>0.0.0</version><br />
<description>TODO: Package description</description><br />
<maintainer email="root@todo.todo">root</maintainer><br />
<license>Apache-2.0</license><br />
<br />
<buildtool_depend>ament_cmake</buildtool_depend><br />
<br />
  <test_depend>ament_lint_auto</test_depend><br />
<test_depend>ament_lint_common</test_depend><br />
<br />
<export><br />
<build_type>ament_cmake</build_type><br />
</export><br />
</package><br />
</syntaxhighlight><br />
<br />
Now replace the TODOs with your info. Then add the following dependencies:<br />
<br />
<syntaxhighlight lang="xml"><br />
<depend>rclcpp</depend><br />
<depend>std_msgs</depend><br />
</syntaxhighlight><br />
<br />
Now let's add the publisher sources. Create the file "src/sample_pub.cpp" and add the following:<br />
<br />
<syntaxhighlight lang="c++"><br />
#include <chrono><br />
#include <functional><br />
#include <memory><br />
#include <string><br />
<br />
#include "rclcpp/rclcpp.hpp"<br />
#include "std_msgs/msg/string.hpp"<br />
<br />
using namespace std::chrono_literals;<br />
<br />
/* This example creates a subclass of Node and uses std::bind() to register a<br />
* member function as a callback from the timer. */<br />
<br />
class MinimalPublisher : public rclcpp::Node<br />
{<br />
public:<br />
MinimalPublisher()<br />
: Node("minimal_publisher"), count_(0)<br />
{<br />
publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);<br />
timer_ = this->create_wall_timer(<br />
500ms, std::bind(&MinimalPublisher::timer_callback, this));<br />
}<br />
<br />
private:<br />
void timer_callback()<br />
{<br />
auto message = std_msgs::msg::String();<br />
message.data = "Hello, world! " + std::to_string(count_++);<br />
RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());<br />
publisher_->publish(message);<br />
}<br />
rclcpp::TimerBase::SharedPtr timer_;<br />
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;<br />
size_t count_;<br />
};<br />
<br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
rclcpp::spin(std::make_shared<MinimalPublisher>());<br />
rclcpp::shutdown();<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
The code has 3 main sections:<br />
* ROS initialization:<br />
<syntaxhighlight lang="c++"><br />
rclcpp::init(argc, argv);<br />
</syntaxhighlight><br />
<br />
* Node declaration, where we declare our ros node, that extends from the base rclcpp::Node class.<br />
<br />
* Main thread initialization:<br />
<syntaxhighlight lang="c++"><br />
rclcpp::spin(std::make_shared<MinimalPublisher>());<br />
</syntaxhighlight><br />
Now we can add our listener. Create the file "src/sample_listener.cpp" and add the following:<br />
<syntaxhighlight lang="c++"><br />
#include <memory><br />
<br />
#include "rclcpp/rclcpp.hpp"<br />
#include "std_msgs/msg/string.hpp"<br />
using std::placeholders::_1;<br />
<br />
class MinimalSubscriber : public rclcpp::Node<br />
{<br />
public:<br />
MinimalSubscriber()<br />
: Node("minimal_subscriber")<br />
{<br />
subscription_ = this->create_subscription<std_msgs::msg::String>(<br />
"topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));<br />
}<br />
<br />
private:<br />
void topic_callback(const std_msgs::msg::String & msg) const<br />
{<br />
RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg.data.c_str());<br />
}<br />
rclcpp::Subscription<std_msgs::msg::String>::SharedPtr subscription_;<br />
};<br />
<br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
rclcpp::spin(std::make_shared<MinimalSubscriber>());<br />
rclcpp::shutdown();<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
The structure is similar to the publisher code, with the same three main sections, but the node is set to listen to a topic instead of writing to it:<br />
<br />
<syntaxhighlight lang="c++"><br />
create_subscription<std_msgs::msg::String>("topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));<br />
</syntaxhighlight><br />
<br />
Where the node will listen to "topic".<br />
</br><br />
After that we can also take a look at the CMakeLists.txt file:<br />
<br />
<syntaxhighlight lang="cmake"><br />
cmake_minimum_required(VERSION 3.8)<br />
project(test)<br />
<br />
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")<br />
add_compile_options(-Wall -Wextra -Wpedantic)<br />
endif()<br />
<br />
# find dependencies<br />
find_package(ament_cmake REQUIRED)<br />
# uncomment the following section in order to fill in<br />
# further dependencies manually.<br />
# find_package(<dependency> REQUIRED)<br />
<br />
if(BUILD_TESTING)<br />
find_package(ament_lint_auto REQUIRED)<br />
# the following line skips the linter which checks for copyrights<br />
# comment the line when a copyright and license is added to all source files<br />
set(ament_cmake_copyright_FOUND TRUE)<br />
# the following line skips cpplint (only works in a git repo)<br />
# comment the line when this package is in a git repo and when<br />
# a copyright and license is added to all source files<br />
set(ament_cmake_cpplint_FOUND TRUE)<br />
ament_lint_auto_find_test_dependencies()<br />
endif()<br />
<br />
ament_package()<br />
<br />
</syntaxhighlight><br />
<br />
First, we add the dependencies in the dependencies section:<br />
<br />
<syntaxhighlight lang="cmake"><br />
find_package(rclcpp REQUIRED)<br />
find_package(std_msgs REQUIRED)<br />
</syntaxhighlight><br />
<br />
Now we can add our sources and set the executable targets after the find_package statements:<br />
<br />
<syntaxhighlight lang="cmake"><br />
# publisher code<br />
add_executable(talker src/sample_pub.cpp)<br />
ament_target_dependencies(talker rclcpp std_msgs)<br />
# listener code<br />
add_executable(listener src/sample_listener.cpp)<br />
ament_target_dependencies(listener rclcpp std_msgs)<br />
</syntaxhighlight><br />
<br />
Now we can fetch the dependencies:<br />
<br />
<syntaxhighlight lang="bash"><br />
rosdep install -i --from-path src --rosdistro humble -y<br />
</syntaxhighlight><br />
<br />
After that finishes, we can compile it:<br />
<br />
<syntaxhighlight lang="bash"><br />
colcon build<br />
</syntaxhighlight><br />
<br />
Now we can either run the executables directly from the build/test folder, or install them and source the setup script with ". install/setup.bash".<br />
</br><br />
After that we can open two terminals and run one executable in each; remember to run "source /ros_entrypoint.sh" in every new terminal you open. If we run the talker in one and the listener in the other, we will see something like:<br />
<br />
[[File:Ros2 simple example.png|thumbnail|center|780px|alt=Simple listener and publisher example]]<br />
<br />
{{Getting Started with ROS on Embedded Systems/Foot|C++ User Guide|User Guide/C++/Initialization}}</div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Initialization&diff=53659Getting Started with ROS on Embedded Systems/User Guide/C++/Initialization2024-03-23T16:51:40Z<p>Spalli: </p>
<hr />
<div>{{Getting Started with ROS on Embedded Systems/Head|previous=User Guide/C++/Package set up|next=User Guide/C++/Topics|metakeywords=ROS}}<br />
<br />
== Introduction ==<br />
Building on the previous example, this section covers the initialization part of the code.<br />
<br />
== Initialization ==<br />
Every ROS 2 program needs to initialize ROS before doing anything else:<br />
<br />
<syntaxhighlight lang="c++"><br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
...<br />
</syntaxhighlight><br />
<br />
== Starting ==<br />
<br />
After initialization we need to create a node. A basic node looks like this:<br />
<br />
<syntaxhighlight lang="c++"><br />
class MinimalPublisher : public rclcpp::Node<br />
{<br />
public:<br />
MinimalPublisher() : Node(NODE_NAME), count_(0)<br />
{<br />
publisher_ = this->create_publisher<MSG_TYPE>(TOPIC, PUB_QUEUE);<br />
}<br />
<br />
private:<br />
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;<br />
};<br />
</syntaxhighlight><br />
<br />
* NODE_NAME, the name of the node; it helps to locate the node using the ROS logger or the ROS command-line tools.<br />
* TOPIC, the topic name where the node will publish its messages.<br />
* PUB_QUEUE, the length of the publisher queue.<br />
* MSG_TYPE, the data type of the ROS messages.<br />
<br />
In this case we have a simple timer-based publisher that publishes an increasing counter every 500 ms. Most importantly, this is the line needed to publish data:<br />
<br />
<syntaxhighlight lang="c++"><br />
publisher_->publish(message);<br />
</syntaxhighlight><br />
<br />
== Running the node ==<br />
ROS executables have a main loop, and to actually run anything we need to attach a node to it. To do so we have the following APIs:<br />
<br />
* rclcpp::spin(std::make_shared<MinimalPublisher>()): runs until shutdown.<br />
* rclcpp::spin_some(std::make_shared<MinimalPublisher>()): processes the work that is currently available and then returns.<br />
* rclcpp::spin_until_future_complete: spins until the given future completes, then stops and returns.<br />
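ROS aside, the idea behind rclcpp::spin_until_future_complete (keep processing pending work until a result becomes available) can be sketched in plain C++ with std::future. This is only an analogy, not the rclcpp implementation:<br />

```cpp
#include <chrono>
#include <future>

// Spin-like loop: keep "processing callbacks" until the future is ready,
// then return its value, mirroring spin_until_future_complete.
int spin_until_ready(std::future<int> &fut)
{
  while (fut.wait_for(std::chrono::milliseconds(1)) != std::future_status::ready) {
    // a real executor would process pending callbacks here
  }
  return fut.get();
}
```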
<br />
== Running multiple nodes ==<br />
<br />
It is possible to run multiple nodes in the same executable. To do so, create a MultiThreadedExecutor and add N nodes to it:<br />
<br />
<syntaxhighlight lang="c++"><br />
rclcpp::executors::MultiThreadedExecutor executor;<br />
</syntaxhighlight><br />
<br />
Building on that, the following code shows a complete dual-node publisher example:<br />
<br />
<syntaxhighlight lang="c++"><br />
class MinimalPublisher : public rclcpp::Node<br />
{<br />
public:<br />
MinimalPublisher(std::string node_name, std::string topic_name) : Node(node_name)<br />
{<br />
publisher_ = this->create_publisher<std_msgs::msg::String>(topic_name, 10);<br />
timer_ = this->create_wall_timer(500ms, std::bind(&MinimalPublisher::timer_callback, this));<br />
count_ = 0;<br />
}<br />
<br />
private:<br />
void timer_callback()<br />
{<br />
auto message = std_msgs::msg::String();<br />
message.data = "Hello, world! " + std::to_string(count_++);<br />
RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());<br />
publisher_->publish(message);<br />
}<br />
<br />
rclcpp::TimerBase::SharedPtr timer_;<br />
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;<br />
size_t count_;<br />
};<br />
<br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
rclcpp::executors::MultiThreadedExecutor executor;<br />
auto node1 = std::make_shared<MinimalPublisher>("PUB_1", "TOPIC_1");<br />
auto node2 = std::make_shared<MinimalPublisher>("PUB_2", "TOPIC_2");<br />
executor.add_node(node1);<br />
executor.add_node(node2);<br />
<br />
executor.spin();<br />
rclcpp::shutdown();<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
When running the executable, we can see both nodes sending data:<br />
<br />
<syntaxhighlight lang=bash><br />
root@vision:/test/build/test# ./talker <br />
[INFO] [1710200998.614902468] [PUB_1]: Publishing: 'Hello, world! 0'<br />
[INFO] [1710200998.623128763] [PUB_2]: Publishing: 'Hello, world! 0'<br />
[INFO] [1710200999.114898206] [PUB_1]: Publishing: 'Hello, world! 1'<br />
[INFO] [1710200999.123280536] [PUB_2]: Publishing: 'Hello, world! 1'<br />
[INFO] [1710200999.614873945] [PUB_1]: Publishing: 'Hello, world! 2'<br />
[INFO] [1710200999.623183985] [PUB_2]: Publishing: 'Hello, world! 2'<br />
[INFO] [1710201000.114919412] [PUB_1]: Publishing: 'Hello, world! 3'<br />
</syntaxhighlight><br />
<br />
If needed, we can mix publisher and subscriber nodes in the same executable using the same approach.<br />
<br />
{{Getting Started with ROS on Embedded Systems/Foot|User Guide/C++/Package set up|User Guide/C++/Topics}}</div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Getting_Started_with_ROS_on_Embedded_Systems/User_Guide/C%2B%2B/Package_set_up&diff=53658Getting Started with ROS on Embedded Systems/User Guide/C++/Package set up2024-03-23T16:46:29Z<p>Spalli: </p>
<hr />
<div>{{Getting Started with ROS on Embedded Systems/Head|previous=C++ User Guide|next=User Guide/C++/Initialization|metakeywords=ROS}}<br />
<br />
__TOC__<br />
<br />
== Introduction ==<br />
This serves as an introduction to creating and building simple ROS packages with the colcon build system.<br />
<br />
A ROS package is simply a folder located under a workspace, built with a build tool such as colcon or catkin. This guide uses the colcon build system.<br />
<br />
A package can be created using the ros2 command, like:<br />
<br />
<syntaxhighlight lang=bash><br />
ros2 pkg create --license Apache-2.0 <pkg-name> --dependencies [deps]<br />
</syntaxhighlight><br />
<br />
A project setup for colcon generally looks like this:<br />
<br />
<pre><br />
root@vision:/test# tree .<br />
.<br />
├── CMakeLists.txt<br />
├── include<br />
│ └── test<br />
├── LICENSE<br />
├── package.xml<br />
└── src<br />
<br />
3 directories, 3 files<br />
</pre><br />
<br />
To verify the setup of our "test" package, we can build it:<br />
<br />
<syntaxhighlight lang=bash><br />
cd /test<br />
colcon build<br />
</syntaxhighlight><br />
<br />
If all goes well, you should see:<br />
<br />
<syntaxhighlight lang=bash><br />
root@vision:/test# colcon build<br />
Starting >>> test <br />
Finished <<< test [5.37s] <br />
<br />
Summary: 1 package finished [5.74s]<br />
</syntaxhighlight><br />
<br />
=== Sample publisher ===<br />
We are going to make a simple text publisher and receiver, following this [https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Writing-A-Simple-Cpp-Publisher-And-Subscriber.html guide].<br />
The first step in configuring the package is to modify package.xml. You will have entries similar to the following:<br />
<br />
<syntaxhighlight lang="xml"><br />
<?xml version="1.0"?><br />
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?><br />
<package format="3"><br />
<name>test</name><br />
<version>0.0.0</version><br />
<description>TODO: Package description</description><br />
<maintainer email="root@todo.todo">root</maintainer><br />
<license>Apache-2.0</license><br />
<br />
<buildtool_depend>ament_cmake</buildtool_depend><br />
<br />
<test_depend>ament_lint_auto</test_depend><br />
<test_depend>ament_lint_common</test_depend><br />
<br />
<export><br />
<build_type>ament_cmake</build_type><br />
</export><br />
</package><br />
</syntaxhighlight><br />
<br />
Now replace the TODOs with your information, then add the following dependencies:<br />
<br />
<syntaxhighlight lang="xml"><br />
<depend>rclcpp</depend><br />
<depend>std_msgs</depend><br />
</syntaxhighlight><br />
<br />
Now let's add the publisher sources. Create the file "src/sample_pub.cpp" and add the following:<br />
<br />
<syntaxhighlight lang="c++"><br />
#include <chrono><br />
#include <functional><br />
#include <memory><br />
#include <string><br />
<br />
#include "rclcpp/rclcpp.hpp"<br />
#include "std_msgs/msg/string.hpp"<br />
<br />
using namespace std::chrono_literals;<br />
<br />
/* This example creates a subclass of Node and uses std::bind() to register a<br />
* member function as a callback from the timer. */<br />
<br />
class MinimalPublisher : public rclcpp::Node<br />
{<br />
public:<br />
MinimalPublisher()<br />
: Node("minimal_publisher"), count_(0)<br />
{<br />
publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);<br />
timer_ = this->create_wall_timer(<br />
500ms, std::bind(&MinimalPublisher::timer_callback, this));<br />
}<br />
<br />
private:<br />
void timer_callback()<br />
{<br />
auto message = std_msgs::msg::String();<br />
message.data = "Hello, world! " + std::to_string(count_++);<br />
RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());<br />
publisher_->publish(message);<br />
}<br />
rclcpp::TimerBase::SharedPtr timer_;<br />
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;<br />
size_t count_;<br />
};<br />
<br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
rclcpp::spin(std::make_shared<MinimalPublisher>());<br />
rclcpp::shutdown();<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
The code has 3 main sections:<br />
* ROS initialization:<br />
<syntaxhighlight lang="c++"><br />
rclcpp::init(argc, argv);<br />
</syntaxhighlight><br />
<br />
* Node declaration, where we declare our ROS node, which extends the base rclcpp::Node class.<br />
<br />
* Main thread initialization:<br />
<syntaxhighlight lang="c++"><br />
rclcpp::spin(std::make_shared<MinimalPublisher>());<br />
</syntaxhighlight><br />
Now we can add our listener. Create the file "src/sample_listener.cpp" and add the following:<br />
<syntaxhighlight lang="c++"><br />
#include <memory><br />
<br />
#include "rclcpp/rclcpp.hpp"<br />
#include "std_msgs/msg/string.hpp"<br />
using std::placeholders::_1;<br />
<br />
class MinimalSubscriber : public rclcpp::Node<br />
{<br />
public:<br />
MinimalSubscriber()<br />
: Node("minimal_subscriber")<br />
{<br />
subscription_ = this->create_subscription<std_msgs::msg::String>(<br />
"topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));<br />
}<br />
<br />
private:<br />
void topic_callback(const std_msgs::msg::String & msg) const<br />
{<br />
RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg.data.c_str());<br />
}<br />
rclcpp::Subscription<std_msgs::msg::String>::SharedPtr subscription_;<br />
};<br />
<br />
int main(int argc, char * argv[])<br />
{<br />
rclcpp::init(argc, argv);<br />
rclcpp::spin(std::make_shared<MinimalSubscriber>());<br />
rclcpp::shutdown();<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
The structure is similar to the publisher code, with the same three main sections, but the node is set to listen to a topic instead of writing to it:<br />
<br />
<syntaxhighlight lang="c++"><br />
create_subscription<std_msgs::msg::String>("topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));<br />
</syntaxhighlight><br />
<br />
Where the node will listen to "topic".<br />
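The std::bind pattern used in create_subscription can be illustrated in plain C++ without ROS: binding a member function and the object instance produces a callable with one open slot (_1) for the incoming message. The Listener type below is a hypothetical stand-in for the node:<br />

```cpp
#include <functional>
#include <string>

// Minimal stand-in for a subscriber: the callback is a member function
// bound together with the object instance, leaving _1 for the message.
struct Listener
{
  std::string last;
  void on_msg(const std::string &msg) { last = msg; }
};
```

For example, std::bind(&Listener::on_msg, &listener, std::placeholders::_1) yields a callable that stores each received string in listener.last.<br />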
<br />
After that, we can also take a look at the CMakeLists.txt file:<br />
<br />
<syntaxhighlight lang="cmake"><br />
cmake_minimum_required(VERSION 3.8)<br />
project(test)<br />
<br />
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")<br />
add_compile_options(-Wall -Wextra -Wpedantic)<br />
endif()<br />
<br />
# find dependencies<br />
find_package(ament_cmake REQUIRED)<br />
# uncomment the following section in order to fill in<br />
# further dependencies manually.<br />
# find_package(<dependency> REQUIRED)<br />
<br />
if(BUILD_TESTING)<br />
find_package(ament_lint_auto REQUIRED)<br />
# the following line skips the linter which checks for copyrights<br />
# comment the line when a copyright and license is added to all source files<br />
set(ament_cmake_copyright_FOUND TRUE)<br />
# the following line skips cpplint (only works in a git repo)<br />
# comment the line when this package is in a git repo and when<br />
# a copyright and license is added to all source files<br />
set(ament_cmake_cpplint_FOUND TRUE)<br />
ament_lint_auto_find_test_dependencies()<br />
endif()<br />
<br />
ament_package()<br />
<br />
</syntaxhighlight><br />
<br />
First, we add the dependencies in the dependencies section:<br />
<br />
<syntaxhighlight lang="cmake"><br />
find_package(rclcpp REQUIRED)<br />
find_package(std_msgs REQUIRED)<br />
</syntaxhighlight><br />
<br />
Now we can add our sources and set the executable targets after the find_package statements:<br />
<br />
<syntaxhighlight lang="cmake"><br />
# publisher code<br />
add_executable(talker src/sample_pub.cpp)<br />
ament_target_dependencies(talker rclcpp std_msgs)<br />
# listener code<br />
add_executable(listener src/sample_listener.cpp)<br />
ament_target_dependencies(listener rclcpp std_msgs)<br />
</syntaxhighlight><br />
<br />
Now we can fetch the dependencies:<br />
<br />
<syntaxhighlight lang=bash><br />
rosdep install -i --from-path src --rosdistro humble -y<br />
</syntaxhighlight><br />
<br />
After that finishes, we can compile it:<br />
<br />
<syntaxhighlight lang=bash><br />
colcon build<br />
</syntaxhighlight><br />
<br />
Now we can either use the executables directly from the build/test folder or install them by sourcing the install script with ". install/setup.bash".<br />
<br /><br />
After that, we can open two terminals and run one executable in each; remember to run "source /ros_entrypoint.sh" in every new terminal. If we run the talker in one and the listener in the other, we will see something like:<br />
<br />
[[File:Ros2 simple example.png|thumbnail|center|780px|alt=Simple listener and publisher example]]<br />
<br />
{{Getting Started with ROS on Embedded Systems/Foot|C++ User Guide|User Guide/C++/Initialization}}</div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples&diff=53641Spherical Video PTZ/Examples2024-03-22T16:07:56Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br><br />
<br><br />
==Examples==<br />
This wiki serves as a guide on how to evaluate the Spherical Video PTZ example applications.<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/User_Guide/Quick_Start_Guide&diff=53640Spherical Video PTZ/User Guide/Quick Start Guide2024-03-22T16:07:02Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==Libpanorama==<br />
<br />
This wiki introduces the basic use of the Spherical Video PTZ engine for converting equirectangular images to rectilinear format. It includes a simple example and instructions on how to adapt the engine to different needs. The engine makes it easy to project a panoramic image into a rectilinear view of a selected region, which is useful for many projects.<br />
<br />
=== Minimal Application ===<br />
<br />
After [[Spherical Video PTZ/User Guide/Building and Installation|Building and Installation]], follow these steps:<br />
<br />
'''1.''' Download the sample images, if you haven't already.<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $SAMPLES<br />
./download_samples.sh<br />
</syntaxhighlight><br />
<br />
'''2.''' This example demonstrates the use of the Spherical Video PTZ engine to convert equirectangular images into rectilinear format. The following command processes <code>example_image.jpg</code>, converting it from an equirectangular format to a rectilinear view; you can use any other reference image as long as it is equirectangular. Run the example as:<br />
<syntaxhighlight lang=bash line><br />
cd $LIBPANORAMA_PATH<br />
./builddir/examples/equirectangular_to_rectilinear_npp $SAMPLES/example_image.jpg <br />
</syntaxhighlight><br />
<br />
'''3.''' While the example is running, you can use the interactive Pan-Tilt-Zoom (PTZ) controls to dynamically explore the panoramic image. Press the following keys:<br />
* Zoom In/Out: Adjust the zoom level to get a closer view or a wider perspective of the image.<br />
** In: <code>i</code><br />
** Out: <code>o</code><br />
* Pan Left/Right: Rotate the view horizontally to explore the left or right sides of the panoramic image.<br />
** Left: <code>4</code><br />
** Right: <code>6</code><br />
* Tilt Up/Down: Adjust the vertical angle of the camera to look up or down within the panoramic image.<br />
** Up: <code>8</code><br />
** Down: <code>2</code><br />
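As a rough sketch, the key handling above can be modeled as a small state machine. Only the key bindings and the 0.1x-10x zoom range come from this wiki; the step sizes (5 degrees per pan/tilt press, 0.1x per zoom press) and the wrap/clamp behavior are assumptions for illustration, not libpanorama's actual implementation.<br />

```python
# Hypothetical sketch of the interactive PTZ control loop described above.
# Step sizes and wrap/clamp behavior are assumed, not taken from libpanorama.

PAN_STEP = 5.0    # degrees per key press (assumed)
TILT_STEP = 5.0   # degrees per key press (assumed)
ZOOM_STEP = 0.1   # zoom change per key press (assumed)
ZOOM_MIN, ZOOM_MAX = 0.1, 10.0  # documented zoom range

class PtzState:
    def __init__(self):
        self.pan = 0.0
        self.tilt = 0.0
        self.zoom = 1.0

    def handle_key(self, key):
        if key == "i":    # zoom in
            self.zoom = min(self.zoom + ZOOM_STEP, ZOOM_MAX)
        elif key == "o":  # zoom out
            self.zoom = max(self.zoom - ZOOM_STEP, ZOOM_MIN)
        elif key == "4":  # pan left
            self.pan = (self.pan - PAN_STEP) % 360.0
        elif key == "6":  # pan right
            self.pan = (self.pan + PAN_STEP) % 360.0
        elif key == "8":  # tilt up
            self.tilt = (self.tilt + TILT_STEP) % 360.0
        elif key == "2":  # tilt down
            self.tilt = (self.tilt - TILT_STEP) % 360.0

state = PtzState()
for k in "ii66668":       # zoom in twice, pan right four times, tilt up once
    state.handle_key(k)
print(state.pan, state.tilt, round(state.zoom, 1))  # -> 20.0 5.0 1.2
```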
<br />
=== Spherical Video PTZ Engine ===<br />
''Description of how to use the engine''<br />
<br />
<br />
==GstRrPanoramaptz==<br />
The GstRrPanoramaptz plugin allows for real-time PTZ adjustments on panoramic video feeds, enabling users to explore video scenes in greater detail or from different perspectives.<br />
<br />
===Overview===<br />
====Features====<br />
* CUDA-accelerated PTZ transformations.<br />
* Support for RGBA video format.<br />
* Dynamic parameter adjustments for pan, tilt, and zoom.<br />
<br />
====Properties====<br />
* '''pan''': Rotate the video on its horizontal axis. Range: -360 to 360 degrees. Default: 0.<br />
* '''tilt''': Rotate the video on its vertical axis. Range: -360 to 360 degrees. Default: 0.<br />
* '''zoom''': Adjust the zoom level dynamically. ''WIP''<br />
<br />
====Caps and Formats====<br />
* Accepts and outputs video in video/x-raw format with RGBA color space.<br />
* Supports both system memory and NVMM memory inputs for enhanced performance on NVIDIA hardware.<br />
<br />
====Basic use example====<br />
To pan a test video source 90 degrees, you can use the following pipeline:<br />
<syntaxhighlight><br />
gst-launch-1.0 videotestsrc ! rrpanoramaptz pan=90 ! fakesink<br />
</syntaxhighlight><br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/User_Guide/Building_and_Installation&diff=53638Spherical Video PTZ/User Guide/Building and Installation2024-03-22T16:04:43Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==Libpanorama==<br />
<br />
This wiki shows how to build the source code. It assumes you have already purchased a license and received access to the source code. If not, head to [[Birds Eye View/Getting Started/How to Get the Code|How to Get the Code]] for instructions on how to proceed.<br />
<br />
=== Install the Dependencies ===<br />
<br />
Before anything, ensure you have installed the following dependencies:<br />
<br />
* '''Git''': To clone the repository.<br />
* '''Meson''': To configure the project.<br />
* '''Ninja''': To build the project.<br />
* '''JsonCPP dev files''': For the parameter loading.<br />
* '''OpenCV dev files''': For panoramaptz Gstreamer element.<br />
* '''GstCUDA''': ''(optional)'' for CUDA-accelerated GStreamer support on NVIDIA hardware.<br />
* '''GStreamer dev files and plugins''': ''(optional)'' for image loading.<br />
* '''QT5 dev files''': ''(optional)'' for image displaying.<br />
* '''CppUTest dev files''': ''(optional)'' for unit testing.<br />
* '''Doxygen, Graphviz''': ''(optional)'' for documentation generation.<br />
* '''Wget, Unzip''': ''(optional)'' to download and unpack the sample images.<br />
<br />
In Debian-based systems (like Ubuntu) you can run:<br />
<syntaxhighlight line lang=bash><br />
sudo apt update<br />
sudo apt install -y \<br />
libjsoncpp-dev \<br />
libopencv-dev libopencv-core-dev \<br />
libopencv-video-dev libopencv-highgui-dev libopencv-videoio-dev \<br />
libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \<br />
gstreamer1.0-plugins-bad gstreamer1.0-plugins-good gstreamer1.0-plugins-base \<br />
gstreamer1.0-libav gstreamer1.0-plugins-ugly \<br />
qtbase5-dev qtmultimedia5-dev libqt5multimedia5-plugins \<br />
git wget unzip libcpputest-dev doxygen graphviz \<br />
python3-pip ninja-build<br />
sudo -H pip3 install meson<br />
</syntaxhighlight><br />
<br />
'''For the GstCUDA dependency, see:<br />
[[GstCUDA]]'''<br />
<br />
=== Set up the environment ===<br />
<syntaxhighlight><br />
export SAMPLES=/path_where_download_sample.sh_is_located/<br />
export LIBPANORAMA_PATH=/path_where_libpanorama_is_installed/<br />
</syntaxhighlight><br />
<br />
=== Building the Project ===<br />
<br />
'''1.''' Start by cloning the project using the repository you have been given:<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $LIBPANORAMA_PATH<br />
git clone git@gitlab.ridgerun.com:$YOUR_REPO_LIBPANORAMA/libpanorama<br />
cd libpanorama<br />
</syntaxhighlight><br />
<br />
{{ambox|type=info|text=Replace `$YOUR_REPO_LIBPANORAMA` with the actual repository path you were given by RidgeRun}}<br />
<br />
'''2.''' Configure, build, and install the project by running the following:<br />
<syntaxhighlight lang=bash line><br />
meson builddir<br />
ninja -C builddir<br />
sudo ninja -C builddir install<br />
</syntaxhighlight><br />
<br />
{{ambox|type=content|text='''If anything fails, please provide the output log of the configuration step to [mailto:support@ridgerun.com support@ridgerun.com]'''}}<br />
<br />
<br />
There are some configuration options you can use to fine-tune your build. They are not necessary, and we recommend leaving them at their defaults unless you have a specific reason not to.<br />
<br />
<center><br />
{| class="wikitable"<br />
|+ Advanced configuration options<br />
|-<br />
! Option name !! Possible values !! Description !! Default<br />
|-<br />
| examples || enabled/disabled || Whether to build or not the examples. || enabled<br />
|-<br />
| tests || enabled/disabled || Whether to build or not the tests. || enabled<br />
|-<br />
| docs || enabled/disabled || Whether to build or not the API docs. || enabled<br />
|-<br />
| npp || enabled/disabled || Whether to use CUDA (NPP) acceleration or not. || enabled<br />
|-<br />
| opencv || enabled/disabled || Whether to build or not OpenCV IO classes. || enabled<br />
|-<br />
| gstreamer || enabled/disabled || Whether to build or not GStreamer IO classes. || enabled<br />
|-<br />
| qt || enabled/disabled || Whether to build or not QT IO classes. || enabled<br />
|}<br />
</center><br />
<br />
=== Validating the Build ===<br />
<br />
To ensure the build was successful, run the default example with the provided samples.<br />
<br />
'''1.''' Download the sample images, if you haven't already.<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $SAMPLES<br />
./download_samples.sh<br />
</syntaxhighlight><br />
<br />
'''2.''' Run the example as:<br />
<syntaxhighlight lang=bash line><br />
cd $LIBPANORAMA_PATH<br />
./builddir/examples/equirectangular_to_rectilinear_npp examples/example_image.jpg <br />
</syntaxhighlight><br />
<br />
You should see an output as the one below:<br />
[[File:equirectangular-mountain-libpanorama-example.png|thumbnail|center|640px|Libpanorama example]]<br />
<br />
==GstRrPanoramaptz==<br />
<br />
This section introduces the GstRrPanoramaptz plugin, a component of GStreamer designed to apply Pan-Tilt-Zoom (PTZ) transformations to video panoramas using CUDA. Developed with a focus on high-performance video processing, this plugin supports real-time adjustments of panoramic video feeds, enabling dynamic viewpoint changes through pan, tilt, and zoom operations. Ideal for applications requiring interactive video navigation or automated surveillance, GstRrPanoramaptz extends GStreamer's capabilities with advanced video transformation techniques. Here, you'll find setup instructions, usage examples, and insights on integrating this plugin into your video processing pipeline, offering a comprehensive guide to leveraging its features for enhanced video manipulation.<br />
<br />
<br />
=== Set up the environment ===<br />
<syntaxhighlight><br />
export PANORAMA_PTZ_PATH=/path_where_gstrrpanoramaptz_is_installed/<br />
</syntaxhighlight><br />
<br />
===Building the Project===<br />
<br />
After completing the [[Spherical_Video_PTZ/User_Guide/Building_and_Installation|Building and Installation]] of Spherical Video PTZ, follow these steps to build the rrpanoramaptz plugin for GStreamer.<br />
<br />
'''1.''' Start by cloning the project using the repository you have been given:<br />
<br />
<syntaxhighlight lang=bash line><br />
cd $PANORAMA_PTZ_PATH<br />
git clone git@gitlab.ridgerun.com:$YOUR_REPO/gst-rr-panoramaptz<br />
cd gst-rr-panoramaptz<br />
</syntaxhighlight><br />
<br />
{{ambox|type=info|text=Replace `$YOUR_REPO` with the actual repository path you were given by RidgeRun}}<br />
<br />
'''2.''' Configure, build, and install the project by running the following:<br />
<br />
<syntaxhighlight lang=bash line><br />
meson builddir<br />
ninja -C builddir<br />
sudo ninja -C builddir install<br />
</syntaxhighlight><br />
<br />
{{ambox|type=content|text='''If anything fails, please provide the output log of the configuration step to [mailto:support@ridgerun.com support@ridgerun.com]'''}}<br />
<br />
=== Validating the Build ===<br />
<syntaxhighlight lang=bash line><br />
gst-inspect-1.0 rrpanoramaptz<br />
</syntaxhighlight><br />
If the build succeeded, the <code>gst-inspect-1.0 rrpanoramaptz</code> output will detail the plugin's configuration, including its capabilities and properties. You should see an output like the one below:<br />
<syntaxhighlight lang=bash line><br />
...<br />
Pad Templates:<br />
SINK template: 'sink'<br />
Availability: Always<br />
Capabilities:<br />
video/x-raw<br />
format: RGBA<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
video/x-raw(memory:NVMM)<br />
format: RGBA<br />
width: [ 1, 2147483647 ]<br />
height: [ 1, 2147483647 ]<br />
framerate: [ 0/1, 2147483647/1 ]<br />
...<br />
</syntaxhighlight><br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Getting_Started/Spherical_Video_PTZ&diff=53636Spherical Video PTZ/Getting Started/Spherical Video PTZ2024-03-22T15:56:51Z<p>Spalli: /* How does it work? */</p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==What is the Spherical Video PTZ?==<br />
<br />
This application, developed by [https://www.ridgerun.com/ RidgeRun], allows users to easily pan, tilt, and zoom over an image. It is designed to support both NVIDIA GPU devices and systems without an NVIDIA GPU, ensuring optimal performance on any device. You can pan and tilt the image from 0° to 360° and zoom-in or zoom-out from 0.1x to 10x. Use the following picture as a guide:<br />
<br />
[[File:Equirectangular-to-rectilinear-ptz.png|thumbnail|center|640px|Pan-tilt-zoom explanation example]]<br />
<br />
==How does it work?==<br />
It utilizes both the Equirectangular and Rectilinear projections to create its output. If you're unfamiliar with these terms, it's recommended to visit the [[Spherical_Video_PTZ/Getting_Started/Projections_Used|Projections]] section beforehand. The process involves taking an Equirectangular Image (a 360° image) as input and converting it into a Rectilinear Image based on the user's interaction with pan, tilt, and zoom properties. Refer to the following diagram for a more detailed explanation.<br />
<br />
<br />
<gallery widths="300px" heights="280px" mode="packed-hover"><br />
File:Equirectangular-mountain.jpg|thumb|Equirectangular input image (taken from [https://pixabay.com/photos/winter-panorama-mountains-snow-2383930/ here]).<br />
File:Picture-on-globe.png|thumb|Desired output section. <br />
File:Mountain-side-view.png|thumb|Rectilinear output image. <br />
</gallery><br />
<br />
==Features==<br />
Spherical Video PTZ uses 360° videos and generates an output projection depending on where the user is located in the image. The user interacts with the video through pan, tilt, and zoom properties to position themselves as desired, creating an effect where the user feels physically present at the capture location. The following subsections show examples of the application for each of the properties based on the following image:<br />
<br />
<br />
[[File:Equirectangular-paper-picture.png|thumbnail|center|640px|alt=Example of a 360 Video Image|Example of a 360 Video Image. Taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].]]<br />
<br />
<br />
===Pan===<br />
[[File:Panning.gif]]<br />
<br />
<br />
===Tilt===<br />
<br />
[[File:Tilting.gif]]<br />
<br />
===Zoom===<br />
<br />
[[File:Zoom.gif]]<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Getting_Started/Spherical_Video_PTZ&diff=53635Spherical Video PTZ/Getting Started/Spherical Video PTZ2024-03-22T15:54:37Z<p>Spalli: /* What is the Spherical Video PTZ? */</p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==What is the Spherical Video PTZ?==<br />
<br />
This application, developed by [https://www.ridgerun.com/ RidgeRun], allows users to easily pan, tilt, and zoom over an image. It is designed to support both NVIDIA GPU devices and systems without an NVIDIA GPU, ensuring optimal performance on any device. You can pan and tilt the image from 0° to 360° and zoom-in or zoom-out from 0.1x to 10x. Use the following picture as a guide:<br />
<br />
[[File:Equirectangular-to-rectilinear-ptz.png|thumbnail|center|640px|Pan-tilt-zoom explanation example]]<br />
<br />
==How does it work?==<br />
It utilizes both the Equirectangular and Rectilinear projections to create its output. If you're unfamiliar with these terms, it's recommended to visit the [[Spherical_Video_PTZ/Getting_Started/Projections_Used|Projections]] section beforehand. The process involves taking an Equirectangular Image (a 360° image) as input and converting it into a Rectilinear Image based on the user's interaction with pan, tilt, and zoom properties. Refer to the following diagram for a more detailed explanation.<br />
<br />
<br />
<gallery widths="300px" heights="280px" mode="packed-hover"><br />
File:Equirectangular-mountain.jpg|thumb|Equirectangular input image (taken from [https://pixabay.com/photos/winter-panorama-mountains-snow-2383930/ here]).<br />
File:Picture-on-globe.png|thumb|Desired output section.<br />
File:Mountain-side-view.png|thumb|Rectilinear output image.<br />
</gallery><br />
<br />
<br />
==Features==<br />
Spherical Video PTZ uses 360° videos and generates an output projection depending on where the user is located in the image. The user interacts with the video through pan, tilt, and zoom properties to position themselves as desired, creating an effect where the user feels physically present at the capture location. The following subsections show examples of the application for each of the properties based on the following image:<br />
<br />
<br />
[[File:Equirectangular-paper-picture.png|thumbnail|center|640px|alt=Example of a 360 Video Image|Example of a 360 Video Image. Taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].]]<br />
<br />
<br />
===Pan===<br />
[[File:Panning.gif]]<br />
<br />
<br />
===Tilt===<br />
<br />
[[File:Tilting.gif]]<br />
<br />
===Zoom===<br />
<br />
[[File:Zoom.gif]]<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Getting_Started/Spherical_Video_PTZ&diff=53634Spherical Video PTZ/Getting Started/Spherical Video PTZ2024-03-22T15:54:16Z<p>Spalli: /* What is the Spherical Video PTZ? */</p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==What is the Spherical Video PTZ?==<br />
This application, developed by [https://www.ridgerun.com/ RidgeRun], allows users to easily pan, tilt, and zoom over an image. It is designed to support both NVIDIA GPU devices and systems without an NVIDIA GPU, ensuring optimal performance on any device. You can pan and tilt the image from 0° to 360° and zoom-in or zoom-out from 0.1x to 10x. Use the following picture as a guide:<br />
<br />
[[File:Equirectangular-to-rectilinear-ptz.png|thumbnail|center|640px|Pan-tilt-zoom explanation example]]<br />
<br />
==How does it work?==<br />
It utilizes both the Equirectangular and Rectilinear projections to create its output. If you're unfamiliar with these terms, it's recommended to visit the [[Spherical_Video_PTZ/Getting_Started/Projections_Used|Projections]] section beforehand. The process involves taking an Equirectangular Image (a 360° image) as input and converting it into a Rectilinear Image based on the user's interaction with pan, tilt, and zoom properties. Refer to the following diagram for a more detailed explanation.<br />
<br />
<br />
<gallery widths="300px" heights="280px" mode="packed-hover"><br />
File:Equirectangular-mountain.jpg|thumb|Equirectangular input image (taken from [https://pixabay.com/photos/winter-panorama-mountains-snow-2383930/ here]).<br />
File:Picture-on-globe.png|thumb|Desired output section.<br />
File:Mountain-side-view.png|thumb|Rectilinear output image.<br />
</gallery><br />
<br />
<br />
==Features==<br />
Spherical Video PTZ uses 360° videos and generates an output projection depending on where the user is located in the image. The user interacts with the video through pan, tilt, and zoom properties to position themselves as desired, creating an effect where the user feels physically present at the capture location. The following subsections show examples of the application for each of the properties based on the following image:<br />
<br />
<br />
[[File:Equirectangular-paper-picture.png|thumbnail|center|640px|alt=Example of a 360 Video Image|Example of a 360 Video Image. Taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].]]<br />
<br />
<br />
===Pan===<br />
[[File:Panning.gif]]<br />
<br />
<br />
===Tilt===<br />
<br />
[[File:Tilting.gif]]<br />
<br />
===Zoom===<br />
<br />
[[File:Zoom.gif]]<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples/Gst-launch&diff=53597Spherical Video PTZ/Examples/Gst-launch2024-03-21T17:48:38Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
<br />
==Using CPU==<br />
<br />
This command generates a video test pattern, applies PTZ transformations, and resizes the output video to 1280x720 pixels for display.<br />
<syntaxhighlight><br />
gst-launch-1.0 videotestsrc pattern=0 ! "video/x-raw,width=1920,height=1080" ! rrpanoramaptz zoom=2.2 tilt=40 pan=80 ! "video/x-raw,width=1280,height=720" ! queue ! videoconvert ! autovideosink sync=false<br />
</syntaxhighlight><br />
<br />
This pipeline reads an image, applies a 2x zoom transformation, and displays the result:<br />
<syntaxhighlight><br />
gst-launch-1.0 filesrc location=sample.jpg ! jpegdec ! videoscale ! video/x-raw,width=500,height=500 ! imagefreeze ! videoconvert ! video/x-raw,format=RGBA ! rrpanoramaptz zoom=2 ! videoconvert ! autovideosink sync=false<br />
</syntaxhighlight><br />
<br />
===Using a camera===<br />
<syntaxhighlight><br />
gst-launch-1.0 autovideosrc ! "video/x-raw,width=1920,height=1080" ! videoconvert ! "video/x-raw,format=(string)RGBA" ! rrpanoramaptz pan=80 tilt=40 zoom=2.2 ! "video/x-raw,width=1280,height=720" ! videoconvert ! autovideosink sync=false<br />
</syntaxhighlight><br />
<br />
==Using GPU==<br />
<syntaxhighlight><br />
gst-launch-1.0 videotestsrc pattern=0 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080" ! rrpanoramaptz ! nvvidconv ! nvv4l2h264enc idrinterval=30 insert-aud=true insert-sps-pps=true insert-vui=true ! h264parse ! mpegtsmux ! udpsink port=1234 host=192.168.0.10<br />
</syntaxhighlight><br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples/Gst-launch&diff=53596Spherical Video PTZ/Examples/Gst-launch2024-03-21T17:47:02Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
<br />
==Using CPU==<br />
<br />
This command generates a video test pattern, applies PTZ transformations, and resizes the output video to 1280x720 pixels for display.<br />
<syntaxhighlight><br />
gst-launch-1.0 videotestsrc pattern=0 ! "video/x-raw,width=1920,height=1080" ! rrpanoramaptz zoom=2.2 tilt=40 pan=80 ! "video/x-raw,width=1280,height=720" ! queue ! videoconvert ! autovideosink sync=false<br />
</syntaxhighlight><br />
<br />
This pipeline reads an image, applies a 2x zoom transformation, and displays the result:<br />
<syntaxhighlight><br />
gst-launch-1.0 filesrc location=sample.jpg ! jpegdec ! videoscale ! video/x-raw,width=500,height=500 ! imagefreeze ! videoconvert ! video/x-raw,format=RGBA ! rrpanoramaptz zoom=2 ! videoconvert ! autovideosink sync=false<br />
</syntaxhighlight><br />
<br />
Using a camera:<br />
<syntaxhighlight><br />
gst-launch-1.0 autovideosrc ! "video/x-raw,width=1920,height=1080" ! videoconvert ! "video/x-raw,format=(string)RGBA" ! rrpanoramaptz pan=80 tilt=40 zoom=2.2 ! "video/x-raw,width=1280,height=720" ! videoconvert ! autovideosink sync=false<br />
</syntaxhighlight><br />
<br />
==Using GPU==<br />
<syntaxhighlight><br />
gst-launch-1.0 videotestsrc pattern=0 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080" ! rrpanoramaptz ! nvvidconv ! nvv4l2h264enc idrinterval=30 insert-aud=true insert-sps-pps=true insert-vui=true ! h264parse ! mpegtsmux ! udpsink port=1234 host=192.168.0.10<br />
</syntaxhighlight><br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Examples/GstD&diff=53595Spherical Video PTZ/Examples/GstD2024-03-21T17:15:28Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
===Using GPU===<br />
<syntaxhighlight><br />
gst-launch-1.0 videotestsrc pattern=0 ! nvvidconv ! "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080" ! rrpanoramaptz ! nvvidconv ! nvv4l2h264enc idrinterval=30 insert-aud=true insert-sps-pps=true insert-vui=true ! h264parse ! mpegtsmux ! udpsink port=1234 host=192.168.0.10<br />
</syntaxhighlight><br />
<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Getting_Started/Spherical_Video_PTZ&diff=53568Spherical Video PTZ/Getting Started/Spherical Video PTZ2024-03-21T11:36:14Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==What is the Spherical Video PTZ?==<br />
This application, developed by [https://www.ridgerun.com/ RidgeRun], allows users to easily pan, tilt, and zoom over an image. It is designed to support both NVIDIA GPU devices and systems without an NVIDIA GPU, ensuring optimal performance on any device. You can pan and tilt the image from 0° to 360° and zoom-in or zoom-out from 0.1x to 10x. Use the following picture as a guide:<br />
<br />
<br />
[[File:Equirectangular-to-rectilinear-ptz.png|thumbnail|center|640px|Pan-tilt-zoom explanation example]]<br />
<br />
<br />
==How does it work?==<br />
It utilizes both the Equirectangular and Rectilinear projections to create its output. If you're unfamiliar with these terms, it's recommended to visit the [[Spherical_Video_PTZ/Getting_Started/Projections_Used|Projections]] section beforehand. The process involves taking an Equirectangular Image (a 360° image) as input and converting it into a Rectilinear Image based on the user's interaction with pan, tilt, and zoom properties. Refer to the following diagram for a more detailed explanation.<br />
<br />
<br />
<gallery widths="300px" heights="300px" mode="packed-hover"><br />
File:Equirectangular-mountain.jpg|thumb|Equirectangular input image (taken from [https://pixabay.com/photos/winter-panorama-mountains-snow-2383930/ here]).<br />
File:Picture-on-globe.png|thumb|Desired output section.<br />
File:Mountain-side-view.png|thumb|Rectilinear output image.<br />
</gallery><br />
<br />
<br />
==Features==<br />
Spherical Video PTZ uses 360° videos and generates an output projection depending on where the user is located in the image. The user interacts with the video through pan, tilt, and zoom properties to position themselves as desired, creating an effect where the user feels physically present at the capture location. The following subsections show examples of the application for each of the properties based on the following image:<br />
<br />
<br />
[[File:Equirectangular-paper-picture.png|thumbnail|center|640px|alt=Example of a 360 Video Image|Example of a 360 Video Image. Taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].]]<br />
<br />
===Pan===<br />
<br />
<gallery widths="400px" heights="300px" mode="packed-hover"><br />
File:Pan-example1.png|First example after panning over the base image taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].<br />
File:Pan-example2.png|Second example after panning over the base image.<br />
</gallery><br />
<br />
===Tilt===<br />
<br />
<gallery widths="400px" heights="300px" mode="packed-hover"><br />
File:Tilt-example1.png|First example after tilting over the base image taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].<br />
File:Tilt-example2.png|Second example after tilting over the base image taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].<br />
</gallery><br />
<br />
<br />
===Zoom===<br />
<br />
<gallery widths="400px" heights="300px" mode="packed-hover"><br />
File:Zoom-out-example.png|Example after zooming out over the base image taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].<br />
File:Zoom-in-example.png|Example after zooming in over the base image taken from [https://openaccess.thecvf.com/content_WACV_2020/html/Chou_360-Indoor_Towards_Learning_Real-World_Objects_in_360deg_Indoor_Equirectangular_Images_WACV_2020_paper.html here].<br />
</gallery><br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Getting_Started/Projections_Used&diff=53567Spherical Video PTZ/Getting Started/Projections Used2024-03-21T11:34:01Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==Overview==<br />
This section provides the theoretical explanations of the Equirectangular and Rectilinear projections, which are used on the Spherical Video PTZ.<br />
<br />
==Equirectangular Projection==<br />
Equirectangular images, also referred to as 360° images, capture a panoramic view from a fixed point where the imaging system is positioned. These images encapsulate a complete 360° perspective, allowing all surrounding information to be displayed within a single flat image. To illustrate this concept, consider visualising the Earth as a sphere and then "unfolding" it along the central meridian (shown by the red lines in the accompanying image). This "unfolding" process transforms the spherical surface into a plane image. Please note that the resulting plane has an aspect ratio of 2:1, because the vertical range covers 180° and the horizontal range covers 360° after the unfolding procedure.<br />
<br />
<gallery widths="300px" heights="200px" mode="packed-hover"><br />
File:Meridians.png|Meridians on earth globe.<br />
File:Map with meridians.png|Meridians on earth map.<br />
</gallery><br />
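The mapping described above can be made concrete with a short sketch: longitude spans the full 360° across the image width and latitude spans 180° across the height, which is exactly why the flat image has a 2:1 aspect ratio. This is an illustrative mapping only; the precise pixel convention (corner vs. center sampling) is an assumption, not libpanorama's implementation.<br />

```python
# Illustrative mapping from spherical angles to equirectangular pixel
# coordinates. The pixel convention used here is an assumption.

def angles_to_pixel(lon_deg, lat_deg, width, height):
    """Map longitude [-180, 180] and latitude [-90, 90] to an (x, y) pixel."""
    x = (lon_deg + 180.0) / 360.0 * (width - 1)   # 360 degrees across the width
    y = (90.0 - lat_deg) / 180.0 * (height - 1)   # 180 degrees across the height
    return x, y

w, h = 4096, 2048                           # typical 2:1 equirectangular size
print(angles_to_pixel(0.0, 0.0, w, h))      # image center -> (2047.5, 1023.5)
print(angles_to_pixel(-180.0, 90.0, w, h))  # top-left corner -> (0.0, 0.0)
```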
<br />
==Rectilinear Projection==<br />
<br />
The Rectilinear projection, also referred to as the Gnomonic Projection, is a method used to project the surface of a sphere (or a 360° image) onto a plane. Typically, the plane onto which the surface points are mapped is tangent to the sphere at a single point. This projection is accomplished by using the centre of the sphere as the projection point. It's important to note that the resulting plane does not intersect the center of the sphere. The diagram below provides a visual example of this process:<br />
<br><br />
<br />
<br />
[[File:Rectilinear-projection.png|thumbnail|center|640px|Rectilinear projection: great circle projection example]]<br />
<br />
<br />
The term "rectilinear" in the Rectilinear projection refers to its use of straight lines for the projection. This means that lines that are parallel in the real world remain parallel in the projection. Additionally, it is worth noting that every great circle (which is the largest circle that can be drawn on any given sphere) is transformed into a straight line in the resulting plane during this projection process.<br />
<br><br />
<br />
==Equirectangular to Rectilinear Projection==<br />
Now that both projections' workings are clear, let's delve into the crucial details: why is this transformation necessary? <br />
<br />
<br />
If we wrap an Equirectangular image around a sphere, we can construct a spherical image where the data is accurately positioned on the surface of the sphere. However, if a user crops the Equirectangular image directly, the resulting image exhibits distortions due to the curvature of the sphere. Projecting the image into a Rectilinear image instead removes these distortions from the desired output.<br />
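The Equirectangular-to-Rectilinear lookup can be sketched as follows: for each output pixel, build a ray through a virtual pinhole camera, rotate it by the pan (yaw) and tilt (pitch) angles, and convert the ray direction back to longitude/latitude to locate the source sample in the 360° image. This is a minimal illustration under assumed conventions (axis order, tilt sign, pinhole model); it is not libpanorama's actual implementation.<br />

```python
import math

# Sketch of the equirectangular -> rectilinear lookup: map an output pixel
# to the longitude/latitude it samples from the 360-degree input image.
# Camera conventions here are assumptions for illustration only.

def output_pixel_to_lonlat(px, py, out_w, out_h, fov_deg, pan_deg, tilt_deg):
    # Focal length (in pixels) derived from the horizontal field of view.
    f = (out_w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    x = px - out_w / 2.0
    y = out_h / 2.0 - py
    # Direction of the ray through this pixel (camera looks along +z).
    norm = math.sqrt(x * x + y * y + f * f)
    dx, dy, dz = x / norm, y / norm, f / norm
    # Tilt: rotate the ray about the x axis (pitch).
    t = math.radians(tilt_deg)
    dy, dz = dy * math.cos(t) + dz * math.sin(t), -dy * math.sin(t) + dz * math.cos(t)
    # Pan: rotate the ray about the y axis (yaw).
    p = math.radians(pan_deg)
    dx, dz = dx * math.cos(p) + dz * math.sin(p), -dx * math.sin(p) + dz * math.cos(p)
    lon = math.degrees(math.atan2(dx, dz))
    lat = math.degrees(math.asin(dy))
    return lon, lat

# The center of the output view points exactly at (pan, tilt).
print(output_pixel_to_lonlat(640, 360, 1280, 720, 90.0, 30.0, 0.0))
```

The returned longitude/latitude would then be fed to an equirectangular pixel lookup (as in the aspect-ratio discussion above) to fetch the source color, typically with interpolation.<br />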
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/Getting_Started/Projections_Used&diff=53566Spherical Video PTZ/Getting Started/Projections Used2024-03-21T11:33:23Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==Overview==<br />
This section provides a theoretical explanation of the Equirectangular and Rectilinear projections used in the Spherical Video PTZ.<br />
<br />
<br />
==Equirectangular Projection==<br />
Equirectangular images, also referred to as 360° images, capture a panoramic view from the fixed point where the imaging system is positioned. These images encapsulate a complete 360° perspective, allowing all surrounding information to be displayed within a single flat image. To illustrate this concept, consider visualizing the Earth as a sphere and then "unfolding" it along the central meridian (shown by the red lines in the accompanying image). This "unfolding" process transforms the spherical surface into a planar image. Note that the resulting plane has an aspect ratio of 2:1, because after the unfolding procedure the vertical range covers 180° while the horizontal range covers 360°.<br />
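The 2:1 relationship follows directly from the coordinate mapping. As a minimal illustrative sketch (the helper name is hypothetical, and longitude/latitude are assumed to be in degrees), this is how a viewing direction lands on an equirectangular image:<br />

```python
def equirect_pixel(lon_deg, lat_deg, width, height):
    """Map a direction (longitude, latitude in degrees) to the
    equirectangular pixel coordinate it occupies.

    Longitude spans 360 degrees across the width and latitude spans
    180 degrees across the height, hence the 2:1 aspect ratio.
    """
    u = (lon_deg / 360.0 + 0.5) * width
    v = (0.5 - lat_deg / 180.0) * height
    return u, v
```

For example, the direction (0°, 0°) lands in the exact center of the image, while (±180°, ±90°) land on its edges.<br />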
<br />
<br />
<gallery widths="300px" heights="200px" mode="packed-hover"><br />
File:Meridians.png|Meridians on earth globe.<br />
File:Map with meridians.png|Meridians on earth map.<br />
</gallery><br />
<br />
==Rectilinear Projection==<br />
<br />
The Rectilinear projection, also referred to as the Gnomonic projection, is a method used to project the surface of a sphere (or a 360° image) onto a plane. Typically, the plane onto which the surface points are mapped is tangent to the sphere at a single point. The projection is accomplished by using the center of the sphere as the projection point; note that the resulting plane does not intersect the center of the sphere. The diagram below provides a visual example of this process:<br />
<br><br />
<br />
<br />
[[File:Rectilinear-projection.png|thumbnail|center|640px|Rectilinear projection: great circle projection example]]<br />
<br />
<br />
The term "rectilinear" refers to the projection's defining property: lines that are straight in the real world remain straight in the projected image. In particular, every great circle (the largest circle that can be drawn on a given sphere) is transformed into a straight line on the resulting plane.<br />
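For reference, the standard gnomonic projection formulas from the map-projection literature map a point at latitude <math>\varphi</math> and longitude <math>\lambda</math> to plane coordinates <math>(x, y)</math> on the plane tangent at <math>(\varphi_1, \lambda_0)</math>:<br />

```latex
\begin{aligned}
x &= \frac{\cos\varphi \,\sin(\lambda - \lambda_0)}{\cos c}\\[4pt]
y &= \frac{\cos\varphi_1 \,\sin\varphi - \sin\varphi_1 \,\cos\varphi \,\cos(\lambda - \lambda_0)}{\cos c}\\[4pt]
\cos c &= \sin\varphi_1 \,\sin\varphi + \cos\varphi_1 \,\cos\varphi \,\cos(\lambda - \lambda_0)
\end{aligned}
```

Here <math>c</math> is the angular distance from the tangent point. Points with <math>\cos c \le 0</math> lie on the far hemisphere and cannot be projected, which is why a single rectilinear view always covers less than 180°.<br />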
<br><br />
<br />
==Equirectangular to Rectilinear Projection==<br />
Now that the workings of both projections are clear, let's address the key question: why is this transformation necessary? <br />
<br />
<br />
If we wrap an Equirectangular image back around a sphere, we can construct a spherical image where the data is accurately positioned on the surface of the sphere. However, if a user chooses to crop the Equirectangular image directly, the resulting image exhibits distortions caused by the curvature of the sphere. Projecting the cropped region into a Rectilinear image therefore removes these distortions from the desired output.<br />
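The transformation can be sketched per pixel: for each pixel of the rectilinear output, compute the 3D ray it corresponds to, convert that ray to longitude/latitude, and sample the matching equirectangular coordinate. The following is an illustrative sketch, not the Spherical Video PTZ implementation; the function name, the fixed forward-looking view (no pan or tilt), and the 90° default field of view are assumptions:<br />

```python
import math

def rectilinear_to_equirect(px, py, out_w, out_h, eq_w, eq_h, hfov_deg=90.0):
    """Map a pixel (px, py) of the rectilinear output to the
    equirectangular source coordinate it should sample from.

    Assumes the virtual camera looks straight at the equirectangular
    center (longitude 0, latitude 0) with no pan or tilt applied.
    """
    # Focal length in pixels, derived from the horizontal field of view.
    f = (out_w / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    # Ray through the pixel on the tangent plane (z points forward).
    x = px - out_w / 2.0
    y = py - out_h / 2.0
    z = f
    # Spherical angles of the ray.
    lon = math.atan2(x, z)                   # longitude in [-pi, pi]
    lat = math.atan2(-y, math.hypot(x, z))   # latitude in [-pi/2, pi/2]
    # Equirectangular coordinates: longitude -> u, latitude -> v.
    u = (lon / (2.0 * math.pi) + 0.5) * eq_w
    v = (0.5 - lat / math.pi) * eq_h
    return u, v
```

A real PTZ implementation would additionally rotate the ray by the pan and tilt angles before converting to spherical coordinates, and would interpolate between neighboring source pixels when sampling.<br />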
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spallihttps://developer.ridgerun.com/wiki/index.php?title=Spherical_Video_PTZ/User_Guide/Building_and_Installation&diff=53519Spherical Video PTZ/User Guide/Building and Installation2024-03-20T12:41:09Z<p>Spalli: </p>
<hr />
<div><noinclude><br />
{{Spherical Video PTZ/Head|previous=|next=|metakeywords=}}<br />
</noinclude><br />
<br />
==Overview==<br />
<br />
This wiki shows how to build the source code. It assumes you have already purchased a license and received access to the source code. If not, head to [[Spherical Video PTZ/Getting Started/How to Get the Code|How to Get the Code]] for instructions on how to proceed.<br />
<br />
=== Install the Dependencies ===<br />
<br />
Before anything, ensure you have installed the following dependencies:<br />
<br />
* '''Git''': To clone the repository.<br />
* '''Meson''': To configure the project.<br />
* '''Ninja''': To build the project.<br />
* '''JsonCPP dev files''': For parameter loading.<br />
* '''OpenCV dev files''': For the panoramaptz GStreamer element.<br />
* '''GstCUDA''': ''(optional)'' for CUDA-accelerated GStreamer support.<br />
* '''GStreamer dev files and plugins''': ''(optional)'' for image loading.<br />
* '''QT5 dev files''': ''(optional)'' for image display.<br />
* '''CppUTest dev files''': ''(optional)'' for unit testing.<br />
* '''Doxygen, Graphviz''': ''(optional)'' for documentation generation.<br />
* '''Wget, Unzip''': ''(optional)'' to download and unpack the sample images.<br />
<br />
In Debian-based systems (like Ubuntu) you can run:<br />
<syntaxhighlight line lang=bash><br />
sudo apt update<br />
sudo apt install -y \<br />
libjsoncpp-dev \<br />
libopencv-dev libopencv-core-dev \<br />
libopencv-video-dev libopencv-highgui-dev libopencv-videoio-dev \<br />
libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \<br />
gstreamer1.0-plugins-bad gstreamer1.0-plugins-good gstreamer1.0-plugins-base \<br />
gstreamer1.0-libav gstreamer1.0-plugins-ugly \<br />
qtbase5-dev qtmultimedia5-dev libqt5multimedia5-plugins \<br />
git wget unzip libcpputest-dev doxygen graphviz \<br />
python3-pip ninja-build<br />
sudo -H pip3 install meson<br />
</syntaxhighlight><br />
<br />
For GstCUDA installation and reference, see [[GstCUDA]].<br />
<br />
=== Building the Project ===<br />
<br />
'''1.''' Start by cloning the project from the repository you have been given:<br />
<br />
<syntaxhighlight lang=bash line><br />
git clone git@gitlab.ridgerun.com:$YOUR_REPO/libpanorama<br />
cd libpanorama<br />
</syntaxhighlight><br />
<br />
{{ambox|type=info|text=Replace `$YOUR_REPO` with the actual repository path you were given by RidgeRun}}<br />
<br />
'''2.''' Configure, build, and install the project by running the following:<br />
<br />
==== CUDA Accelerated ====<br />
<syntaxhighlight lang=bash line><br />
meson builddir<br />
ninja -C builddir<br />
sudo ninja -C builddir install<br />
</syntaxhighlight><br />
<br />
{{ambox|type=content|text='''If anything fails, please provide the output log of the configuration step to [mailto:support@ridgerun.com support@ridgerun.com]'''}}<br />
<br />
<br />
There are some configuration options you can use in case you want to fine-tune your build. They are not required, and we recommend leaving the defaults unless you have a specific reason to change them.<br />
<br />
<center><br />
{| class="wikitable"<br />
|+ Advanced configuration options<br />
|-<br />
! Option name !! Possible values !! Description !! Default<br />
|-<br />
| examples || enabled/disabled || Whether or not to build the examples. || enabled<br />
|-<br />
| tests || enabled/disabled || Whether or not to build the tests. || enabled<br />
|-<br />
| docs || enabled/disabled || Whether or not to build the API docs. || enabled<br />
|-<br />
| npp || enabled/disabled || Whether or not to use CUDA (NPP) acceleration. || enabled<br />
|-<br />
| opencv || enabled/disabled || Whether or not to build the OpenCV IO classes. || enabled<br />
|-<br />
| gstreamer || enabled/disabled || Whether or not to build the GStreamer IO classes. || enabled<br />
|-<br />
| qt || enabled/disabled || Whether or not to build the QT IO classes. || enabled<br />
|}<br />
</center><br />
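As a sketch of how these options are passed (assuming Meson's standard <code>-Doption=value</code> syntax; the option names come from the table above), configuring a fresh build without CUDA (NPP) acceleration or API documentation would look like:<br />

```shell
# Configure a fresh build directory with NPP acceleration and docs disabled.
meson builddir -Dnpp=disabled -Ddocs=disabled
ninja -C builddir
```
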
<br />
<br />
=== Validating the Build ===<br />
<br />
To ensure the build was successful, run the default example with the provided samples.<br />
<br />
'''1.''' Download the sample images, if you haven't already.<br />
<br />
<syntaxhighlight lang=bash line><br />
cd samples<br />
./download_samples.sh<br />
cd ..<br />
</syntaxhighlight><br />
<br />
'''2.''' Run the example as:<br />
<syntaxhighlight lang=bash line><br />
./builddir/examples/equirectangular_to_rectilinear_npp examples/example_image.jpg <br />
</syntaxhighlight><br />
<br />
You should see the resulting rectilinear image as output.<br />
<br />
<br />
<br />
<br />
<noinclude><br />
{{Spherical Video PTZ/Foot||}}<br />
</noinclude></div>Spalli