

= AGX Orin =


== Platform Setup ==
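All measurements on this page were taken both with and without the ''jetson_clocks'' script, which locks the CPU, GPU, and EMC clocks to their maximum frequencies. A minimal sketch of toggling it on a stock JetPack/L4T install:

<source lang=bash>
# Save the current clock configuration so it can be restored later
sudo jetson_clocks --store

# Lock the CPU, GPU and EMC clocks to their maximum frequencies
sudo jetson_clocks

# Return to the previously stored clock configuration
sudo jetson_clocks --restore
</source>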
[[File:Latency4kstitcherorinagx.png|1000x500px|thumb|center|Latency on 3840x2160 images with and without jetson_clocks.sh]]


= Orin NX =


== Framerate ==
 
=== 1920x1080 ===
 
The following graph shows the framerate (FPS) achieved for each input setup at 1920x1080, with and without the ''jetson_clocks'' script.
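
These numbers can be reproduced with the ''perf'' element from RidgeRun's gst-perf, which prints the instantaneous and mean framerate of the stream passing through it. A minimal sketch using a synthetic source (''videotestsrc'' stands in here for the real decode and stitching chain; change the caps to 3840x2160 for the 4K case):

<source lang=bash>
# Print the framerate of a synthetic 1080p stream; the perf element logs
# fps and mean_fps for whatever flows through it
gst-launch-1.0 videotestsrc ! video/x-raw,width=1920,height=1080 ! perf ! fakesink -v
</source>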
 
=== 4K ===
 
The following graph shows the framerate (FPS) achieved for each input setup at 4K (3840x2160), with and without the ''jetson_clocks'' script.
 
== Latency ==
 
 
Following the same structure as the AGX Orin latency measurements, the images below show the latency of the ''cuda-stitcher'' element for multiple input counts and resolutions, with and without the ''jetson_clocks'' script.


To replicate the results with your own images, videos, or cameras, use the following pipeline as a base for the 2-camera case; add more inputs for the other cases and adjust the resolution if needed.


<source lang=bash>
# Paths to the input recordings
INPUT_0=<VIDEO_INPUT_0>
INPUT_1=<VIDEO_INPUT_1>

# Decode both inputs on the hardware decoder and feed them to the stitcher;
# the perf element reports framerate and CPU load at the output
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
filesrc location=$INPUT_0 ! qtdemux ! h264parse ! nvv4l2decoder ! queue ! nvvidconv ! stitcher.sink_0 \
filesrc location=$INPUT_1 ! qtdemux ! h264parse ! nvv4l2decoder ! queue ! nvvidconv ! stitcher.sink_1 \
stitcher. ! perf print-cpu-load=true ! fakesink -v
</source>
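
Note that the ''homography-list'' property takes the JSON as a single-line string, which is why the ''tr'' calls strip the newlines and spaces from ''homographies.json''. As a sketch of how the other cases extend this base pipeline (assuming ''homographies.json'' already describes the extra camera, and with ''<VIDEO_INPUT_2>'' as a placeholder for the third recording), a 3-input variant adds one more decode branch on ''stitcher.sink_2'':

<source lang=bash>
INPUT_0=<VIDEO_INPUT_0>
INPUT_1=<VIDEO_INPUT_1>
INPUT_2=<VIDEO_INPUT_2>

# Same base pipeline with a third decode branch feeding an additional sink pad
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
filesrc location=$INPUT_0 ! qtdemux ! h264parse ! nvv4l2decoder ! queue ! nvvidconv ! stitcher.sink_0 \
filesrc location=$INPUT_1 ! qtdemux ! h264parse ! nvv4l2decoder ! queue ! nvvidconv ! stitcher.sink_1 \
filesrc location=$INPUT_2 ! qtdemux ! h264parse ! nvv4l2decoder ! queue ! nvvidconv ! stitcher.sink_2 \
stitcher. ! perf print-cpu-load=true ! fakesink -v
</source>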


= Jetson Orin Platforms CPU Usage =