Xavier/JetPack 5.0.2/Getting Started/Components
</noinclude>


= JetPack Components =


{| class="wikitable" style="margin-right: 22em;"
|-
| OpenCV
| 3.3.1
| Computer vision library for C++.
|}
= L4T =
L4T (Linux for Tegra) is NVIDIA's Linux OS version, modified to support its platforms: it integrates custom functionality to manage kernel, device tree and user-space features, and includes all the drivers and device tree changes required to support the EVM carrier boards. Additional libraries and support are added to make it easy to use the display and image processing capabilities, and the GPU power exposed by the NVIDIA platforms.
Features:
* Kernel version 4.9
* Support for 64-bit user space and runtime libraries
* Vulkan Support
* V4L2 media-controller driver support for camera sensors (bypassing ISP)
* libargus provides low-level frame-synchronous API for camera applications
** RAW output CSI cameras needing ISP can be used with either libargus or GStreamer plugin
* Media APIs:
** OpenGL 4.6 Beta
** OpenGL ES 3.2
** OpenGL ES path extensions
** EGL 1.5 with EGLImage
* X Resize, Rotate and Reflect Extension (RandR) 1.4
* X11 Support
= Multimedia API =
The Multimedia API, also known as MMAPI, is a collection of lower-level APIs that support application development. It is intended for developers who use a custom framework or wish to avoid a higher-level framework such as GStreamer.
MMAPI must be downloaded separately from the NVIDIA Jetson Download Center: [https://developer.nvidia.com/embedded/dlc/l4t-multimedia-api-31-0-1 L4T Multimedia API 31.0.1]
Components:
* '''Libargus''' for imaging applications.
* '''Buffer utility''' for buffer allocation, management and sharing.
* '''NVOSD''' for On-Screen display.
* '''V4L2 API extensions''' for video converter, decoder and encoder support.
* '''Application framework'''
* '''Samples''' that demonstrate image processing with CUDA, object detection and classification with cuDNN, TensorRT and OpenCV4Tegra usage.
= TensorRT =
TensorRT is a C++ library that facilitates high-performance inference on NVIDIA platforms. It is designed to work with the most popular deep learning frameworks, such as TensorFlow, Caffe and PyTorch. It focuses specifically on running an already-trained model; to train a model, other libraries such as cuDNN are more suitable. Some frameworks, such as TensorFlow, have integrated TensorRT so that it can be used to accelerate inference within the framework itself. For other frameworks, such as Caffe, a parser is provided to generate a model that can be imported into TensorRT. For more information on using this library, read our wiki [[Xavier/Deep_Learning/TensorRT|here]].
= CUDA =
CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA developers are able to dramatically speed up computing applications by harnessing the power of GPUs.
You can learn more about the Volta GPU characteristics and CUDA support on our [[Xavier/Processors/GPU|Volta GPU]] wiki page.
For a complete summary of the samples go to the [[Xavier/Processors/GPU/CUDA|CUDA]] section.
=VisionWorks=
VisionWorks is a software development package for computer vision (CV) and image processing. It implements and extends the Khronos OpenVX standard and has optimizations using the Xavier's GPU.
VisionWorks is installed with JetPack and includes samples that can be used as a starting point for development. The source code is located in <code>/usr/share/visionworks/sources/</code> and can be installed and built with the following commands:
<syntaxhighlight lang=bash>
/usr/share/visionworks/sources/install-samples.sh ~/
cd ~/VisionWorks-1.6-Samples/
make -j4 # add dbg=1 to make debug build
</syntaxhighlight>
This compiles all the samples and places the resulting binaries in the <code>~/VisionWorks-1.6-Samples/bin</code> directory. Individual samples can also be compiled from their own directories, but the executables are still stored in the same <code>bin</code> directory as when all samples are built from the top directory.
{| class="wikitable"
|-
! Sample !! Description
|-
| Feature Tracker || Detects local features with the Harris or FAST feature detector and tracks them using the Lucas-Kanade algorithm.
|-
| Stereo Matching || Evaluates disparity with the Semi-Global Matching algorithm and displays the stereo matching result.
|-
| CUDA Layer Object Tracker || Uses pyramidal Optical Flow to perform object tracking.
|-
| Hough Transform || Detects circles and lines via the Hough Transform.
|-
| Video Stabilizer || Uses the Harris feature detector and sparse pyramidal optical flow to estimate and stabilize a frame's motion.
|-
| Motion Estimation || Implements the NVIDIA Iterative Motion Estimation algorithm to estimate motion in a frame.
|-
| OpenCV and NPP Interop || Shows interoperability of VisionWorks with the NPP and OpenCV libraries. It takes two images, blurs them and performs alpha blending between them.
|-
| OpenGL Interop || Shows interoperability of VisionWorks and OpenGL.
|-
| Video Playback || Shows basic image and video I/O.
|-
| NVIDIA Gstreamer Camera Capture || Shows NVIDIA GStreamer camera access.
|}
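The OpenCV and NPP interop sample above blurs two images and alpha-blends them. As a rough CPU-side illustration of the alpha blending step only (the actual sample runs GPU primitives from NPP/VisionWorks; <code>alphaBlend</code> below is an invented helper, not part of the sample):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Blend two grayscale images pixel by pixel:
//   out = alpha * a + (1 - alpha) * b
// Plain CPU sketch of the alpha blending operation; the VisionWorks sample
// performs the equivalent step with GPU primitives.
std::vector<uint8_t> alphaBlend(const std::vector<uint8_t>& a,
                                const std::vector<uint8_t>& b,
                                float alpha) {
    std::vector<uint8_t> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        // +0.5f rounds to the nearest integer before truncation.
        out[i] = static_cast<uint8_t>(alpha * a[i] + (1.0f - alpha) * b[i] + 0.5f);
    }
    return out;
}
```

With <code>alpha = 0.5</code> the result is the per-pixel average of the two images; <code>alpha = 1.0</code> returns the first image unchanged.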

