Xavier/JetPack 5.0.2/Getting Started/Components

</noinclude>


== JetPack Components ==


{| class="wikitable" style="margin-right: 22em;"
|}


== L4T ==


L4T (Linux for Tegra) is NVIDIA's Linux OS release, modified to support its platforms and to integrate custom functionality for managing kernel, device tree, and user-space features. It also includes all the drivers and device tree changes needed to support the EVM carrier boards, along with additional libraries and support that make it easy to use the display and image processing capabilities, so applications can take advantage of the GPU power exposed by the NVIDIA platforms. A quick way to check which L4T release is installed is sketched after the feature list below.
* X11 Support
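
As a quick practical check, the sketch below reads <code>/etc/nv_tegra_release</code>, the file where L4T records the installed release and revision. This is only a convenience example; running <code>cat /etc/nv_tegra_release</code> from a terminal gives the same information.

<syntaxhighlight lang="cpp">
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // L4T stores its release information in this file.
    std::ifstream release("/etc/nv_tegra_release");
    if (!release) {
        std::cerr << "Not an L4T system (file not found)" << std::endl;
        return 1;
    }

    std::string line;
    while (std::getline(release, line))
        std::cout << line << std::endl;  // e.g. "# R35 (release), REVISION: 1.0, ..."
    return 0;
}
</syntaxhighlight>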


== Multimedia API ==
The Multimedia API, also known as MMAPI, is a collection of lower-level APIs that support application development. MMAPI is intended for developers who use a custom framework or who wish to avoid a higher-level framework such as GStreamer (a minimal decoder sketch is shown after the list below).


* '''Samples''' that demonstrate image processing with CUDA, object detection and classification with cuDNN and TensorRT, and OpenCV4Tegra usage.
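
To illustrate how MMAPI code differs from a GStreamer pipeline, the sketch below creates an H.264 decoder with the <code>NvVideoDecoder</code> class used throughout the public <code>jetson_multimedia_api</code> samples. The method names follow the sample sources shipped with JetPack, but exact signatures and the full plane setup vary between releases, so treat this as an outline rather than working reference code.

<syntaxhighlight lang="cpp">
#include "NvVideoDecoder.h"     // header from the jetson_multimedia_api package
#include <linux/videodev2.h>
#include <iostream>

int main()
{
    // Create a decoder instance backed by the hardware decode engine.
    // "dec0" is just an instance name used for logging/debugging.
    NvVideoDecoder *dec = NvVideoDecoder::createVideoDecoder("dec0");
    if (!dec) {
        std::cerr << "Failed to create NvVideoDecoder" << std::endl;
        return 1;
    }

    // Declare that the input (output plane) carries an H.264 elementary
    // stream, with a maximum size for each encoded chunk.
    const uint32_t CHUNK_SIZE = 4 * 1024 * 1024;
    dec->setOutputPlaneFormat(V4L2_PIX_FMT_H264, CHUNK_SIZE);

    // A real application would now set up the output/capture planes,
    // queue encoded buffers and dequeue decoded frames; see the
    // 00_video_decode sample in the Multimedia API package.

    delete dec;
    return 0;
}
</syntaxhighlight>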


== TensorRT ==
TensorRT is a C++ library that facilitates high-performance inference on NVIDIA platforms. It is designed to work with the most popular deep learning frameworks, such as TensorFlow, Caffe, and PyTorch. It focuses specifically on running an already-trained model; to train a model, other libraries such as cuDNN are more suitable. Some frameworks, like TensorFlow, have integrated TensorRT so that it can be used to accelerate inference within the framework. For other frameworks, like Caffe, a parser is provided to generate a model that can be imported into TensorRT. For more information on using this library, read our wiki [[Xavier/Deep_Learning/TensorRT|here]].
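
To make the typical workflow more concrete, the sketch below builds a serialized TensorRT engine from a trained model using the C++ API. It uses the ONNX parser (the Caffe parser mentioned above follows the same pattern), and <code>model.onnx</code> / <code>model.engine</code> are placeholder file names; builder configuration details differ between TensorRT versions, so treat this as an outline of the API flow rather than a drop-in program.

<syntaxhighlight lang="cpp">
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char *msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    Logger logger;

    // Create a network definition and parse the trained model into it.
    auto builder = nvinfer1::createInferBuilder(logger);
    auto network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
    auto parser = nvonnxparser::createParser(*network, logger);

    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse model" << std::endl;
        return 1;
    }

    // Build the optimized inference engine and save it to disk.
    auto config = builder->createBuilderConfig();
    auto serialized = builder->buildSerializedNetwork(*network, *config);

    std::ofstream out("model.engine", std::ios::binary);
    out.write(static_cast<const char *>(serialized->data()), serialized->size());

    delete serialized;
    delete parser;
    delete network;
    delete config;
    delete builder;
    return 0;
}
</syntaxhighlight>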


== CUDA ==


CUDA is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
For a complete summary of the samples, go to the [[Xavier/Processors/GPU/CUDA|CUDA]] section.
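
As a minimal illustration of the programming model, the kernel below adds two vectors on the GPU. It only assumes the CUDA runtime API that ships with JetPack and compiles with <code>nvcc</code>.

<syntaxhighlight lang="cpp">
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread adds one element of the two input vectors.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate unified memory, accessible from both the CPU and the
    // integrated GPU on Jetson platforms.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; i++) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expected 3.0)\n", c[0]);

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
</syntaxhighlight>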


== VisionWorks ==
VisionWorks is a software development package for computer vision (CV) and image processing. It implements and extends the Khronos OpenVX standard and includes optimizations that use the Xavier's GPU.
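
Since VisionWorks exposes the OpenVX C API, the sketch below shows the general shape of an OpenVX program: create a context, create images, and run an immediate-mode Gaussian blur. It uses only standard OpenVX 1.x calls; the image size is arbitrary, and a real application would import camera or file data into the input image instead of leaving it blank.

<syntaxhighlight lang="cpp">
#include <VX/vx.h>
#include <VX/vxu.h>
#include <iostream>

int main()
{
    // Every OpenVX object lives inside a context.
    vx_context context = vxCreateContext();
    if (vxGetStatus((vx_reference)context) != VX_SUCCESS) {
        std::cerr << "Failed to create OpenVX context" << std::endl;
        return 1;
    }

    // Arbitrary 640x480 grayscale input and output images.
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    // Immediate-mode (vxu) call: run a 3x3 Gaussian blur right away.
    // Graph mode (vxCreateGraph/vxProcessGraph) is preferred for
    // pipelines that run repeatedly.
    vx_status status = vxuGaussian3x3(context, input, output);
    std::cout << "Gaussian3x3 status: " << status << std::endl;

    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseContext(&context);
    return 0;
}
</syntaxhighlight>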