Xavier/JetPack 5.0.2/Getting Started/Components
<noinclude>
{{Xavier/Head}}
</noinclude>

== What is in JetPack 4.0? ==

{| class="wikitable" style="margin-right: 22em;"
|-
! Package
! Version
! Description
|-
| Linux for Tegra (L4T)
| 31.0.1
| Operating system. Includes the toolchain, U-Boot, the kernel, and the filesystem (the reference filesystem is now derived from Ubuntu 18.04), along with the NVIDIA Tegra user-space drivers and sample applications.
|-
| TensorRT
| 5.0 RC
| A platform for high-performance deep learning inference. It includes a deep learning inference optimizer and a runtime that delivers low latency and high throughput for deep learning inference applications.
|-
| cuDNN
| 7.3
| The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations of standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
|-
| CUDA
| 10
| CUDA 10 is a software development platform for building GPU-accelerated applications.
|}

<noinclude>
{{Xavier/Foot|<Replace with "previous" page>|<Replace with "next" page>}}
</noinclude>
Revision as of 17:43, 20 September 2018
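A quick way to confirm which of these components are present on the board is to query them from a terminal on the target. The sketch below is one way to do that; the paths and package names are the usual L4T/JetPack defaults (an assumption), so they may differ between releases or custom installs:

```shell
# Minimal sketch: report the versions of the main JetPack components, if present.
# Paths and package names below are the usual L4T defaults (assumption) and
# may vary between JetPack releases.

REPORT=""

# L4T (BSP) release string lives in /etc/nv_tegra_release on L4T systems
if [ -f /etc/nv_tegra_release ]; then
    REPORT="${REPORT}L4T:      $(head -n 1 /etc/nv_tegra_release)
"
else
    REPORT="${REPORT}L4T:      release file not found
"
fi

# CUDA toolkit version (nvcc ships with the toolkit)
if command -v nvcc >/dev/null 2>&1; then
    REPORT="${REPORT}CUDA:     $(nvcc --version | grep release)
"
else
    REPORT="${REPORT}CUDA:     nvcc not on PATH
"
fi

# cuDNN version is encoded as macros in the public header
if [ -f /usr/include/cudnn.h ]; then
    REPORT="${REPORT}cuDNN:    $(grep -m1 'CUDNN_MAJOR' /usr/include/cudnn.h)
"
else
    REPORT="${REPORT}cuDNN:    header not found
"
fi

# TensorRT version via the Debian package database
REPORT="${REPORT}TensorRT: $(dpkg -l 2>/dev/null | grep -i tensorrt || echo 'package not found')
"

printf '%s' "$REPORT"
```

Each check degrades gracefully, so the same script can run on the host or the target and simply report which components it could not find.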