NVIDIA Jetson AGX Thor: Introduction & Getting Started
The NVIDIA Jetson AGX Thor documentation from RidgeRun is currently under development.
Introduction
The NVIDIA® Jetson AGX Thor™ is the newest and most powerful member of the Jetson family, combining a 14-core Arm® Neoverse V3AE CPU, a Blackwell Architecture GPU, and 128 GB LPDDR5X memory. It represents a generational leap over Jetson Orin in both performance and features.
Thor launches with JetPack 7.0, a unified software stack integrating CUDA®, cuDNN, TensorRT™, Holoscan SDK, and more.
This page is the introduction to the Jetson Thor wiki. Here you will learn how the Jetson ecosystem is structured and how this wiki is organized to guide you from first evaluation to advanced development.
The Jetson Ecosystem
The NVIDIA Jetson platforms — including Thor — are built around three main components:
- System on Module (SoM):
The SoM is the heart of Jetson: it contains the CPU, GPU, memory, accelerators (such as the PVA 3.0), and high-speed I/O. It is designed to be production-ready and embedded directly into final products. For Jetson Thor, this SoM is the Jetson T5000 module. [Learn more]
- Carrier Board:
The carrier board exposes the SoM's capabilities by providing physical connectors, power delivery, and interfaces. The NVIDIA Developer Kit serves as the reference design and is useful for evaluating the SoM during the prototyping phase of your project, while ecosystem partners (Auvidea, Connect Tech, etc.) offer carrier boards tailored to specific requirements. [Learn more]
- Board Support Package (BSP) & Software:
This includes device drivers, kernel modules, and JetPack 7.0 — the unified software environment providing access to Thor's hardware and NVIDIA AI SDKs. TODO Learn more
Together, these three elements — SoM + Carrier + BSP/Software — form the Jetson Thor platform.
Why Jetson AGX Thor?
Compared to its predecessor, Jetson Orin, Thor provides:
- More CPU power (14 Neoverse V3AE vs 12 Cortex-A78AE)
- A stronger GPU (2560 CUDA cores, 96 Tensor cores, Transformer Engine)
- Twice the memory (128 GB LPDDR5X vs 64 GB LPDDR5)
- Higher bandwidth (273 GB/s vs 204 GB/s)
- Much higher AI throughput (up to 2070 FP4 TFLOPS, ~7x more than Orin's 275 TOPS)
- New features like Holoscan Sensor Bridge and Multi-Instance GPU (MIG)
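Several of the figures above can be verified directly on a running board. The sketch below uses standard Linux and NVIDIA L4T paths; verify them against your JetPack 7 image.

```shell
# Print the device-tree model string (identifies the Jetson platform;
# only present on the board itself).
cat /proc/device-tree/model 2>/dev/null
# Count online CPU cores (expect 14 on AGX Thor).
nproc
# Show total memory (expect the 128 GB class on Thor).
free -h | head -2
```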
This makes Thor ideal for complex computational and AI tasks like humanoid robots, autonomous machines, healthcare AI, generative AI at the edge, and real-time multimodal sensor fusion.
This section serves as a map for the rest of the wiki. Each entry below explains what the corresponding section covers and what you will find inside.
- SoM Overview - Highlights the main characteristics of the T5000 SoM.
- Carrier Boards - Describes Jetson Thor carrier boards.
- Getting Started - Install JetPack, flash with the GUI installer, and explore included components.
- Components - What's inside JetPack (CUDA, TensorRT, DeepStream, etc.) and example guides.
- Compiling Source Code - Build kernel, device tree (DTB), and bootloader from source.
- Flashing the Board from Cmdline - Flash the board manually via the command line.
- Performance Tuning - Evaluate performance, tune power, and maximize efficiency.
- Getting into the Board - Use the serial console, SSH, and Ubuntu login.
- Installing Packages - Install additional packages and development libraries.
- IMX477+J20 Driver - Example sensor and driver integration.
- GPU Overview - Feature summary and architecture notes.
- GPU Benchmarks - Performance results and Orin comparisons.
- Capture and Display - Run capture-to-display pipelines.
- H264 Pipelines - Encode and stream with H.264.
- H265 Pipelines - Encode and stream with H.265.
- Holoscan Overview - SDK concepts and setup.
- Holoscan Examples - Example applications on Thor.
- PVA Overview - Architecture and capabilities.
- PVA Usage - Practical acceleration examples.
- GstPTZR - Pan/Tilt/Zoom/Rotate plugin.
- GstCUDA - GPU-accelerated processing framework.
- GstColorTransfer - Color transformation utilities.
- GstWebRTC - Peer-to-peer WebRTC streaming.
- GstRtspSink - RTSP output element.
- GstInterpipe - Cross-pipeline linking.
- GstInference - Deep-learning inference integration.
- GstVPI - NVIDIA VPI wrappers.
- GStreamer Daemon - Headless pipeline control service.
- GstShark - Profiling and performance analysis tools.
- Panoramic Stitching - Multi-camera 360° stitching reference.
- WebRTC Streaming - Low-latency WebRTC streaming reference.
- Reference Documentation Main Page - Central entry point for specs and manuals.
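As a preview of the command-line flashing flow covered above, a minimal sketch is shown below. The board-config name `jetson-agx-thor-devkit` is an assumption based on the naming convention of earlier devkits; check the Flashing the Board from Cmdline page for the exact target supported by your JetPack 7 release.

```shell
# With the devkit in recovery mode and connected over USB, run from the
# Linux_for_Tegra/ directory of the extracted BSP.
# NOTE: "jetson-agx-thor-devkit" is an assumed config name -- verify it
# against the board configs shipped with your L4T release before flashing.
sudo ./flash.sh jetson-agx-thor-devkit internal
```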
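For first access to a freshly flashed board, the usual Jetson entry points apply. The username, address, and serial device below are placeholders, not Thor-specific facts; the Getting into the Board section has the verified details.

```shell
# Serial console over the devkit's USB debug port (device node may differ;
# /dev/ttyACM0 is typical for recent AGX devkits).
sudo screen /dev/ttyACM0 115200

# Or SSH once the board is on the network (replace user and address with
# the account created during initial Ubuntu setup).
ssh <user>@<board-ip>
```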
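The Performance Tuning section builds on NVIDIA's standard Jetson power tools; a typical session looks like the sketch below. Power-mode numbers are platform-specific, so query the available modes first rather than assuming mode 0 is the maximum-performance mode on Thor.

```shell
# Query the current power mode.
sudo nvpmodel -q
# Select a mode by index (assumption: 0 is the maximum-performance mode).
sudo nvpmodel -m 0
# Pin clocks at their maximum for repeatable benchmarking.
sudo jetson_clocks
# Live CPU/GPU/memory utilization from NVIDIA's monitoring tool.
sudo tegrastats
```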
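As a taste of the H.264/H.265 pipeline sections, here is a minimal encode-to-file sketch. The NVIDIA element names (`nvvidconv`, `nvv4l2h264enc`) follow earlier JetPack releases and may have changed in JetPack 7; see the H264 Pipelines page for Thor-verified pipelines.

```shell
# Encode 300 test-pattern frames to an MP4 file using the hardware encoder.
# -e sends EOS on Ctrl+C so the muxer finalizes the file cleanly.
gst-launch-1.0 -e videotestsrc num-buffers=300 ! \
  'video/x-raw,width=1280,height=720,framerate=30/1' ! \
  nvvidconv ! nvv4l2h264enc ! h264parse ! qtmux ! \
  filesink location=test_h264.mp4
```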