Using NVIDIA Jetson Modules
The RidgeRun Data Plane Kit documentation is currently under development.
Support for NVIDIA Jetson
NVIDIA Jetson modules and developer kits do not have native support for DPDK because of their NIC hardware. On the Jetson Xavier modules, the Ethernet interface is based on the Marvell 88E1512PB2, while the Jetson Orin modules use a Realtek controller. Neither supports kernel bypass. More information can be found on the DPDK Compatible Hardware site.
For NVIDIA Jetson modules, it is recommended to install an external PCIe or M.2-based network card compatible with DPDK. An NVIDIA ConnectX card can be used to boost the Jetson's networking, providing both DPDK and RDMA support.
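Once an add-on NIC is installed, a quick sanity check is to confirm that the card enumerates on the PCIe bus. A minimal sketch (the exact device string printed depends on the card):

# List PCI devices and filter for Ethernet controllers
lspci | grep -i ethernet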
RidgeRun Use Case
At RidgeRun, we are currently testing the Intel I210-T1 at 1 Gbps for DPDK on both x86 and NVIDIA Jetson systems.
Stay tuned for updates!
PCIe network card (Intel I210-T1) on Jetson AGX Orin
NIC Usage
1. Connect the NIC: Attach the NIC to the Jetson AGX Orin using the external PCIe port and power on the device.
2. Check Ethernet Interfaces: After booting up, run the following command to check the available network interfaces:
nvidia@ubuntu:~$ ip addr
And you should get a result similar to this:
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 98:b7:85:1f:ca:d7 brd ff:ff:ff:ff:ff:ff
    altname enP5p1s0
    inet 192.168.100.112/24 brd 192.168.100.255 scope global dynamic noprefixroute eth0
       valid_lft 86265sec preferred_lft 86265sec
    inet6 fe80::a8ac:52de:acc:c9b9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
...
6: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1466 qdisc mq state DOWN group default qlen 1000
    link/ether 48:b0:2d:78:ba:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global eth1
       valid_lft forever preferred_lft forever
...
Here, the eth0 and eth1 interfaces should appear.
2.1 Troubleshoot Connectivity Issues: If there’s no connectivity (no ping response or SSH access to the board):
- Open Settings > Network.
- Check for a duplicate entry of the eth1 interface. This duplicate entry might be manually assigning the IP address 192.168.100.1, leading to a conflict.
- Remove the Duplicate Interface:
- In Settings > Network, locate eth1.
- Click the gear icon next to eth1 to access configuration options.
- Select Remove Connection Profile to delete the duplicate entry.
3. Verify Connectivity: Once the duplicate profile is removed, the conflict should be resolved, restoring normal connectivity to the board.
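If you prefer the command line, the same cleanup can be done on the board with NetworkManager's nmcli tool. This is a sketch; the profile name below is hypothetical and must be taken from your own listing:

# List connection profiles and look for a duplicate eth1 entry
nmcli connection show
# Delete the duplicate profile by name (example name; use the one from the listing)
sudo nmcli connection delete "Wired connection 2"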
DPDK Installation
Follow the instructions in RidgeRun Data Plane Kit/Setup Data Plane Kit/From source (https://developer.ridgerun.com/wiki/index.php/RidgeRun_Data_Plane_Kit/Setup_Data_Plane_Kit/From_source) to install DPDK on your system.
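After the installation finishes, you can verify that DPDK is visible to the build system. A minimal check, assuming DPDK was installed system-wide and its pkg-config files are in the default search path:

# Should print the installed DPDK version, for example 23.11.2
pkg-config --modversion libdpdk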
Adding the uio_pci_generic Driver
1. Download Linux for Tegra
Follow the instructions from this wiki to install Linux for Tegra using SDK Manager. In this case, JetPack 6.0 will be used.
2. Download NVIDIA's Driver Package (BSP) sources
Download the source files from this link.
The file you need to download is under Downloads and Links -> SOURCES -> Driver Package (BSP) Sources. It will download a file named public_sources.tbz2.
Use the following commands to extract the files under the Linux for Tegra directory:
# Go into the target HW image folder created in step 1.
cd <target HW image folder>/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
cp ~/Downloads/public_sources.tbz2 .
tar -xjf public_sources.tbz2 Linux_for_Tegra/source/kernel_src.tbz2 --strip-components 2
tar -xjf public_sources.tbz2 Linux_for_Tegra/source/kernel_oot_modules_src.tbz2 --strip-components 2
tar -xjf public_sources.tbz2 Linux_for_Tegra/source/nvidia_kernel_display_driver_source.tbz2 --strip-components 2
mkdir sources
tar -xjf kernel_src.tbz2 -C sources
tar -xjf kernel_oot_modules_src.tbz2 -C sources
tar -xjf nvidia_kernel_display_driver_source.tbz2 -C sources
After the steps above are done, your directory should look like the following:
~/nvidia/nvidia_sdk/JetPack_6.0_DP_Linux_DP_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/sources$ tree -L 1
├── generic_rt_build.sh
├── hardware
├── hwpm
├── kernel
├── kernel_oot_modules_src.tbz2
├── kernel_src_build_env.sh
├── kernel_src.tbz2
├── Makefile
├── nvbuild.sh
├── nvcommon_build.sh
├── nvdisplay
├── nvethernetrm
├── nvgpu
├── nvidia_kernel_display_driver_source.tbz2
├── nvidia-oot
├── out
└── public_sources.tbz2
3. Set the Development Environment
3.1. Install dependencies on the host PC. Make sure the following dependencies are installed on your system:
- wget
- lbzip2
- build-essential
- bc
- zip
- libgmp-dev
- libmpfr-dev
- libmpc-dev
- vim-common # For xxd
On Debian-based systems you can run the following:
sudo apt install wget lbzip2 build-essential bc zip libgmp-dev libmpfr-dev libmpc-dev vim-common
3.2. Get the Toolchain:
If you haven't already, download the toolchain, which is the set of tools required to cross-compile the Linux kernel. You can download the Bootlin Toolchain gcc 11.3 from the Jetson Linux archive.
The file is under Downloads and Links -> TOOLS -> Bootlin Toolchain gcc 11.3. It will download a file named aarch64--glibc--stable-2022.08-1.tar.bz2.
Extract the files with these commands:
cd $HOME
mkdir -p $HOME/l4t-gcc
cd $HOME/l4t-gcc
cp ~/Downloads/aarch64--glibc--stable-2022.08-1.tar.bz2 .
tar -xjf aarch64--glibc--stable-2022.08-1.tar.bz2
3.3. Export the Environment variables:
Open a terminal and run the following commands to export the environment variables that will be used in the next steps. Keep in mind that if the sources are in a different location than the one shown below, you need to define DEVDIR accordingly, always pointing to the Linux_for_Tegra directory.
export DEVDIR=~/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-
export INSTALL_MOD_PATH=$DEVDIR/rootfs/
export KERNEL_HEADERS=$DEVDIR/sources/kernel/kernel-jammy-src
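As an optional sanity check, you can confirm that the variables point to valid locations before building:

# The cross-compiler should print its version
${CROSS_COMPILE}gcc --version
# The kernel sources should be listed
ls $KERNEL_HEADERS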
4. Compile the Kernel and Modules
4.1. Go to the kernel directory:
cd $DEVDIR/sources/kernel/kernel-jammy-src
4.2. Open the kernel configuration menu. This loads the default configuration and lets you enable any additional kernel options you wish:
make menuconfig
4.3. Go to the sources directory:
cd $DEVDIR/sources
4.4. Add the uio_pci_generic driver to the kernel configuration. In the file $DEVDIR/sources/kernel/kernel-jammy-src/arch/arm64/configs/defconfig, add the following line:
CONFIG_UIO_PCI_GENERIC=m
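If you prefer doing this from the terminal, the same line can be appended directly to the defconfig file:

echo "CONFIG_UIO_PCI_GENERIC=m" >> $DEVDIR/sources/kernel/kernel-jammy-src/arch/arm64/configs/defconfig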
4.5. Compile the kernel:
make -C kernel
4.6. Install the kernel:
sudo -E make install -C kernel
4.7. Compile the Out-of-Tree modules:
make modules
4.8. Install the modules:
sudo -E make modules_install
5. Install the new kernel image and modules
Copy the kernel image and the modules to the board; you can use the scp and rsync commands for this:
# This command is used instead of scp so the symbolic links are copied and not the files they point to
rsync -Wac --progress ../rootfs/lib/modules/5.15.136-tegra/* <board username>@<board IP>:/tmp/5.15.136-tegra/
scp $DEVDIR/rootfs/boot/Image <board username>@<board IP>:/tmp
And on the board:
cd /lib/modules
# Create a backup of the modules
sudo mv 5.15.136-tegra 5.15.136-tegra-bu
sudo mv /tmp/5.15.136-tegra/ ./5.15.136-tegra-uio-supported
sudo rm -rf 5.15.136-tegra-uio-supported/source
sudo rm -rf 5.15.136-tegra-uio-supported/build
sudo ln -s /usr/src/linux-headers-5.15.136-tegra-ubuntu22.04_aarch64/3rdparty/canonical/linux-jammy/kernel-source 5.15.136-tegra-uio-supported/build
sudo ln -s 5.15.136-tegra-uio-supported 5.15.136-tegra
cd /boot
# Create a backup of the kernel image
sudo mv Image Image-bu
sudo mv /tmp/Image Image-uio-supported
sudo ln -s Image-uio-supported Image
6. Reboot the board
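After the reboot, it is worth checking that the board came up with the new kernel and that the module is visible to modprobe. A minimal check:

# Should print 5.15.136-tegra
uname -r
# Should print information about the new module
modinfo uio_pci_generic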
7. Load the driver module
Use this command (on the board) to load the uio_pci_generic driver:
sudo modprobe uio_pci_generic
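You can confirm the module was loaded correctly; it should list uio_pci_generic along with its uio dependency:

lsmod | grep uio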
Running a Sample Application
1. Load the Driver and Bind the NIC
The steps below assume DPDK is installed (see Using NVIDIA Jetson Modules/DPDK Installation) and that the uio_pci_generic driver is available (see Using NVIDIA Jetson Modules/Adding the uio_pci_generic Driver).
1.1. Run the following command to check the NIC information:
dpdk-devbind.py -s
It will give a similar output to this:
Network devices using kernel driver
===================================
0001:01:00.0 'RTL8822CE 802.11ac PCIe Wireless Network Adapter c822' if=wlan0 drv=rtl88x2ce unused=rtl8822ce,vfio-pci,uio_pci_generic
0005:01:00.0 'I210 Gigabit Network Connection 1533' if=eth0 drv=igb unused=vfio-pci,uio_pci_generic

No 'Baseband' devices detected
==============================

No 'Crypto' devices detected
============================

No 'DMA' devices detected
=========================

No 'Eventdev' devices detected
==============================

No 'Mempool' devices detected
=============================

No 'Compress' devices detected
==============================

No 'Misc (rawdev)' devices detected
===================================

No 'Regex' devices detected
===========================

No 'ML' devices detected
========================
In this case, the target NIC is the Ethernet card, which has the interface name eth0 and the PCI address 0005:01:00.0.
1.2. Run the following commands to prepare your NIC for DPDK:
sudo ifconfig eth0 down
sudo dpdk-devbind.py --bind=uio_pci_generic 0005:01:00.0
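Running the status command again should now show the card bound to uio_pci_generic, listed under the "Network devices using DPDK-compatible driver" section instead of the kernel-driver section:

dpdk-devbind.py -s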
1.3. Run the following command to allocate hugepages for DPDK to use:
sudo dpdk-hugepages.py --pagesize 2M --setup 256M --node 0
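You can verify the hugepage reservation before launching the application:

# Show the current hugepage setup
dpdk-hugepages.py -s
# Alternatively, check the kernel's view
grep Huge /proc/meminfo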
2. Verify with an Example
Use the ethtool example application to verify everything is working correctly:
cd <Installation path>/dpdk-stable-23.11.2/<build dir>/examples
sudo ./dpdk-ethtool
Expected output should include the NIC being successfully detected and initialized. The sample app is a command-line tool; use the drvinfo command to see the driver information, similar to:
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_e1000_igb (8086:1533) device: 0005:01:00.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Number of NICs: 1
Init port 0..
EthApp> drvinfo
Port 0 driver: net_e1000_igb (ver: DPDK 23.11.2)
firmware-version: 3.16, 0x800004ff, 1.304.0
bus-info: 0005:01:00.0
EthApp>
Closing port 0...
Done
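Once you are done testing, the NIC can be returned to the Linux kernel driver so it works as a regular network interface again. Based on the dpdk-devbind.py -s output above, where the kernel driver for this card is igb:

# Rebind the NIC to the igb kernel driver and bring the interface back up
sudo dpdk-devbind.py --bind=igb 0005:01:00.0
sudo ifconfig eth0 up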