Yocto Support for NVIDIA® Jetson™ with JetPack 6 Integration - DeepStream


NVIDIA Docker Container

NVIDIA Docker Setup

In addition to the basic Yocto layers and the meta-tegra layer, you will need the meta-virtualization layer plus the meta-oe, meta-networking, meta-filesystems, and meta-python layers from the meta-openembedded repository.

1. Download repositories to the Yocto working directory

cd $YOCTO_DIR
git clone https://git.yoctoproject.org/git/meta-virtualization
cd meta-virtualization
git checkout $BRANCH
cd $YOCTO_DIR
git clone https://git.openembedded.org/meta-openembedded
cd meta-openembedded
git checkout $BRANCH
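
These commands assume the $BRANCH variable is set to the Yocto release branch used by the rest of your build; for the JetPack 6 setup described in this wiki that would be, for example:

# Assumption: the build uses the scarthgap release branch,
# matching the meta-tegra-community checkout later in this wiki
export BRANCH=scarthgap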

2. Add layers to conf/bblayers.conf

  • Run the commands below in a terminal
cd $YOCTO_DIR/build
bitbake-layers add-layer ../meta-openembedded/meta-oe/
bitbake-layers add-layer ../meta-openembedded/meta-python/
bitbake-layers add-layer ../meta-openembedded/meta-networking/
bitbake-layers add-layer ../meta-openembedded/meta-filesystems/
bitbake-layers add-layer ../meta-virtualization/
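
You can confirm that the layers were registered by listing the configured layers; each entry should appear with its path and priority:

# Show the layers currently listed in conf/bblayers.conf
bitbake-layers show-layers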

3. Add Docker packages and virtualization compatibility

In your build/conf/local.conf file, add the following lines:

#Base packages
IMAGE_INSTALL:append = " cuda-samples tensorrt-samples cudnn-samples gstreamer1.0-plugins-tegra"

#Support packages for Docker
IMAGE_INSTALL:append = " nvidia-docker libnvidia-container"

DISTRO_FEATURES:append = " ldconfig virtualization seccomp"
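
After building and booting the updated image, you can verify on the target that the Docker daemon is running and that the NVIDIA container runtime was registered (the runtime name "nvidia" should appear in the Runtimes line):

# Check that the nvidia runtime is registered with Docker
docker info | grep -i runtime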

DeepStream Setup

NVIDIA provides several containers on the NGC platform. On Jetson systems, DeepStream support is available through containers run with nvidia-docker. More information about the DeepStream image for L4T and Jetson devices can be found on the DeepStream-l4t container page on NGC.

The DeepStream image requires:

  • Jetson device running L4T r36.3.0
  • At least JetPack 6.0 (scarthgap branch of the meta-tegra repository)

Before you continue, you need to follow the NVIDIA Docker Setup section of this wiki if you haven't already.

1. Log in to the Jetson board and download the Docker image

docker pull nvcr.io/nvidia/deepstream-l4t:7.0-samples-multiarch

2. Allow external applications to connect to the host's X display

xhost +

3. Run the Docker container with the NVIDIA runtime (use the desired container tag in the command below):

docker run -it --rm --net=host --runtime nvidia -w /opt/nvidia/deepstream/deepstream-7.0 nvcr.io/nvidia/deepstream-l4t:7.0-samples-multiarch

Or, if you get an error about display support:

docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.0 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:7.0-samples-multiarch
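
Once inside the container, a quick sanity check is to run the reference application against one of the sample configurations bundled in the samples-multiarch image (any of the configuration files listed later in this page should work; the path is relative to the container's working directory):

# Inside the container: run the reference app with a bundled sample config
deepstream-app -c samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt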

Yocto Recipes

The Yocto recipes related to DeepStream are in the meta-tegra-community repository.

1. Download the repository to the Yocto working directory

cd $YOCTO_DIR
git clone https://github.com/OE4T/meta-tegra-community.git
cd meta-tegra-community
git checkout scarthgap

2. Add layer to conf/bblayers.conf

cd $YOCTO_DIR/build
bitbake-layers add-layer ../meta-tegra-community/

Recipes Description

meta-tegra-community includes three recipes for DeepStream support: deepstream-7.0, deepstream-7.0-pyds, and deepstream-7.0-tests.

  • deepstream-7.0: installs the DeepStream SDK prebuilt libraries and GStreamer plugins
  • deepstream-7.0-pyds: installs the Python bindings
  • deepstream-7.0-tests: installs the C/C++ and Python sample applications for the DeepStream SDK

DeepStream Yocto Setup

To include DeepStream in your build, follow these steps:

1. Add the packages that you require to your image in conf/local.conf

#Base package
IMAGE_INSTALL:append = " deepstream-7.0"

#Optional Python bindings
IMAGE_INSTALL:append = " deepstream-7.0-pyds"

#Optional sample applications
IMAGE_INSTALL:append = " deepstream-7.0-tests"
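
2. Rebuild your image so that the new packages are included. The image target depends on your setup; assuming, as an example, the core-image-base target:

cd $YOCTO_DIR/build
# Rebuild the image with the DeepStream packages added above
bitbake core-image-base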

DeepStream C/C++ examples

If you are running the samples over SSH, you first need to run "export DISPLAY=:0.0"

  • deepstream-app: the DeepStream reference application. It uses GStreamer to accept input from multiple sources and can use a configuration file to enable/disable components and change their properties.

You can run it with the following commands:

deepstream-app -c /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/<CONFIG_FILE>

where <CONFIG_FILE> must be replaced by one of the configuration files listed below (a concrete example follows the list):

  • source1_csi_dec_infer_resnet_int8.txt
  • source1_usb_dec_infer_resnet_int8.txt
  • source2_1080p_dec_infer-resnet_demux_int8.txt
  • source2_csi_usb_dec_infer_resnet_int8.txt
  • source2_dewarper_test.txt
  • source30_1080p_dec_infer-resnet_tiled_display_int8.txt
  • source30_1080p_dec_infer-resnet_tiled_display_int8.yml
  • source30_1080p_dec_preprocess_infer-resnet_tiled_display_int8.txt
  • source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
  • source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.yml
  • source4_1080p_dec_preprocess_infer-resnet_preprocess_sgie_tiled_display_int8.txt
  • source6_csi_dec_infer_resnet_int8.txt
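
For example, to run the 30-stream tiled display configuration from the list above:

# Run the reference app with the 30-source tiled display configuration
deepstream-app -c /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt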


You will see objects being detected in multiple sources, depending on the configuration file. You can select one source by pressing z on the console where the app is running, followed by the row index [0-9] and the column index [0-9] of the source. To restore the original view, press z again.

  • deepstream-test1: a simple example that uses DeepStream elements to detect cars, persons, and bikes in a given H.264 stream. The example uses the following pipeline: filesrc → decode → nvstreammux → nvinfer (primary detector) → nvdsosd → renderer.

You can run it with the following commands:

# move to the deepstream-test1 source directory, the test needs the configuration file 
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream-test1

# run the deepstream detection over the sample_720p.h264 file, but you can use any
# H.264 stream 
deepstream-test1-app /opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.h264


Result of running deepstream-test1-app with the sample_720p.h264 file (cars being detected by the DeepStream application)


  • deepstream-test2: a simple example that uses DeepStream elements on a given H.264 stream to detect cars, persons, and bikes, track each car with an ID number, and classify the cars by brand and color. The example uses the following pipeline: filesrc → decode → nvstreammux → nvinfer (primary detector) → nvtracker → nvinfer (secondary classifier) → nvdsosd → renderer.

You can run it with the following commands:

# move to the deepstream-test2 source directory, the test needs the configuration file 
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream-test2

# run the deepstream detection over the sample_720p.h264 file, but you can use any
# H.264 stream 
deepstream-test2-app /opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.h264


Result of running deepstream-test2-app with the sample_720p.h264 file (a street with cars and people used to test DeepStream detection)


  • deepstream-test3: this example accepts one or more H.264/H.265 video streams as input. It creates a source bin for each input and connects the bins to an instance of the "nvstreammux" element, which forms the batch of frames. The batch of frames is fed to "nvinfer" for batched inferencing. The batched buffer is composited into a 2D tile array using "nvmultistreamtiler". The rest of the pipeline is similar to the deepstream-test1 sample. The inputs can be files or RTSP streams; the videos are shown side by side while vehicles and persons are detected.

You can run it with the following commands:

# move to the deepstream-test3 source directory, the test needs the configuration file 
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream-test3

# run the deepstream example with 2 mp4 files
deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_1080p_h264.mp4 file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.mp4 


# or run the example with one mp4 file and one RTSP stream
deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_1080p_h264.mp4 rtsp://192.168.1.4:7000/stream


Result of running deepstream-test3-app with sample_1080p_h264.mp4 twice (a duplicated scene of a street with cars)


DeepStream Python examples

If you are running the samples over SSH, you first need to run "export DISPLAY=:0.0"

  • deepstream_test_1: a simple example that uses DeepStream elements to detect cars, persons, and bikes in a given H.264 stream. The example uses the following pipeline: filesrc → decode → nvstreammux → nvinfer (primary detector) → nvdsosd → renderer.

You can run it with the following commands:

# move to the deepstream-test1 source directory, the test needs the configuration file 
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1

# run the deepstream detection over the sample_720p.h264 file, but you can use any
# H.264 stream 
python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.h264


Result of running deepstream_test_1.py with the sample_720p.h264 file (a street with cars in the output of the Python DeepStream application)


  • deepstream_test_2: a simple example that uses DeepStream elements on a given H.264 stream to detect cars, persons, and bikes, track each car with an ID number, and classify the cars by brand and color. The example uses the following pipeline: filesrc → decode → nvstreammux → nvinfer (primary detector) → nvtracker → nvinfer (secondary classifier) → nvdsosd → renderer.

You can run it with the following commands:

# move to the deepstream-test2 source directory, the test needs the configuration file 
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test2

# run the deepstream detection over the sample_720p.h264 file, but you can use any
# H.264 stream 
python3 deepstream_test_2.py /opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.h264


Result of running deepstream_test_2.py with the sample_720p.h264 file (a street with cars and people being detected by DeepStream)


  • deepstream_test_3: this sample accepts one or more H.264/H.265 video streams as input. It creates a source bin for each input and connects the bins to an instance of the "nvstreammux" element, which forms the batch of frames. The batch of frames is fed to "nvinfer" for batched inferencing. The batched buffer is composited into a 2D tile array using "nvmultistreamtiler". The rest of the pipeline is similar to the deepstream-test1 sample. The inputs can be files or RTSP streams; the videos are shown side by side while vehicles and persons are detected.

You can run it with the following commands:

# move to the deepstream-test3 source directory, the test needs the configuration file 
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test3

# run the deepstream example with 2 mp4 files
python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_1080p_h264.mp4 file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.mp4

# or run the example with one mp4 file and one RTSP stream
python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_1080p_h264.mp4 rtsp://192.168.1.4:7000/stream


Result of running deepstream_test_3.py with two mp4 files (a duplicated scene of a street with cars and people)


