Yocto Support for NVIDIA Jetson Platforms - DeepStream
Complete the steps on the Adding NVIDIA Packages page before continuing with this section.
NVIDIA Docker Container
NVIDIA Docker Setup
Use the same Yocto branch defined in the Setting up Yocto section.
In addition to the basic Yocto and the meta-tegra layers, you will need the meta-virtualization layer and the meta-oe, meta-networking, meta-filesystems, and meta-python layers from the meta-openembedded repository.
1. Download repositories to the Yocto working directory
cd $YOCTO_DIR
git clone https://git.yoctoproject.org/git/meta-virtualization
cd meta-virtualization
git checkout $BRANCH

cd $YOCTO_DIR
git clone https://git.openembedded.org/meta-openembedded
cd meta-openembedded
git checkout $BRANCH
2. Add layers to conf/bblayers.conf
- Run the following commands in the terminal:
cd $YOCTO_DIR/build
bitbake-layers add-layer ../meta-openembedded/meta-oe/
bitbake-layers add-layer ../meta-openembedded/meta-python/
bitbake-layers add-layer ../meta-openembedded/meta-networking/
bitbake-layers add-layer ../meta-openembedded/meta-filesystems/
bitbake-layers add-layer ../meta-virtualization/
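You can optionally verify that the layers were registered by listing the layers known to the build:

cd $YOCTO_DIR/build
bitbake-layers show-layers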
3. Add the compatibility settings needed for Docker support
Include the settings for this step and the following one in $YOCTO_DIR/build/conf/local.conf:
GCCVERSION = "8.%"
4. Add Docker packages and virtualization compatibility
In your build/conf/local.conf file, add the following lines:
# Base packages
IMAGE_INSTALL_append = " cuda-samples tensorrt cudnn libvisionworks gstreamer1.0-plugins-nvvideo4linux2"
# Support packages for Docker support
IMAGE_INSTALL_append = " nvidia-docker nvidia-container-runtime cudnn-container-csv tensorrt-container-csv libvisionworks-container-csv"
DISTRO_FEATURES_append = " ldconfig virtualization"
Note: The *-container-csv recipes are needed by the nvidia-container-runtime to correctly set up the libraries when creating the Docker environment.
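Once the image is built, flashed, and booted, a quick sanity check (a minimal sketch; the exact output depends on your Docker version) is to confirm that the nvidia runtime is registered with the Docker daemon:

# "nvidia" should appear in the list of runtimes reported by the daemon
sudo docker info | grep -i runtime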
DeepStream Setup
NVIDIA has several containers available on the NGC platform. DeepStream support is available through containers using nvidia-docker on Jetson systems. More information about the DeepStream image for L4T and Jetson devices can be found in DeepStream 6.0.
The DeepStream image requires:
- A Jetson device running L4T r32.4.3
- At least JetPack 4.4 (the dunfell-l4t-r32.4.3 branch of meta-tegra)
Before you continue, you need to follow the NVIDIA Docker Setup section of this wiki if you haven't already.
JetPack 4.4 uses GStreamer 1.14 by default, so the DeepStream Docker image expects the host plugins to be built with GStreamer 1.14; the dunfell branch, however, uses GStreamer 1.16. You will need to tell Yocto to use the 1.14 recipes by adding the following line to your local.conf file:
These GStreamer 1.14 recipes are included in the contrib meta-layer that you added before.
require conf/include/gstreamer-1.14.conf
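Later, once the rebuilt image is running on the target, you can confirm that the host GStreamer is the 1.14 series (a quick check; the exact minor version may vary):

# Should report a 1.14.x version
gst-inspect-1.0 --version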
After the image has been generated with Docker and GStreamer 1.14 support, flash it and run the following commands on the target:
1. Log in to the Jetson board and download the Docker image
docker pull nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-base
Note: If you are having problems pulling the container, set the date and time correctly using the sudo date command in the terminal.
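For example (the date below is only an illustration; use the current date and time, and note that the accepted format can vary with the date implementation in your image):

# Manually set the system clock before retrying the pull
sudo date -s "2020-08-20 12:00:00"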
2. Allow external applications to connect to the host's X display
xhost +
3. Run the Docker container using nvidia-docker (use the desired container tag in the command line below):
sudo docker run -it --rm --net=host --runtime nvidia -w /opt/nvidia/deepstream/deepstream-5.0 nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-samples
Or if you get an error about the display support:
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-samples
Connect a monitor to the board, even if you are not using the graphical interface. A physical display is needed to show the output of the DeepStream apps.
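Once inside the container you can try one of the reference applications. For example, a sketch for a Jetson Nano, assuming the samples container started above (the working directory is already /opt/nvidia/deepstream/deepstream-5.0):

# Run the reference application with one of the bundled configuration files
deepstream-app -c samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt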
Yocto Recipes
These recipes are NOT included in the main meta-tegra repository yet.
Recipe Descriptions
meta-tegra includes two recipes for DeepStream support: deepstream-5.0 and deepstream-python-apps.
- deepstream-5.0: this recipe provides the NVIDIA DeepStream SDK support, distributed across several packages:
  - deepstream-5.0: installs the DeepStream SDK prebuilt libraries and GStreamer plugins.
  - deepstream-5.0-samples: includes the NVIDIA DeepStream SDK prebuilt sample application binaries, along with the samples' models and configuration files.
  - deepstream-5.0-python: installs the Python bindings.
  - deepstream-5.0-sources: installs the source code included in the DeepStream SDK at /opt/nvidia/deepstream.
- deepstream-python-apps: this recipe installs the Python sample applications for the DeepStream SDK.
DeepStream Setup
To include DeepStream in your build, follow these steps:
1. Add meta-tegra at the top of the layer list in conf/bblayers.conf. The order is important because of Python: the Python bindings are built for Python 3.6.9 (the version installed on Ubuntu 18.04), so meta-tegra must provide this Python version to keep things compatible, and placing the layer at the top of bblayers.conf gives priority to the Python classes in the meta-tegra layer.
BBLAYERS ?= " \
  /home/${USER}/yocto-tegra/meta-tegra \
  /home/${USER}/yocto-tegra/meta-tegra/contrib \
  /home/${USER}/yocto-tegra/poky-dunfell/meta \
  /home/${USER}/yocto-tegra/poky-dunfell/meta-poky \
  /home/${USER}/yocto-tegra/poky-dunfell/meta-yocto-bsp \
  "
2. Add the packages that you require for your image in conf/local.conf
# Base package
IMAGE_INSTALL_append = " deepstream-5.0"
# Optional sample packages
IMAGE_INSTALL_append = " deepstream-5.0-samples"
# Optional Python bindings and samples
IMAGE_INSTALL_append = " deepstream-5.0-python deepstream-python-apps"
# Optional source code
IMAGE_INSTALL_append = " deepstream-5.0-sources"
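After building and flashing the image with these packages, a quick check (a sketch, assuming the deepstream-5.0 package above was installed) is to verify that the SDK files and its GStreamer plugins are present on the target:

# Confirm the SDK was installed in its default location
ls /opt/nvidia/deepstream/deepstream-5.0
# Confirm the DeepStream GStreamer elements are registered
gst-inspect-1.0 nvinfer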
DeepStream C/C++ examples
Connect a monitor to the board, even if you are not using the graphical interface. A physical display is needed to show the output of the DeepStream apps.
If you are running the samples over SSH, you first need to run "export DISPLAY=:0.0".
- deepstream-app: the reference application of DeepStream. It uses GStreamer to accept input from multiple sources, and can use a configuration file to enable/disable components and change their properties.
You can run it with the following command:
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/<CONFIG_FILE>
where <CONFIG_FILE> must be replaced by one of the configuration files in the table below:
Device | <CONFIG_FILE> |
---|---|
TX1 | source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt |
TX2 | source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt |
XAVIER | source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt |
NANO | source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt |
You will see objects being detected in multiple sources, depending on the configuration file. You can select one source by pressing z on the console where the app is running, followed by the row index [0-9] and the column index [0-9] of the source. To restore the original view, press z again.
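For example, on a Jetson Nano the full command is:

deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt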
- deepstream-test1: a simple example that uses DeepStream elements to detect cars, persons, and bikes on a given single H.264 stream. The example uses the following pipeline: filesrc → decode → nvstreammux → nvinfer (primary detector) → nvdsosd → renderer.
You can run it with the following commands:
# Move to the deepstream-test1 source directory; the test needs the configuration file
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test1
# Run the DeepStream detection over the sample_720p.h264 file, but you can use any
# H.264 stream
deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
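The same pipeline can also be sketched by hand with gst-launch-1.0. This is only an illustration of the pipeline described above, assuming it is launched from the deepstream-test1 directory so the relative dstest1_pgie_config.txt path resolves, and using the Jetson nvegltransform/nveglglessink pair as the renderer:

# Hand-built equivalent of the deepstream-test1 pipeline (sketch)
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 ! \
  h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=dstest1_pgie_config.txt ! nvvideoconvert ! nvdsosd ! \
  nvegltransform ! nveglglessink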
- deepstream-test2: a simple example that uses DeepStream elements on a given single H.264 stream to detect cars, persons, and bikes; it tracks each car with an ID number and classifies the cars by brand and color. The example uses the following pipeline: filesrc → decode → nvstreammux → nvinfer (primary detector) → nvtracker → nvinfer (secondary classifier) → nvdsosd → renderer.
You can run it with the following commands:
# Move to the deepstream-test2 source directory; the test needs the configuration file
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test2
# Run the DeepStream detection over the sample_720p.h264 file, but you can use any
# H.264 stream
deepstream-test2-app /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
- deepstream-test3: this example accepts one or more H.264/H.265 video streams as input. It creates a source bin for each input and connects the bins to an instance of the "nvstreammux" element, which forms the batch of frames. The batch of frames is fed to "nvinfer" for batched inferencing. The batched buffer is composited into a 2D tile array using "nvmultistreamtiler". The rest of the pipeline is similar to the deepstream-test1 sample. The inputs can be files or RTSP streams; the application shows the videos side by side while detecting vehicles and persons.
You can run it with the following commands:
# Move to the deepstream-test3 source directory; the test needs the configuration file
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3
# Run the DeepStream example with 2 mp4 files
deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4 file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4
# Or run the example with one mp4 file and one RTSP stream
deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4 rtsp://192.168.1.4:7000/stream
DeepStream Python examples
Connect a monitor to the board, even if you are not using the graphical interface. A physical display is needed to show the output of the DeepStream apps.
If you are running the samples over SSH, you first need to run "export DISPLAY=:0.0".
- deepstream_test_1: a simple example that uses DeepStream elements to detect cars, persons, and bikes on a given single H.264 stream. The example uses the following pipeline: filesrc → decode → nvstreammux → nvinfer (primary detector) → nvdsosd → renderer.
You can run it with the following commands:
# Move to the deepstream-test1 source directory; the test needs the configuration file
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1
# Run the DeepStream detection over the sample_720p.h264 file, but you can use any
# H.264 stream
python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
- deepstream_test_2: a simple example that uses DeepStream elements on a given single H.264 stream to detect cars, persons, and bikes; it tracks each car with an ID number and classifies the cars by brand and color. The example uses the following pipeline: filesrc → decode → nvstreammux → nvinfer (primary detector) → nvtracker → nvinfer (secondary classifier) → nvdsosd → renderer.
You can run it with the following commands:
# Move to the deepstream-test2 source directory; the test needs the configuration file
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test2
# Run the DeepStream detection over the sample_720p.h264 file, but you can use any
# H.264 stream
python3 deepstream_test_2.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
- deepstream_test_3: this sample accepts one or more H.264/H.265 video streams as input. It creates a source bin for each input and connects the bins to an instance of the "nvstreammux" element, which forms the batch of frames. The batch of frames is fed to "nvinfer" for batched inferencing. The batched buffer is composited into a 2D tile array using "nvmultistreamtiler". The rest of the pipeline is similar to the deepstream-test1 sample. The inputs can be files or RTSP streams; the application shows the videos side by side while detecting vehicles and persons.
You can run it with the following commands:
# Move to the deepstream-test3 source directory; the test needs the configuration file
# in there and uses fixed relative locations
cd /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test3
# Run the DeepStream example with 2 mp4 files
python3 deepstream_test_3.py file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4 file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4
# Or run the example with one mp4 file and one RTSP stream
python3 deepstream_test_3.py file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4 rtsp://192.168.1.4:7000/stream