DeepStream Reference Designs - Getting Started - Evaluating the Project
Project Installation
TAO Pre-trained Models
The DeepStream inference process performed in this project relies on Car Detection and License Plate Detection models. The included models are part of the NVIDIA TAO Toolkit version 3.0. If you want more information about the DeepStream pipeline configuration and the models used, you can read the DeepStream Model example. This project provides an automation script that downloads the respective models and installs the Custom Parser required by the License Plate Recognition model. All you have to do is execute the following command from the top level of the project directory:
$ ./download_models.sh
Running the above script will create a directory called models/ at the root of the project, which should have the following structure:
├── custom_lpr_parser
│   ├── Makefile
│   ├── nvinfer_custom_lpr_parser.cpp
│   └── libnvdsinfer_custom_impl_lpr.so
├── lpd
│   ├── usa_lpd_cal.bin
│   ├── usa_lpd_label.txt
│   └── usa_pruned.etlt
├── lpr
│   └── us_lprnet_baseline18_deployable.etlt
└── trafficcamnet
    ├── resnet18_trafficcamnet_pruned.etlt
    └── trafficnet_int8.txt
The system provides model configuration files for each supported platform; the list of supported platforms can be verified in the APLVR Supported Platforms section. These configuration files reference the paths where the downloaded models are located, so it is recommended NOT to move these files. As can be seen in the generated directory structure, for each model the script downloads the calibration files (when applicable) and the files in etlt format, which are the files exported from the NVIDIA Transfer Learning Toolkit. The first time the project is executed, the engine of each model will be generated based on the parameters established in the configuration files, so depending on the platform used, this operation may take a few minutes. From the second run onward, the configuration files will reference the generated engines, so the models will simply be loaded without that delay.
The engine file of each model has a name that depends on the configuration parameters used. For example, when generating the TrafficCamNet model with INT8 calibration, a gpu-id of 0, and a batch size of 1, the file name will be:
resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
The mentioned parameters (batch-size, gpu-id, INT8 calibration, etc.) can be found in the configuration file of each model within the config_files/ directory of the project. After the first run, you can check the name of the engine generated for each network inside the models/ directory.
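For reference, the relevant part of a per-model configuration file is a DeepStream nvinfer [property] section similar to the sketch below. This is an illustrative excerpt, not the project's actual file; the real file names, paths, and values live under config_files/ and depend on your platform.
# Illustrative nvinfer excerpt; the actual per-model files are under config_files/
[property]
gpu-id=0
batch-size=1
network-mode=1                # 0=FP32, 1=INT8, 2=FP16
tlt-encoded-model=../models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
int8-calib-file=../models/trafficcamnet/trafficnet_int8.txt
# Engine created on the first run; its name encodes batch size, GPU id, and precision
model-engine-file=../models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine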
Setup Tools
This project was developed using Python's Setuptools build system. First, install the Python dependencies required by the system. The Package Installer for Python (pip) can be used for this, by executing the following command from the root of the project directory:
$ pip3 install .
Pip will use the information contained in the setup.py file to install the respective modules. Next, execute the following command to install the project package as a Python dist-package. This also installs the program's entry point on the system, so that it can be invoked as if it were a bash command, which is required by the project execution script.
$ sudo python3 setup.py install
The above command should also be run from the root of the project directory since that is where the setup.py file is located.
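To quickly verify the installation, you can check that the package and its command-line entry point are now visible on the system. The names used below are assumptions for illustration; check the console_scripts entry in setup.py for the actual names defined by the project.
# The package/entry-point name "aplvr" is an assumption; see setup.py for the real one
$ pip3 show aplvr
$ which aplvr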
Testing the Project
Important Note: Before you test the project, we strongly recommend that you check the configuration parameters located in the aplvr.yaml file, especially the config files path of the DeepStream section, from which the model configurations will be loaded, and the RabbitMQ parameters, where you can set the message broker server IP address. In the APLVR Config File section you can find more information about aplvr.yaml and the rest of the files located in the config files directory.
Web Dashboard
By default, the project includes a Web Dashboard, where you can see the logs of the generated information in real time. This functionality is configured through the actions section of the aplvr.yaml config file. For instance, through the "url" parameter you can indicate the IP address and the port where the API that hosts the web page will be launched:
actions:
  - name: "dashboard"
    url: "http://192.168.55.100:4200/dashboard"
In the example above, the web dashboard will be listening on the address 192.168.55.100 and port 4200. You can change those two values, as long as you keep the API endpoint called /dashboard. Make sure there are no firewall restrictions on the indicated IP address and port, so that the communication can take place without problems. If you want to know more about this functionality and other actions included in this reference design, you can refer to the APLVR Reference Design Wiki.
To run the web dashboard, open another terminal and navigate within the project directory to the following path:
$ cd src/aplvr/dashboard/
And finally, execute the following command:
$ python3 web_app.py
Important Note: The dashboard does not need to run on the Jetson board; you can use another computer as the dashboard host, as long as you set the correct IP address and port number to establish the communication. Also, if you don't want to use this functionality, you can simply remove it from the actions parameters of the aplvr.yaml configuration file.
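Once web_app.py is running, a quick way to check that the dashboard is reachable (and that no firewall is blocking the port) is to request the endpoint directly from the machine that will consume it. The address below is the one from the example configuration; adjust it if you changed the url parameter:
$ curl http://192.168.55.100:4200/dashboard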
RTSP Protocol
This application uses the RTSP protocol to receive the streaming data coming from the cameras of the system. For testing purposes, we provide some test videos, in the context of the APLVR application, that simulate cameras in a parking lot. The videos can be downloaded at the following address: APLVR Test Videos. Inside the drive folder there are three videos, called entrance, exit, and sectorA, which correspond to each of the parking sectors.
To broadcast the videos over the RTSP protocol, you can use the VLC media player. For example, to start a looping broadcast of the video samples on the IP address 192.168.55.100, you can run the following commands on separate terminal instances:
$ cvlc -vvv --loop entrance.mp4 --sout='#gather:rtp{sdp=rtsp://192.168.55.100:8554/videoloop}' --sout-keep --sout-all
$ cvlc -vvv --loop exit.mp4 --sout='#gather:rtp{sdp=rtsp://192.168.55.100:8555/videoloop}' --sout-keep --sout-all
$ cvlc -vvv --loop sectorA.mp4 --sout='#gather:rtp{sdp=rtsp://192.168.55.100:8556/videoloop}' --sout-keep --sout-all
Please note that each transmission uses a different port number: 8554, 8555, and 8556, since each one hosts the RTSP transmission of one video. You are free to change the IP address and the ports used, as long as you verify that the address where the transmission is made has no firewall restrictions or anything else that prevents the system from receiving the stream. Also, verify that each RTSP address matches the corresponding stream URL parameter in the aplvr.yaml configuration file:
streams:
  - id: "Entrance"
    url: "rtsp://192.168.55.100:8554/videoloop"
    triggers:
      - "entrance_exit_trigger"
  - id: "Exit"
    url: "rtsp://192.168.55.100:8555/videoloop"
    triggers:
      - "entrance_exit_trigger"
  - id: "SectorA"
    url: "rtsp://192.168.55.100:8556/videoloop"
    triggers:
      - "parking_trigger"
The section shown above is the one that comes by default in the project. In addition, check that the Entrance, Exit, and SectorA entries match the corresponding transmissions, so that the information received makes sense during the project execution.
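Before launching the application, you can optionally verify that each RTSP stream is actually being served by opening it with the same VLC tool from any machine on the network (adjust the address and port if you changed them):
$ cvlc rtsp://192.168.55.100:8554/videoloop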
Code Execution
This project provides an automation script that handles the process initialization and termination tasks required by the application. For its operation, the APLVR Reference Design requires that the RabbitMQ server is up and that the GStreamer Daemon is running. To simplify the execution of the application, the script does this work for you; in addition, it captures any interruption detected during the execution of the system in order to cleanly end the processes that were started. The script is named run_aplvr.sh and is located at the top level of the project directory, so all you have to do is run it as shown below:
$ sudo ./run_aplvr.sh
It is necessary to execute the script with privileged permissions, since the command that starts the RabbitMQ server requires them.
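For reference, the pattern implemented by run_aplvr.sh looks roughly like the sketch below: start the external services, launch the application, and clean up on exit or interruption. This is a simplified illustration and not the actual script; the real service commands and the application entry-point name may differ.
#!/bin/bash
# Simplified sketch of the start/cleanup pattern (not the actual run_aplvr.sh)
set -e

cleanup() {
    # Stop the services that were started, even if the app was interrupted
    gstd -k || true                     # stop the GStreamer Daemon
    service rabbitmq-server stop || true
}
trap cleanup EXIT INT TERM

service rabbitmq-server start           # requires root privileges
gstd                                    # start the GStreamer Daemon
aplvr                                   # hypothetical entry point installed by setup.py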
Project Installation In Docker Container
This section explains how to install the project inside a Docker container.
Important Note: All these steps need to be executed on a Jetson board.
Installation Steps
1- First, verify that you have Docker installed. You can check the Docker version installed on the Jetson with the following command:
$ docker --version
Docker version 20.10.2, build 20.10.2-0ubuntu1~18.04.2
2- Install the NVIDIA Container Runtime package. This package is necessary to enable the DeepStream libraries inside the container. You can install it with the following command:
$ sudo apt install nvidia-container-runtime
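After the installation, you can optionally confirm that the nvidia runtime is registered with Docker before starting the container:
$ sudo docker info | grep -i runtime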
3- In this step you will start a container with the project inside it. To start the container for the first time, use the following command:
$ sudo docker run -it --runtime nvidia -v /tmp/argus_socket:/tmp/argus_socket --network=host --name=AVR_Container AVR /bin/bash
In the previous command, the --name= flag gives the container the name AVR_Container, but you can use any name you want.
The docker run command only needs to be executed once. If you close the container and want to open it again, just use the following commands:
$ sudo docker start AVR_Container
$ sudo docker attach AVR_Container
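If you need to leave the container without stopping it, you can detach with the Ctrl-p Ctrl-q key sequence; to stop it completely, use the standard Docker command (the container name matches the one chosen with --name=):
$ sudo docker stop AVR_Container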
4- Finally, you just need to correctly configure all the parameters of the aplvr.yaml file, as indicated in the Testing the Project section. We also recommend using the 127.0.0.1 IP address in the RabbitMQ url of the aplvr.yaml file. Then execute the project with the following command:
$ ./run_aplvr.sh InContainer
For more information about this command, please refer to the Code Execution section.