NVIDIA Jetson Xavier - How to access and run the NVIDIA Deep Learning Accelerator
On the Xavier, the NVDLA is accessed through TensorRT, as described here. NVIDIA also provides a virtual platform to simulate the behavior of a system with a deep learning accelerator.
Building Simulator from Scratch
Since the NVDLA is an open architecture, NVIDIA provides instructions in its repository for building the system from the ground up. The resulting virtual simulator already includes the prebuilt kernel and user-mode modules; these can also be cross-compiled using the instructions here.
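For reference, fetching the virtual-platform sources starts roughly as below (the nvdla/vp repository pulls its components in as git submodules); the dependency, SystemC, and CMake setup is covered by the linked build instructions:

```
git clone https://github.com/nvdla/vp.git
cd vp
git submodule update --init --recursive
```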
Running the Simulator from Docker
On a computer with Docker run the following commands:
#Pull the container image
docker pull nvdla/vp
#The home directories will be shared between the container and the host
docker run -it -v /home:/home nvdla/vp
cd /usr/local/nvdla
#This runs the actual simulator inside the Docker container
aarch64_toplevel -c aarch64_nvdla.lua
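The simulator boots into a Linux guest. A typical first session looks roughly like the sketch below; on the stock nvdla/vp image the default credentials are reportedly root / nvdla, and the module names can differ between nvdla/sw releases, so verify both against your image:

```
# Inside the simulated guest (default login on the stock image: root / nvdla):
mount -t 9p -o trans=virtio r /mnt    # share host files into the guest over 9p
cd /mnt
insmod drm.ko                         # DRM dependency of the NVDLA driver
insmod opendla.ko                     # NVDLA kernel module (name may differ per sw release)
```

With the modules loaded, the compiler and runtime tools from the host-shared directory can be run inside the guest.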
Running a Model Inside the Simulator
Compiling a model
The NVDLA SW repository includes, under prebuilt/linux, an executable that compiles Caffe models into NVDLA loadables.
A sample Prototxt and Caffe model can be downloaded from here.
./nvdla_compiler --prototxt <prototxt_file> --caffemodel <caffe_model_file>
Running the Sample
Run the sample with the following command:
./nvdla_runtime --loadable <compiled_model> --image <image> --normalize 1.0 --mean <red_channel_mean>,<green_channel_mean>,<blue_channel_mean> --rawdump
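As a concrete (hypothetical) invocation: the file names below are placeholders for the compiler's output and a test image, and the mean values are the classic Caffe ImageNet per-channel means, which may not match your model's training preprocessing:

```
./nvdla_runtime --loadable model.nvdla --image cat.jpg --normalize 1.0 --mean 104.0,117.0,123.0 --rawdump
```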
The --rawdump option in the previous command makes the runtime write an output.dimg file containing the results. Run the following command to sort them by confidence; the confidence values must then be cross-referenced against the indexes of the labels.
cat output.dimg | sed "s#\ #\n#g" | cat -n | sort -g -k2,2
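To see what the pipeline does, here is a self-contained sketch with a mocked five-class output.dimg (the scores are invented for illustration): sed puts one score per line, cat -n prefixes a 1-based index, and sort -g orders by score, so the last line printed is the most confident class.

```shell
# Mock a five-class output.dimg (space-separated scores; values invented).
printf '0.01 0.70 0.05 0.20 0.04' > output.dimg
# Same pipeline as above: one score per line, 1-based index, ascending sort by score.
cat output.dimg | sed "s#\ #\n#g" | cat -n | sort -g -k2,2
# Last line: index 2 with score 0.70, i.e. the second entry in the label file.
```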