NVIDIA Jetson Xavier - Building TensorRT API examples
The following section demonstrates how to build and use the NVIDIA samples for the TensorRT C++ API and the Python API.
C++ API
First you need to build the samples. The TensorRT samples are installed in /usr/src/tensorrt/samples
by default. To build all the C++ samples, run:
cd /usr/src/tensorrt/samples
sudo make -j4
cd ../bin
./<sample_name>
After building, the binaries are generated in the /usr/src/tensorrt/bin directory, and they are named in snake_case (for example, sample_mnist). The source code, on the other hand, is located in the samples directory under a second-level directory named like the binary but in camelCase (for example, sampleMNIST). Some samples require extra steps, such as downloading a model or a frozen graph; those steps are enumerated in the README file inside each sample's source folder. The following table lists the sample binary names and descriptions:
Sample | Description |
---|---|
sample_mnist | The Caffe model was trained with the MNIST data set. To test the engine, this sample picks a handwritten digit at random and runs an inference with it. The sample outputs the ASCII rendering of the input image and the most likely digit associated with that image. |
sample_mnist_api | This sample builds a model from scratch using the C++ API; a minimal sketch of this approach appears after the table. For a more detailed guide on how to do this, you can visit this topic on the official documentation. This sample does not train the model; it just loads the pre-trained weights. |
sample_uff_mnist | This sample uses a pre-trained TensorFlow model that was frozen and converted to UFF. It outputs the inference results and the ASCII rendering of every digit from 0 to 9. |
sample_onnx_mnist | See this for a detailed ONNX parser configuration guide; a short sketch of the parsing flow appears after the table. |
sample_googlenet | See this for details on how to set the half-precision mode and how to profile the network. |
sample_char_rnn | The network is trained for predictive text completion with the Treebank-3 dataset. |
sample_int8 | INT8 inference is available only on GPUs with compute capability 6.1 or 7.x. The advantage of using INT8 is that inference is faster, but it requires an investment to determine how best to represent the weights and activations as 8-bit integers. The sample calibrates for MNIST but can be used to calibrate other networks. Run the sample on MNIST with ./sample_int8 mnist. |
sample_plugin | A limiting factor when using the Caffe and TensorFlow parsers is that unsupported layers result in an error. This sample creates a custom layer and adds it to the parser to work around that problem. The custom layer is a replacement for the network's final fully connected layer. |
sample_nmt | This sample requires more setup before it can be tested; follow the guide in its README file. For more information about NMT models, this is a great resource. |
sample_fasterRCNN | The model used in this example is too large to be included with the package; to download it, follow the guide in the sample's README. The model is based on this paper. The original Caffe model was modified to include the RPN and ROIPooling layers. |
sample_uff_ssd | The model used in this example is too large to be included with the package; to download it, follow the guide in the sample's README. |
sample_movielens | Each input of the model consists of a userID and a list of movieIDs. The network predicts the highest rated movie for each user. The sample uses a set of 32 users with 100 movies each and compares its prediction with the ground truth. |
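As mentioned in the sample_mnist_api row, the network definition can be built entirely in code. The snippet below is a minimal sketch of that approach, not the sample's actual source: the single convolution layer, its dimensions, the tensor names, and the zero-filled weights are illustrative placeholders (the real sample recreates the full LeNet-style topology and loads pre-trained weights for every layer), and it assumes a TensorRT 7.x-style API.

```cpp
// Minimal sketch of defining a network with the TensorRT C++ builder API,
// in the spirit of sample_mnist_api. Layer choice, dimensions, tensor names,
// and weights are placeholders. Assumes a TensorRT 7.x-style API.
#include <NvInfer.h>
#include <iostream>

using namespace nvinfer1;

// TensorRT requires a logger implementation.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetworkV2(0U);

    // Input: one 28x28 grayscale image (CHW, implicit batch), as in MNIST.
    ITensor* input = network->addInput("input", DataType::kFLOAT, Dims3{1, 28, 28});

    // Zero-initialized placeholder weights; the real sample loads trained ones.
    static float kernelData[20 * 1 * 5 * 5]{};
    static float biasData[20]{};
    Weights kernel{DataType::kFLOAT, kernelData, 20 * 1 * 5 * 5};
    Weights bias{DataType::kFLOAT, biasData, 20};

    // One convolution standing in for the full network topology.
    IConvolutionLayer* conv = network->addConvolutionNd(*input, 20, DimsHW{5, 5}, kernel, bias);
    conv->getOutput(0)->setName("output");
    network->markOutput(*conv->getOutput(0));

    // Build the engine.
    IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 20);
    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine)
    {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }
    // ... run inference with an IExecutionContext, then release resources ...
    engine->destroy();
    config->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```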
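For sample_onnx_mnist, by contrast, the parser does most of the work. The sketch below shows the typical ONNX import flow under the same TensorRT 7.x API assumption; the buildFromOnnx helper is illustrative and not part of the sample.

```cpp
// Sketch of importing an ONNX model into a TensorRT network, in the spirit
// of sample_onnx_mnist. buildFromOnnx is an illustrative helper.
// Assumes a TensorRT 7.x-style API.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <iostream>

using namespace nvinfer1;

// Minimal logger; parse errors are reported through it.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;

ICudaEngine* buildFromOnnx(const char* onnxPath)
{
    IBuilder* builder = createInferBuilder(gLogger);

    // The ONNX parser requires an explicit-batch network definition.
    const uint32_t flags = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition* network = builder->createNetworkV2(flags);

    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile(onnxPath, static_cast<int>(ILogger::Severity::kWARNING)))
        return nullptr;  // details of the failure go to the logger

    IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 20);
    return builder->buildEngineWithConfig(*network, *config);
}
```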
Python API
You can find the Python samples in the /usr/src/tensorrt/samples/python
directory. Every Python sample includes a README.md and a requirements.txt file. Running one of the Python samples typically involves two steps:
python -m pip install -r requirements.txt  # install the sample requirements
python sample.py                           # run the sample
The available samples are:
- introductory_parser_samples
- end_to_end_tensorflow_mnist
- network_api_pytorch_mnist
- fc_plugin_caffe_mnist
- uff_custom_plugin
The Python API isn't supported on Xavier at this time, and the Python API samples are not included in Xavier's TensorRT installation. To get these samples, you need to install TensorRT on the host.