Qualcomm Robotics RB5/RB6 - TensorFlow to DLC Conversion
snpe-tensorflow-to-dlc
The Snapdragon Neural Processing Engine (SNPE) SDK provides tools for converting machine learning models into a format optimized for execution on devices powered by Qualcomm Snapdragon processors. One of these tools is snpe-tensorflow-to-dlc, which converts TensorFlow models into the DLC (Deep Learning Container) format. This process is important for deploying AI models on mobile and edge devices, offering optimized performance for inference tasks. In this guide, we use SDK version snpe-2.5.0.4052.
Official documentation for this script: https://developer.qualcomm.com/sites/default/files/docs/snpe/tools.html#tools_snpe-tensorflow-to-dlc
Setup environment
- Before proceeding with the model conversion, remember to set up your environment correctly. The SNPE_ROOT and TENSORFLOW_DIR variables must be defined, as mentioned in the previous sections. To ensure a smooth experience, we recommend using a Python virtual environment dedicated to working with the SNPE SDK. Starting a session with sudo -s within this environment may also help avoid permission-related issues.
In the host set the following variables:
export SNPE_ROOT=<path_to_SDK_installation>/snpe-<version>
python3.6 -m pip show tensorflow | grep Location
Location: <path_to_tensorflow_installation>
export TENSORFLOW_DIR=<path_to_tensorflow_installation>
export ANDROID_NDK_ROOT=<path_to_Android_Studio_installation>/Android/Sdk/ndk-bundle
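As an alternative to copying the pip output by hand, the Location reported by pip can be captured directly into the variable. This is a sketch, assuming TensorFlow is already installed for the interpreter you will use with the SDK:

```shell
# Capture the site-packages path that `pip show tensorflow` reports
# in its Location field, instead of pasting it manually.
export TENSORFLOW_DIR="$(python3 -m pip show tensorflow | awk '/^Location:/ {print $2}')"
echo "TENSORFLOW_DIR=$TENSORFLOW_DIR"
```

If the variable comes back empty, TensorFlow is not installed for that interpreter.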
Then, setup the environment for each framework:
source bin/envsetup.sh -t $TENSORFLOW_DIR
source bin/envsetup.sh --tflite $TFLITE_DIR
In the RB5 set the following variables:
export MODELS_DIR=/data/misc/camera/
export VIDEO_DIR=/path/to/video/
- Additionally, when you run the script, you might encounter requirements or dependencies that need to be installed. Make sure to install these additional components as needed to avoid any interruptions in your workflow.
Run the dependency scripts:
cd $SNPE_ROOT
source bin/dependencies.sh
sudo apt install <package>  # If you need to install any package
Python dependencies:
source bin/check_python_depends.sh
python3.6 -m pip install <package==version>  # If you need to install any package
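If the checker reports missing packages, a small helper like the following can confirm whether a given package is importable before re-running it. The helper name and the package names are illustrative only; bin/check_python_depends.sh prints the authoritative list for your SDK version:

```shell
# Report whether a Python package can be imported, and suggest an
# install command if it cannot.
check_py_pkg() {
    if python3 -c "import $1" 2>/dev/null; then
        echo "$1: OK"
    else
        echo "$1: MISSING (try: python3 -m pip install $1)"
    fi
}

# Example packages commonly required by the SDK tools:
check_py_pkg numpy
check_py_pkg yaml
```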
- Before running the conversion tool, update the PYTHONPATH environment variable to include the path to the SNPE SDK's Python libraries, so that the conversion script can find all of its dependencies. Execute the following command in your terminal:
export PYTHONPATH=$SNPE_ROOT/lib/python:$PYTHONPATH
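If you source your environment setup more than once, the export above keeps prepending the same path. A guarded variant, sketched below with a helper name of our choosing and assuming SNPE_ROOT is set as shown earlier, keeps PYTHONPATH clean:

```shell
# Prepend the SDK's Python bindings to PYTHONPATH only when they are
# not already present, so repeated sourcing does not grow the variable.
add_snpe_pythonpath() {
    case ":$PYTHONPATH:" in
        *":$1/lib/python:"*) ;;  # already present, nothing to do
        *) export PYTHONPATH="$1/lib/python${PYTHONPATH:+:$PYTHONPATH}" ;;
    esac
}

add_snpe_pythonpath "$SNPE_ROOT"
```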
Key Flags Explanation
Before diving into the conversion process, it's necessary to understand the key flags used in the snpe-tensorflow-to-dlc command for its most basic use:
- --input_dim: Specifies the input dimensions of the network's input layer(s). This flag is required for defining how the input data should be structured for the model.
- --input_network: Path to the TensorFlow model file to be converted. This can be a frozen graph .pb file, a pair of .meta and checkpoint files, or a SavedModel directory.
- --out_node: Indicates the name(s) of the graph's output node(s), essentially defining the final output layer of your network. Multiple output nodes can be specified by providing multiple --out_node flags.
- --output_path: Specifies the path and name of the resulting DLC file. If not provided, the tool generates a DLC file with the same base name as the input model file.
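To sketch how these flags combine for a network with two inputs and two outputs, the command below assembles a hypothetical invocation (the layer names data_a, data_b, probs, and boxes are placeholders, not from a real model). The point is that --input_dim and --out_node are simply repeated once per layer:

```shell
# Hypothetical two-input, two-output conversion command; repeat
# --input_dim and --out_node once per input/output layer.
CONVERT_CMD="./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc \
--input_dim 'data_a' 1,224,224,3 \
--input_dim 'data_b' 1,10 \
--input_network two_input_model.pb \
--out_node probs \
--out_node boxes \
--output_path ./two_input_model.dlc"
echo "$CONVERT_CMD"
```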
For a deeper understanding of your model's architecture, and to verify its characteristics before conversion, you can utilize the tool available at https://netron.app/. Netron is an interactive viewer that allows you to explore the layers, dimensions, and overall structure of neural network models. By uploading your TensorFlow model file to this website, you can visually inspect the details of your model, ensuring the accurate specification of input dimensions, output nodes, and other parameters relevant to the conversion process. When you upload your model to Netron you will see something like the following:
For example, for the ssdlite_object_detection.tflite model, click the input node (green) and you will see features such as input_dim (red). This node expects an input tensor with shape [1, 300, 300, 3], which indicates that the input image should be 300x300 pixels in size and have 3 color channels (RGB). Moreover, the tensor should be of type uint8, so the pixel values will range from 0 to 255 for each color channel.

For example, for the mobilenet_v2_0.35_96_frozen.pb model, the --out_node is MobilenetV2/Predictions/Reshape, as used in the conversion command below.

Conversion Example
- For this example, get the model here: https://zenodo.org/records/2266646 and search for mobilenet_v2_0.35_96_frozen.pb
- Run the following command in the SNPE_ROOT directory:
./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --input_dim 'input' 1,96,96,3 --input_network mobilenet_v2_0.35_96_frozen.pb --out_node MobilenetV2/Predictions/Reshape --output_path ./MobileNet_V2_pb.dlc
- Move the .dlc to MODELS_DIR in the RB5.
Testing with qtimlesnpe
It is important to ensure that the qtimlesnpe element was built with the same SDK version that was used to convert the model (see Known Issues).
The following GStreamer command line can be used for testing, which loads the DLC model and performs inference on a video file:
gst-launch-1.0 filesrc location=$VIDEO_DIR/walking.mp4 ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! qtivdec ! videoconvert ! video/x-raw,format=NV12 ! qtimlesnpe model=$MODELS_DIR/MobileNet_V2_pb.dlc labels=$MODELS_DIR/imagenet_slim_labels.txt postprocessing="classification" runtime=1 ! queue ! qtioverlay ! autovideosink
snpe-tflite-to-dlc
The snpe-tflite-to-dlc tool is part of the Snapdragon Neural Processing Engine (SNPE) SDK and converts TFLite models into the optimized DLC format. This conversion enables efficient execution on-device and leverages the device's full AI capabilities.
Official documentation for this script: https://developer.qualcomm.com/sites/default/files/docs/snpe/tools.html#tools_snpe-tflite-to-dlc
Setup environment
- Update the PYTHONPATH environment variable:
export PYTHONPATH=$SNPE_ROOT/lib/python:$PYTHONPATH
- Install necessary Python packages:
pip install attrs pytest
python3.6 -m pip install tflite==2.3.0
Key Flags Explanation
- --input_dim: Specifies the input dimensions of the network's input layers. Each input layer's name and dimensions must be provided in a specific format, including quotes to handle special characters or spaces.
- --input_network: Path to the source TFLite model file that you want to convert.
- --output_path: Specifies the file path for the converted DLC file. If not provided, the DLC file will be named after the TFLite file with a .dlc extension.
Conversion Example
- For this example, get the model here: https://github.com/google/mediapipe/tree/0.8.0/mediapipe/models and search for ssdlite_object_detection.tflite
- Here is a minimal example command to convert a TFLite model into a DLC file. Run the following command in the SNPE_ROOT directory:
./bin/x86_64-linux-clang/snpe-tflite-to-dlc --input_dim 'input' 1,320,320,3 --input_network ssdlite_object_detection.tflite --output_path ./ssdlite_object_detection.dlc
- Move the .dlc to MODELS_DIR in the RB5.
Known Issues
- Mismatch between qtimlesnpe and SNPE SDK versions: if you encounter failures when testing your DLC model with the qtimlesnpe element, the cause may be a version mismatch. Discrepancies between the SDK used to build qtimlesnpe and the one used for model conversion can lead to unexpected behavior or errors, since specific SNPE SDK versions may introduce changes or optimizations that are not backward compatible. To avoid issues, verify and align the SNPE SDK versions used for the model conversion and for compiling the qtimlesnpe element.