GstInference and TensorFlow-Lite backend
Make sure you also check GstInference's companion project: R2Inference.
TensorFlow Lite is an open-source software library that is part of TensorFlow™. It provides a deep learning framework for on-device inference. TensorFlow Lite models can be used on Android and iOS, as well as on systems such as the Raspberry Pi and Arm64-based boards.
To use the TensorFlow-Lite backend on GstInference, configure R2Inference with the flag -Denable-tflite=true and set the property backend=tflite on the GstInference plugins. GstInference depends on the C++ API of TensorFlow-Lite.
Installation
For installation steps, follow the R2Inference/Building the library section of the R2Inference wiki.
The TensorFlow Python API and utilities can be installed with pip, but they are not needed by GstInference.
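If you do want the Python utilities (for example, the TensorBoard tool described in the Tools section below), a typical pip installation looks like this:

# Optional: TensorFlow Python API and utilities (not required by GstInference)
pip3 install tensorflow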
Enabling the backend
To enable TensorFlow-Lite as a backend for GstInference you need to install R2Inference with TensorFlow-Lite support. To do this, use the option -Denable-tflite=true while following the R2Inference installation wiki.
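As a quick sketch of where that option fits in a standard Meson build (the build directory name and install step are illustrative; the R2Inference wiki remains the authoritative guide):

# Configure R2Inference with TensorFlow-Lite support enabled
meson build -Denable-tflite=true
# Compile and install the library
ninja -C build
sudo ninja -C build install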
Generating a Graph
GstInference uses TensorFlow-Lite models for inference. You can generate a .tflite model from a checkpoint file, from a saved session, or by converting a TensorFlow frozen graph. For examples of how to create or convert models, please check the R2Inference/Supported_backends/TensorFlow-Lite section of the R2Inference wiki.
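For instance, a frozen TensorFlow graph can be converted from the command line with the tflite_convert tool shipped with the TensorFlow 1.x Python package; the file names and tensor names below are illustrative placeholders that depend on your model:

# Convert a frozen graph to a .tflite model (names are placeholders)
tflite_convert \
  --graph_def_file=graph_inceptionv4.pb \
  --output_file=graph_inceptionv4.tflite \
  --input_arrays=input \
  --output_arrays=output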
Properties
The TensorFlow Lite API Reference has the full documentation of the TensorFlow-Lite C++ API. GstInference uses only the C++ API of TensorFlow-Lite; R2Inference takes care of devices and of loading the models.
The following syntax is used to change backend options on GstInference plugins:
backend::<property>
For example, to use TensorFlow-Lite as the backend of the inceptionv4 plugin, run a pipeline like this:
gst-launch-1.0 \
inceptionv4 name=net model-location=graph_inceptionv4.tflite backend=tflite backend::allow-fp16=0 backend::number-of-threads=4 \
videotestsrc ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
net.src_bypass ! fakesink
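In this pipeline, backend::number-of-threads limits how many threads the inference may use and backend::allow-fp16=0 disallows computing fp32 operations in reduced fp16 precision; these correspond to the thread-count and fp16-precision options of the TensorFlow-Lite interpreter.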
To learn more about the TensorFlow-Lite C++ API, please check the TensorFlow-Lite API section on the R2Inference sub-wiki.
Tools
The TensorFlow Python API installation includes a tool named TensorBoard, which can be used to visualize a model. For examples and a more complete description, please check the Tools section on the R2Inference wiki.
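As a minimal sketch (the log directory path is a placeholder), TensorBoard is launched from the command line and then opened in a browser:

# Point TensorBoard at a directory containing TensorFlow event files
tensorboard --logdir=/path/to/logs
# Then open http://localhost:6006 in a browser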