GstInference/Supported backends/Tensorflow-Lite

TensorFlow Lite is an open-source software library that is part of TensorFlow™. It provides a deep learning framework for on-device inference. TensorFlow Lite models can be used on Android and iOS, and also on systems such as the Raspberry Pi and Arm64-based boards.

To use the TensorFlow-Lite backend with GstInference, be sure to configure R2Inference with the flag -Denable-tflite=true and set the property backend=tflite on the GstInference plugins. GstInference depends on the C++ API of TensorFlow-Lite.

Installation

GstInference depends on the C++ API of TensorFlow-Lite. For installation steps, follow the R2Inference/Building the library section of the R2Inference wiki.

The TensorFlow Python API and utilities can be installed with pip, but they are not needed by GstInference.
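
If you do want the optional Python tooling (for example TensorBoard, mentioned in the Tools section below), a standard pip installation is enough; nothing here is specific to GstInference:

# Optional: install the TensorFlow Python API and utilities (not required by GstInference)
pip3 install tensorflow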

Enabling the backend

To enable TensorFlow-Lite as a backend for GstInference you need to install R2Inference with TensorFlow-Lite support. To do this, use the option -Denable-tflite=true when building R2Inference, as described in the Installation section above.
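
For reference, a minimal configuration sketch is shown below. It assumes the Meson/Ninja build flow described in the R2Inference building guide; the directory names are illustrative:

# Configure, build, and install R2Inference with TensorFlow-Lite support (illustrative paths)
cd r2inference
meson build -Denable-tflite=true
ninja -C build
sudo ninja -C build install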

Generating a Graph

GstInference uses TensorFlow-Lite models for inference. You can generate a tflite model from a checkpoint file, from a saved session, or by converting a TensorFlow frozen graph model. For examples on how to create or convert models, please check the R2Inference/Supported_backends/TensorFlow-Lite section of the R2Inference wiki guide.
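
As a quick illustration, a TensorFlow frozen graph can typically be converted with the tflite_convert tool that ships with the TensorFlow Python package; the file names and tensor names below are placeholders, not values required by GstInference:

# Convert a frozen graph to a .tflite model (file and tensor names are placeholders)
tflite_convert \
  --graph_def_file=graph_inceptionv4.pb \
  --output_file=graph_inceptionv4.tflite \
  --input_arrays=input \
  --output_arrays=InceptionV4/Logits/Predictions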

Properties

The TensorFlow Lite API Reference has full documentation of the TensorFlow-Lite C++ API. GstInference uses only the C++ API of TensorFlow-Lite, and R2Inference takes care of devices and loading the models.

The following syntax is used to change backend options on GstInference plugins:

backend::<property>

For example, to use the TensorFlow-Lite backend with the inceptionv4 plugin, run a pipeline like this:

gst-launch-1.0 \
inceptionv4 name=net model-location=graph_inceptionv4.tflite backend=tflite backend::allow-fp16=0 backend::number-of-threads=4 \
videotestsrc ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
net.src_bypass ! fakesink

To learn more about the TensorFlow-Lite C++ API, please check the TensorFlow-Lite API section on the R2Inference sub-wiki.

Tools

The TensorFlow Python API installation includes a tool named TensorBoard, which can be used to visualize a model. For examples and a more complete description, please check the Tools section on the R2Inference wiki.
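
As a minimal sketch, once the TensorFlow Python package is installed, TensorBoard can be launched against an existing log directory that contains a graph summary; the path below is only a placeholder:

# Launch TensorBoard and open the printed URL in a browser (log directory is a placeholder)
tensorboard --logdir=/tmp/inceptionv4_logs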

