GstInference and ONNXRT OpenVINO backend
Make sure you also check GstInference's companion project: R2Inference.
The ONNXRT OpenVINO backend is an extension of the ONNXRT backend, built on the Intel OpenVINO toolkit support available in ONNX Runtime. OpenVINO boosts performance through optimizations for common computer vision and deep learning workloads on Intel hardware (CPUs, Movidius USB sticks, MyriadX VPUs, and FPGAs).
Installation
GstInference depends on the C++ API of ONNX Runtime. For installation instructions, follow the steps in the R2Inference/Building the library section.
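As a rough illustration only, building ONNX Runtime from source with its OpenVINO execution provider typically looks like the sketch below. The repository branch, hardware target, and flags here are assumptions; the R2Inference/Building the library section is the authoritative reference for the supported versions.

  # Assumes the OpenVINO toolkit is already installed and its environment script has been sourced
  git clone --recursive https://github.com/microsoft/onnxruntime.git
  cd onnxruntime
  # --use_openvino selects the OpenVINO execution provider for a given hardware target (CPU_FP32 shown)
  ./build.sh --config Release --build_shared_lib --parallel --use_openvino CPU_FP32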
Enabling the backend
To use the ONNXRT OpenVINO backend on GstInference, be sure to run the R2Inference configure step with the flags -Denable-onnxrt=true and -Denable-onnxrt-openvino=true. Then, set the property backend=onnxrt_openvino on the GstInference plugins, as shown in the sketch below. Please refer to R2Inference/Building the library for more information.
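A minimal sketch of the configure, build, and usage steps follows. The meson build directory, example image, model file name, and pipeline layout are illustrative assumptions; only the -Denable-onnxrt=true and -Denable-onnxrt-openvino=true flags and the backend=onnxrt_openvino property come from this page.

  # Configure and build R2Inference with the ONNXRT OpenVINO backend enabled
  meson build -Denable-onnxrt=true -Denable-onnxrt-openvino=true
  ninja -C build
  sudo ninja -C build install

  # Illustrative classification pipeline selecting the backend on a GstInference element
  gst-launch-1.0 multifilesrc location=cat.jpg start-index=0 stop-index=0 loop=true ! \
    jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
    inceptionv1 name=net model-location=graph_inceptionv1.onnx backend=onnxrt_openvino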
Properties
This backend is an extension of the base ONNXRT backend, so some of the available properties are inherited from the base class. Check the ONNXRT OpenVINO API Reference page for further information.
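A quick way to list the properties a given GstInference element exposes, including the backend selection, is gst-inspect-1.0; inceptionv1 below is just one example element:

  gst-inspect-1.0 inceptionv1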
This backend also includes a new property called hardware-id, which selects the hardware device that executes the model inference. The available options match those currently supported in ONNX Runtime, listed below (a usage sketch follows the list):
CPU_FP32: Default. Intel® CPUs
GPU_FP32: Intel® Integrated Graphics
GPU_FP16: Intel® Integrated Graphics with FP16 quantization of models
MYRIAD_FP16: Intel® Movidius™ USB sticks
VAD-M_FP16: Intel® Vision Accelerator Design based on 8 Movidius™ MyriadX VPUs
VAD-F_FP32: Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA
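As a hedged example, the device can be chosen by setting hardware-id through the same backend::<property> child-property syntax used for other GstInference backend properties; the element name, image, model file, and pipeline layout below are illustrative assumptions:

  # Run the inference on a Movidius USB stick (MYRIAD_FP16); CPU_FP32 is the default
  gst-launch-1.0 multifilesrc location=cat.jpg start-index=0 stop-index=0 loop=true ! \
    jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
    inceptionv1 name=net model-location=graph_inceptionv1.onnx \
    backend=onnxrt_openvino backend::hardware-id=MYRIAD_FP16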
Make sure to check the corresponding R2Inference documentation for instructions on setting up the OpenVINO installation for the different hardware devices.