GstInference and ONNXRT OpenVINO backend


The ONNXRT OpenVINO backend is an extension of the ONNXRT backend. This backend is based on Intel's OpenVINO toolkit support available in ONNX Runtime. OpenVINO offers a boost in performance through optimizations for common computer vision and deep learning workloads on Intel's hardware (CPUs, Movidius USB sticks, MyriadX VPUs, and FPGAs).

Installation

GstInference depends on the C++ API of ONNX Runtime. For installation steps, see the R2Inference/Building the library section.
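
ONNX Runtime itself must be built with the OpenVINO execution provider enabled. As a reference only, the sketch below assumes ONNX Runtime's build.sh script and its --use_openvino flag, with the OpenVINO toolkit already installed and its environment sourced; check R2Inference/Building the library for the exact versions and flags:

  # Build ONNX Runtime with the OpenVINO execution provider enabled
  # (assumed flags; the OpenVINO environment must already be sourced)
  ./build.sh --config Release --use_openvino CPU_FP32 --build_shared_lib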

Enabling the backend

To use the ONNXRT OpenVINO backend with GstInference, make sure to configure R2Inference with the flags -Denable-onnxrt=true and -Denable-onnxrt-openvino=true. Then, set the property backend=onnxrt_openvino on the GstInference elements. Please refer to R2Inference/Building the library for more information.
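
For instance, with R2Inference's Meson build system the configuration step looks roughly like this (the build directory name is arbitrary):

  meson setup build -Denable-onnxrt=true -Denable-onnxrt-openvino=true
  ninja -C build
  sudo ninja -C build install

Once installed, the backend can be selected on any GstInference element. A minimal sketch, assuming a tinyyolov2 element and a placeholder model path graph.onnx:

  gst-launch-1.0 v4l2src ! videoconvert ! tinyyolov2 model-location=graph.onnx backend=onnxrt_openvino ! fakesink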

Properties

This backend is an extension of the base ONNXRT backend, so some of the available properties are inherited from the base class. Check the ONNXRT OpenVINO API Reference page for further information.

This backend also includes a new property called hardware-id, which selects the hardware device that executes the inference of the model. The available options should match those currently supported by ONNX Runtime's OpenVINO execution provider (see the usage sketch after this list):

  • CPU_FP32: Default. Intel® CPUs
  • GPU_FP32: Intel® Integrated Graphics
  • GPU_FP16: Intel® Integrated Graphics with FP16 quantization of models
  • MYRIAD_FP16: Intel® Movidius™ USB sticks
  • VAD-M_FP16: Intel® Vision Accelerator Design based on 8 Movidius™ MyriadX VPUs
  • VAD-F_FP32: Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA

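As an illustration, backend-specific properties are set on GstInference elements through the backend:: prefix. A minimal sketch targeting a Movidius USB stick, again assuming a tinyyolov2 element and a placeholder model path:

  gst-launch-1.0 v4l2src ! videoconvert ! tinyyolov2 model-location=graph.onnx backend=onnxrt_openvino backend::hardware-id=MYRIAD_FP16 ! fakesink
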
Make sure to check the corresponding R2Inference documentation for details on setting up the OpenVINO installation for the different hardware devices.

