GstInference and ONNXRT ACL backend
The ONNXRT ACL backend is an extension of the ONNXRT backend, based on the ACL Execution Provider available in ONNX Runtime. ACL stands for Arm Compute Library, a library of optimized functions for the Arm Cortex-A family of CPUs and the Arm Mali family of GPUs. It accelerates common computer vision and machine learning operations through technologies such as Arm Neon and OpenCL.
Installation
GstInference depends on the C++ API of ONNX Runtime. For installation instructions, follow the R2Inference/Building the library section.
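As a rough sketch, ONNX Runtime itself can be built from source with the ACL execution provider enabled. The commands below follow ONNX Runtime's build script; the exact flags (and the paths to a prebuilt Arm Compute Library) vary between releases, so verify them against your checkout:

```bash
# Sketch: building ONNX Runtime with the ACL execution provider.
# Flag names follow ONNX Runtime's build.sh and may differ between releases;
# a prebuilt Arm Compute Library may also need to be supplied (e.g. via
# --acl_home/--acl_libs). Check ./build.sh --help on your checkout.
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config Release \
           --build_shared_lib \
           --parallel \
           --use_acl
```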
Enabling the backend
To use the ONNXRT ACL backend with GstInference, be sure to configure R2Inference with the flags -Denable-onnxrt=true and -Denable-onnxrt-acl=true. Then set the property backend=onnxrt_acl on the GstInference plugins. Please refer to this wiki page for more information.
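For instance, assuming R2Inference's Meson build system, enabling both backends looks like the following; the detection pipeline afterwards is an illustrative sketch adapted from the project's example pipelines, with placeholder device, model, and labels paths:

```bash
# Configure and build R2Inference with the ONNXRT and ONNXRT ACL backends
meson build -Denable-onnxrt=true -Denable-onnxrt-acl=true
ninja -C build
sudo ninja -C build install

# Illustrative detection pipeline selecting the backend (paths are placeholders)
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
  t. ! videoscale ! queue ! net.sink_model \
  t. ! queue ! net.sink_bypass \
  tinyyolov2 name=net backend=onnxrt_acl \
    model-location=graph_tinyyolov2.onnx labels="$(cat labels.txt)" \
  net.src_bypass ! inferenceoverlay ! videoconvert ! autovideosink sync=false
```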
Properties
This backend is an extension of the base ONNXRT backend, so the available properties are inherited from the base class. Check the ONNXRT ACL API Reference page for further information.
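As a quick way to list those inherited properties on a given element, gst-inspect-1.0 can be used. The logging-level name below is only assumed for illustration; consult the API reference for the actual property set:

```bash
# List an inference element's properties, including the ones inherited from
# the base ONNXRT backend (tinyyolov2 is just an example element):
gst-inspect-1.0 tinyyolov2 | grep -i backend

# Inherited backend properties are set with the backend:: prefix on the
# element, e.g. in a launch line (property name assumed; see the reference):
#   tinyyolov2 backend=onnxrt_acl backend::logging-level=2 ...
```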