R2Inference - Building the library
Make sure you also check R2Inference's companion project: GstInference.
R2Inference dependencies
R2Inference has the following dependencies:
- pkg-config
- cpputest
- doxygen
Many backends also have these common dependencies:
- git
- curl
- unzip
R2Inference also uses the Meson build system.
On Debian-based systems, you can install the dependencies with the following command:
sudo apt-get install -y python3 python3-pip python3-setuptools python3-wheel ninja-build pkg-config libcpputest-dev doxygen git curl unzip
Then, use pip3 to install the latest version of Meson directly from its repository.
sudo -H pip3 install git+https://github.com/mesonbuild/meson.git
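You can confirm that Meson is available by checking its version:
meson --version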
You need to install the API for at least one of our supported backends in order to build R2inference. Follow these links for instructions on how to install your preferred backend:
- TensorFlow installation instructions
- TensorFlow-Lite installation instructions
- TensorRT installation instructions
- Edge TPU installation instructions
- ONNXRT installation instructions
- ONNXRT ACL installation instructions
Installing the R2Inference library
Linux
These instructions have been tested on:
- x86
- ARM64
To build and install R2Inference, run the commands listed after the configuration notes below. The following table shows the available configure options:
Configure Option | Description |
---|---|
-Denable-coral=true | Compile the library with Coral Edge TPU backend support |
-Denable-tensorflow=true | Compile the library with TensorFlow backend support |
-Denable-tflite=true | Compile the library with TensorFlow Lite backend support |
-Denable-tensorrt=true | Compile the library with TensorRT backend support |
-Denable-onnxrt=true | Compile the library with ONNXRT backend support |
-Denable-onnxrt-acl=true | Compile the library with ONNXRT backend with Arm Compute Library (ACL) support |
-Denable-onnxrt-openvino=true | Compile the library with ONNXRT backend with OpenVINO support |
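For example, to configure a build with only the TensorFlow and TensorFlow Lite backends enabled (an illustrative combination; choose the options that match the backends installed on your system), the OPTIONS variable used in the build commands below could be set as:
OPTIONS="-Denable-tensorflow=true -Denable-tflite=true"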
If you use the ONNXRT backend (or any of its execution providers), you need to add the following flags to the build configuration:
# NOTE:
# These exports are only needed if you are building the ONNXRT backend.
# They are NOT necessary if using any of the other backends.
export ONNXRUNTIMEPATH=/PATH/ONNXRUNTIME/SRC/include/onnxruntime/
export CPPFLAGS="-I${ONNXRUNTIMEPATH}"
The Edge TPU backend depends on the TensorFlow Lite backend, so you also need to enable it. In addition, you need to add the following flags:
# NOTE:
# These exports are only needed if you are using the Edge TPU and TFLite backends.
# They are NOT necessary if using any of the other backends.
export TENSORFLOW_PATH='<path-to-tensorflow>'
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include -L${TENSORFLOW_PATH}/tensorflow/lite/tools/make/gen/linux_aarch64/lib"
If you use the TensorFlow Lite backend, you need to add the following flags before configuring:
# NOTE:
# These exports are only needed if you are using the TFLite backend.
# They are NOT necessary if using any of the other backends.
export TENSORFLOW_PATH=/PATH/TENSORFLOW/SRC
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include"
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
meson build $OPTIONS          # Choose the appropriate configuration from the table above
ninja -C build                # Compile the project
ninja -C build test           # Run tests
sudo ninja -C build install   # Install the library
Note: If you are building R2Inference on the Coral Dev Kit, consider using ninja -C build -j 1 instead to avoid the compilation being killed due to insufficient memory.
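Once installed, you can check that the library is visible to pkg-config (the same package name is used in the Verify section below):
pkg-config --modversion r2inference-0.0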
Yocto
R2Inference is available in RidgeRun's meta-layer; please check our recipes here. Currently, only i.MX8 platforms are supported with Yocto.
First, create a Yocto environment for i.MX8; the dedicated i.MX8 Yocto guide has more information on setting up a Yocto environment.
In your Yocto sources folder, run the following command:
git clone https://github.com/RidgeRun/meta-ridgerun.git
Enable RidgeRun's meta-layer in your conf/bblayers.conf file by adding the following line.
${BSPDIR}/sources/meta-ridgerun \
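For reference, the entry typically ends up inside the BBLAYERS list of conf/bblayers.conf, roughly as follows (the surrounding layers and exact paths depend on your BSP):
BBLAYERS += " \
  ${BSPDIR}/sources/meta-ridgerun \
"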
Enable Prebuilt-TensorFlow and R2Inference in your conf/local.conf.
IMAGE_INSTALL_append = "prebuilt-tensorflow r2inference"
Finally, build your desired image; the previous steps added R2Inference and its requirements to your Yocto image.
Verify
You can verify the library with a simple application:
r2i_verify.cc
#include <iostream>

#include <r2i/r2i.h>

void PrintFramework (r2i::FrameworkMeta &meta) {
  std::cout << "Name : " << meta.name << std::endl;
  std::cout << "Description : " << meta.description << std::endl;
  std::cout << "Version : " << meta.version << std::endl;
  std::cout << "---" << std::endl;
}

int main (int argc, char *argv[]) {
  r2i::RuntimeError error;

  std::cout << "Backends supported by your system:" << std::endl;
  std::cout << "==================================" << std::endl;

  for (auto &meta : r2i::IFrameworkFactory::List (error)) {
    PrintFramework (meta);
  }

  return 0;
}
You may build this example by running:
g++ r2i_verify.cc `pkg-config --cflags --libs r2inference-0.0` -std=c++11 -o r2i_verify
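Then run the resulting binary; it prints the name, description, and version of each backend that was enabled when the library was built:
./r2i_verify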
You can also check our examples page to get started with the examples included with the library.
Troubleshooting
- After following the TensorFlow installation instructions, you get the following installation issue:
configure: *** checking feature: tensorflow ***
checking for TF_Version in -ltensorflow... no
configure: error: Couldn't find tensorflow
[AUTOGEN][11:46:38][ERROR] Failed to run configure
This means the /usr/local directory has not been included in your system library paths. Export LD_LIBRARY_PATH, appending the /usr/local/lib location:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
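To make this setting persistent across sessions, you can append it to your shell profile (assuming bash):
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/' >> ~/.bashrc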
Known issues
- If GstInference and R2Inference were built on Ubuntu 16.04 with both the TensorFlow and TensorFlow Lite backends enabled, the build may have issues or present segmentation faults when using one of these backends.