R2Inference - Building the library
Make sure you also check R2Inference's companion project: GstInference.
R2Inference dependencies
R2Inference has the following dependencies:
- autoconf
- automake
- pkg-config
- libtool
- cpputest
- doxygen
On Debian-based systems, you can install the dependencies with the following command:
sudo apt-get install -y autoconf automake pkg-config libtool libcpputest-dev doxygen
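As a quick sanity check (assuming a POSIX shell), you can verify that the build tools are on your PATH before configuring:

```shell
# Collect any required build tools that are not installed
missing=""
for tool in autoconf automake pkg-config libtool doxygen; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing tools:$missing"
else
  echo "All build tools found"
fi
```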
You need to install the API for at least one of our supported backends in order to build R2Inference. Follow these links for instructions on how to install your preferred backend:
- NCSDK installation instructions
- TensorFlow installation instructions
- TensorFlow-Lite installation instructions
Installing R2Inference library
Linux
Autotools
These instructions have been tested on:
- x86
- ARM64
To build and install r2inference, run the commands below, selecting the configure options you need from the following table:
Configure Option | Description
---|---
--enable-edgetpu | Compile the library with EdgeTPU backend support
--enable-ncsdk | Compile the library with NCSDK backend support
--enable-tensorflow | Compile the library with TensorFlow backend support
--enable-tflite | Compile the library with TensorFlow-Lite backend support
The EdgeTPU backend depends on the TensorFlow-Lite backend, so you must enable both (--enable-edgetpu --enable-tflite).
If you are using the TensorFlow-Lite backend, you must also export the following variables before running configure:
# NOTE:
# These exports are only needed if you are using the TFLite backend.
# They are NOT necessary for any of the other backends.
export TENSORFLOW_PATH=/PATH/TENSORFLOW/SRC
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include"
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
./autogen.sh $OPTIONS # CHOOSE THE APPROPRIATE CONFIGURATION FROM THE TABLE ABOVE
make
make check
sudo make install
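As a concrete (hypothetical) example, a build enabling the TensorFlow and TensorFlow-Lite backends would look like this; substitute whichever options from the table above match the backends you installed:

```shell
# Hypothetical option selection: TensorFlow plus TensorFlow-Lite
OPTIONS="--enable-tensorflow --enable-tflite"
./autogen.sh $OPTIONS
make
make check
sudo make install
```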
Meson
These instructions have been tested on:
- x86
To build and install r2inference, run the commands below, selecting the configure options you need from the following table:
Configure Option | Description
---|---
-Denable-edgetpu=true | Compile the library with EdgeTPU backend support
-Denable-tensorflow=true | Compile the library with TensorFlow backend support
-Denable-tflite=true | Compile the library with TensorFlow-Lite backend support
-Denable-tensorrt=true | Compile the library with TensorRT backend support
If you are using the TensorFlow-Lite backend, you must also export the following variables before running configure:
# NOTE:
# These exports are only needed if you are using the TFLite backend.
# They are NOT necessary for any of the other backends.
export TENSORFLOW_PATH=/PATH/TENSORFLOW/SRC
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include"
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
meson build $OPTIONS # CHOOSE THE APPROPRIATE CONFIGURATION FROM THE TABLE ABOVE
ninja -C build # Compile project
ninja -C build test # Run tests
sudo ninja -C build install # Install the library
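For example, a Meson build enabling only the TensorFlow-Lite backend (a hypothetical choice; pick the -D options you need from the table above) would look like this:

```shell
# Hypothetical option selection: TensorFlow-Lite only
meson build -Denable-tflite=true
ninja -C build
ninja -C build test
sudo ninja -C build install
```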
Yocto
R2Inference is available in RidgeRun's meta-layer; please check our recipes here. Currently, only i.MX8 platforms are supported with Yocto.
First, create a Yocto environment for i.MX8. The dedicated i.MX8 Yocto guide has more information on setting up a Yocto environment.
In your Yocto sources folder, run the following command:
git clone https://github.com/RidgeRun/meta-ridgerun.git
Enable RidgeRun's meta-layer in your conf/bblayers.conf file by adding the following line:
${BSPDIR}/sources/meta-ridgerun \
Enable Prebuilt-TensorFlow and R2Inference in your conf/local.conf:
IMAGE_INSTALL_append = "prebuilt-tensorflow r2inference"
Finally, build your desired image. The previous steps added R2Inference and its requirements to your Yocto image.
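For example (the image name here is an assumption; use whichever image your i.MX8 BSP provides):

```shell
# Build the image; R2Inference is pulled in through IMAGE_INSTALL
bitbake core-image-base
```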
Verify
You can verify the library with a simple application:
r2i_verify.cc
#include <iostream>
#include <r2i/r2i.h>

void PrintFramework (r2i::FrameworkMeta &meta) {
  std::cout << "Name        : " << meta.name << std::endl;
  std::cout << "Description : " << meta.description << std::endl;
  std::cout << "Version     : " << meta.version << std::endl;
  std::cout << "---" << std::endl;
}

int main (int argc, char *argv[]) {
  r2i::RuntimeError error;

  std::cout << "Backends supported by your system:" << std::endl;
  std::cout << "==================================" << std::endl;

  for (auto &meta : r2i::IFrameworkFactory::List (error)) {
    PrintFramework (meta);
  }

  return 0;
}
You may build this example by running:
g++ r2i_verify.cc `pkg-config --cflags --libs r2inference-0.0` -std=c++11 -o r2i_verify
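If the build fails, you can first confirm that pkg-config can locate the library (the module name r2inference-0.0 is taken from the build command above), then run the example:

```shell
# Print the installed library version, then run the verification binary
pkg-config --modversion r2inference-0.0 && ./r2i_verify
```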
You can also check our examples page to get started with the examples included with the library.
Troubleshooting
- After following the TensorFlow installation instructions, you may hit the following error when building R2Inference:
configure: *** checking feature: tensorflow ***
checking for TF_Version in -ltensorflow... no
configure: error: Couldn't find tensorflow
[AUTOGEN][11:46:38][ERROR] Failed to run configure
This means /usr/local has not been included in your system library paths. Export LD_LIBRARY_PATH, appending the /usr/local/lib location:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
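To avoid the path being appended twice when the export is run repeatedly (for example from a shell profile), a slightly more careful sketch checks for the entry first:

```shell
# Append /usr/local/lib only if it is not already in LD_LIBRARY_PATH
case ":${LD_LIBRARY_PATH}:" in
  *:/usr/local/lib:*) ;;  # already present, nothing to do
  *) export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}/usr/local/lib" ;;
esac
echo "$LD_LIBRARY_PATH"
```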