R2Inference/Getting started/Building the library

<noinclude>
{{R2Inference/Head|previous=Getting started/Getting the code|next=Supported backends|metakeywords=R2Inference library,R2Inference library dependencies,Installing R2Inference library,verifying R2Inference library}}
</noinclude>

== R2Inference dependencies ==
R2Inference has the following dependencies:

* pkg-config
* cpputest
* doxygen

Many backends also have these common dependencies:
* git
* curl
* unzip
Also, R2Inference makes use of the Meson build system.


In Debian-based systems, you can install the dependencies with the following command:

<syntaxhighlight lang='bash'>
sudo apt-get install -y python3 python3-pip python3-setuptools python3-wheel ninja-build pkg-config libcpputest-dev doxygen git curl unzip
</syntaxhighlight>
 
Then, use '''pip3''' to install the latest version of '''Meson''' directly from its repository.
<syntaxhighlight lang=bash>
sudo -H pip3 install git+https://github.com/mesonbuild/meson.git
</syntaxhighlight>
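
You can quickly confirm that both build tools are available before continuing (the exact version numbers will vary with your setup):

<syntaxhighlight lang='bash'>
# Verify that Meson and Ninja are installed and on the PATH
meson --version
ninja --version
</syntaxhighlight>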


You need to install the '''API''' for at least one of our [[R2Inference/Supported_backends|supported backends]] in order to build R2Inference. Follow these links for instructions on how to install your preferred backend:


*[[R2Inference/Supported_backends/TensorFlow#Installation | TensorFlow installation instructions]]
*[[R2Inference/Supported_backends/TensorFlow-Lite#Installation | TensorFlow-Lite installation instructions]]
*[[R2Inference/Supported_backends/TensorRT#Installation | TensorRT installation instructions]]
*[[R2Inference/Supported_backends/EdgeTPU#Installation | Edge TPU installation instructions]]
*[[R2Inference/Supported_backends/ONNXRT#Installation | ONNXRT installation instructions]]
*[[R2Inference/Supported_backends/ONNXRT ACL#Installation | ONNXRT ACL installation instructions]]
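
After installing a backend, you can optionally confirm that its runtime library is visible to the linker. As an illustration for the TensorFlow backend (which provides the libtensorflow C library that R2Inference links against):

<syntaxhighlight lang='bash'>
# Check whether the dynamic linker can find the TensorFlow C library
ldconfig -p | grep libtensorflow
</syntaxhighlight>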


==Installing R2Inference library==
=== Linux ===


These instructions have been tested on:

* x86
* ARM64

To build and install R2Inference, run the commands shown below, choosing the appropriate configuration options from Table 1:

<html>
  <center>
  <table class='wikitable'>
    <tr>
      <th>Configure Option</th>
      <th>Description</th>
    </tr>
    <tr>
      <td>-Denable-coral=true</td>
      <td>Compile the library with Coral Edge TPU backend support</td>
    </tr>
    <tr>
      <td>-Denable-tensorflow=true</td>
      <td>Compile the library with TensorFlow backend support</td>
    </tr>
    <tr>
      <td>-Denable-tflite=true</td>
      <td>Compile the library with TensorFlow Lite backend support</td>
    </tr>
    <tr>
      <td>-Denable-tensorrt=true</td>
      <td>Compile the library with TensorRT backend support</td>
    </tr>
    <tr>
      <td>-Denable-onnxrt=true</td>
      <td>Compile the library with ONNXRT backend support</td>
    </tr>
    <tr>
      <td>-Denable-onnxrt-acl=true</td>
      <td>Compile the library with ONNXRT backend with Arm Compute Library (ACL) support</td>
    </tr>
    <tr>
      <td>-Denable-onnxrt-openvino=true</td>
      <td>Compile the library with ONNXRT backend with OpenVINO support</td>
    </tr>
    <caption>Table 1. R2Inference configuration options</caption>
  </table>
  </center>
</html>


<br>
{{Ambox
|type=notice
|small=left
|issue='''If you use the ONNXRT backend (or any of its [[R2Inference/Supported_backends/ONNXRT#Supported_execution_providers|execution providers]]), you need to add the following flags to the build configuration:'''
|style=width:unset;
}}
<syntaxhighlight lang='bash'>
# NOTE:
# These exports are only needed if you are building the ONNXRT backend.
# They are NOT necessary if you are using any of the other backends.
export ONNXRUNTIMEPATH=/PATH/ONNXRUNTIME/SRC/include/onnxruntime/
export CPPFLAGS="-I${ONNXRUNTIMEPATH}"
</syntaxhighlight>


<br>
{{Ambox
|type=notice
|small=left
|issue='''The Edge TPU backend depends on the TensorFlow-Lite backend, so you need to enable both when building for Edge TPU. If you use the TensorFlow-Lite backend (with or without the Edge TPU), add the following flags to the build configuration:'''
|style=width:unset;
}}
<syntaxhighlight lang='bash'>
# NOTE:
# These exports are only needed if you are using the Edge TPU or TensorFlow-Lite backends.
# They are NOT necessary if you are using any of the other backends.
# Note: the gen/linux_aarch64 path corresponds to an ARM64 build of TensorFlow Lite;
# your gen/ directory name may differ depending on the target you built for.
export TENSORFLOW_PATH='<path-to-tensorflow>'
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include -L${TENSORFLOW_PATH}/tensorflow/lite/tools/make/gen/linux_aarch64/lib"
</syntaxhighlight>
 
<syntaxhighlight lang='bash'>
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
meson build $OPTIONS # Choose the appropriate configuration options from Table 1
ninja -C build # Compile the project
ninja -C build test # Run tests
sudo ninja -C build install # Install the library
</syntaxhighlight>
'''Note''': If you are building R2Inference on the Coral Dev Kit, consider using <code>ninja -C build -j 1</code> instead, to avoid the compilation being killed due to running out of memory.
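
For example, to enable the TensorFlow Lite and Coral Edge TPU backends (an illustrative combination; enable only the options from Table 1 whose backend APIs you actually installed), the build would look like this:

<syntaxhighlight lang='bash'>
# Illustrative configuration: TensorFlow Lite plus Coral Edge TPU support
meson build -Denable-tflite=true -Denable-coral=true
ninja -C build
sudo ninja -C build install
</syntaxhighlight>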




=== Yocto ===

R2Inference is available in RidgeRun's meta-layer; check the recipes in the meta-ridgerun repository. Currently, only i.MX8 platforms are supported with Yocto.

First, create a Yocto environment for i.MX8; the dedicated i.MX8 Yocto guide on this wiki has more information on setting up the environment.

In your Yocto sources folder, run the following command:

<syntaxhighlight lang='bash'>
git clone https://github.com/RidgeRun/meta-ridgerun.git
</syntaxhighlight>

Enable RidgeRun's meta-layer in your conf/bblayers.conf file by adding the following line:

<syntaxhighlight lang='bash'>
${BSPDIR}/sources/meta-ridgerun \
</syntaxhighlight>

Enable Prebuilt-TensorFlow, R2Inference, and GstInference in your conf/local.conf:

<syntaxhighlight lang='bash'>
IMAGE_INSTALL_append = "prebuilt-tensorflow r2inference"
</syntaxhighlight>

Finally, build your desired image; the previous steps added R2Inference and its requirements to your Yocto image.
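
The image build step itself is the usual bitbake invocation; for example (the image name below is only an illustration and depends on your BSP):

<syntaxhighlight lang='bash'>
# Build the target image; replace core-image-base with the image your BSP provides
bitbake core-image-base
</syntaxhighlight>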

==Verify==

You can verify the library with a simple application, r2i_verify.cc:

<syntaxhighlight lang='cpp'>
#include <iostream>
#include <r2i/r2i.h>

void PrintFramework (r2i::FrameworkMeta &meta) {
  std::cout << "Name        : " << meta.name << std::endl;
  std::cout << "Description : " << meta.description << std::endl;
  std::cout << "Version     : " << meta.version << std::endl;
  std::cout << "---" << std::endl;
}

int main (int argc, char *argv[]) {
  r2i::RuntimeError error;

  std::cout << "Backends supported by your system:" << std::endl;
  std::cout << "==================================" << std::endl;

  for (auto &meta : r2i::IFrameworkFactory::List (error)) {
    PrintFramework (meta);
  }

  return 0;
}
</syntaxhighlight>

You may build this example by running:

<syntaxhighlight lang='bash'>
g++ r2i_verify.cc `pkg-config --cflags --libs r2inference-0.0` -std=c++11 -o r2i_verify
</syntaxhighlight>

You can also check our examples page to get the examples included with the library running.
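
If pkg-config cannot find the module, the library was most likely installed under /usr/local, which is not always on the default search path. A quick check (the paths below assume the default install prefix and may differ on your system):

<syntaxhighlight lang='bash'>
# Confirm that pkg-config can locate the R2Inference module
pkg-config --modversion r2inference-0.0

# If it is not found, point pkg-config at the install location
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
</syntaxhighlight>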

== Troubleshooting ==

<syntaxhighlight lang='bash'>
configure: *** checking feature: tensorflow ***
checking for TF_Version in -ltensorflow... no
configure: error: Couldn't find tensorflow
[AUTOGEN][11:46:38][ERROR]	Failed to run configure
</syntaxhighlight>

This error means that the /usr/local directory has not been included in your system library paths. Export LD_LIBRARY_PATH appending the /usr/local location:

<syntaxhighlight lang='bash'>
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
</syntaxhighlight>
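
Alternatively, for a persistent fix on Debian-based systems (the configuration file name below is arbitrary), you can register /usr/local/lib with the dynamic linker instead of exporting the variable in every shell:

<syntaxhighlight lang='bash'>
# Add /usr/local/lib to the dynamic linker search path and refresh the cache
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/usr-local.conf
sudo ldconfig
</syntaxhighlight>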

== Known issues ==

* If GstInference and R2Inference were built on Ubuntu 16.04 with the TensorFlow or TensorFlow Lite backend enabled, the build may fail or the resulting libraries may produce segmentation faults when using one of these backends.




<noinclude>
{{R2Inference/Foot|Getting started/Getting the code|Supported backends}}
</noinclude>