R2Inference/Getting started/Building the library
Make sure you also check R2Inference's companion project: GstInference
Dependencies
R2Inference has the following dependencies:
- autoconf
- automake
- pkg-config
- libtool
- gtk-doc-tools
- libcpputest-dev
- doxygen
On Ubuntu 16.04-based systems, you can install the dependencies with the following command:
sudo apt-get install -y autoconf automake pkg-config libtool gtk-doc-tools libcpputest-dev doxygen
You need to install the C API for at least one of the supported backends in order to build R2Inference. Refer to the Supported backends page for instructions on how to install your preferred backend.
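For example, a minimal sketch of installing the TensorFlow C API, assuming the prebuilt libtensorflow tarball works for your setup (the version and CPU variant below are only examples; pick whatever your backend requires):

# Download and unpack the prebuilt TensorFlow C library into /usr/local.
# Version 1.15.0 is illustrative only; adjust to the version you need.
wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-1.15.0.tar.gz
sudo tar -C /usr/local -xzf libtensorflow-cpu-linux-x86_64-1.15.0.tar.gz
sudo ldconfig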
Install library
Linux
To build and install R2Inference, run the commands shown after the tables below, passing the configure options that match your backend and platform:
| Configure Option | Description |
|---|---|
| --enable-ncsdk | Compile the library with NCSDK backend support |
| --enable-tensorflow | Compile the library with TensorFlow backend support |
If you are going to use R2Inference in combination with GstInference, also add the following options:
| System | Configure Option |
|---|---|
| Ubuntu 64 bits | --prefix /usr/ --libdir /usr/lib/x86_64-linux-gnu/ |
| RidgeRun's Embedded FS | --prefix /usr/ |
| Jetson Nano / TX2 / Xavier | --prefix /usr/ --libdir /usr/lib/aarch64-linux-gnu/ |
| i.MX8 | --prefix /usr/ --libdir /usr/lib/aarch64-linux-gnu/ |
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
./autogen.sh $OPTIONS # CHOOSE THE APPROPRIATE CONFIGURATION FROM THE TABLES ABOVE
make
make check
sudo make install
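For example, on 64-bit Ubuntu with the TensorFlow backend, the placeholder $OPTIONS above would expand to something like this (OPTIONS is just an illustrative shell variable, not a flag defined by the script):

OPTIONS="--enable-tensorflow --prefix /usr/ --libdir /usr/lib/x86_64-linux-gnu/"
./autogen.sh $OPTIONS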
Yocto
R2Inference is available in RidgeRun's meta-layer; please check the recipes in the meta-ridgerun repository. Currently, only i.MX8 platforms are supported with Yocto.
First, create a Yocto environment for i.MX8; RidgeRun's dedicated i.MX8 Yocto guide has more information on how to set up such an environment.
In your Yocto sources folder, run the following command:
git clone https://github.com/RidgeRun/meta-ridgerun.git
Enable RidgeRun's meta-layer in your conf/bblayers.conf file by adding the following line.
${BSPDIR}/sources/meta-ridgerun \
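For context, a minimal sketch of how that entry sits inside conf/bblayers.conf (the other layer path shown is a placeholder for whatever your BSP already lists):

BBLAYERS = " \
  ${BSPDIR}/sources/poky/meta \
  ${BSPDIR}/sources/meta-ridgerun \
"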
Enable Prebuilt-TensorFlow and R2Inference in your conf/local.conf:
IMAGE_INSTALL_append = "prebuilt-tensorflow r2inference"
Finally, build your desired image. The previous steps added R2Inference and its requirements to your Yocto image.
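For instance, assuming your BSP provides the standard core-image-base image (substitute whichever image you actually use):

# Build the image; R2Inference and its dependencies are pulled in
# through the IMAGE_INSTALL_append line above.
bitbake core-image-base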
Verify
You can verify the library with a simple application:
r2i_verify.cc
#include <iostream>
#include <r2i/r2i.h>

/* Print the metadata of a single framework (backend) */
void PrintFramework (r2i::FrameworkMeta &meta) {
  std::cout << "Name : " << meta.name << std::endl;
  std::cout << "Description : " << meta.description << std::endl;
  std::cout << "Version : " << meta.version << std::endl;
  std::cout << "---" << std::endl;
}

int main (int argc, char *argv[]) {
  r2i::RuntimeError error;

  std::cout << "Backends supported by your system:" << std::endl;
  std::cout << "==================================" << std::endl;

  /* List every backend this R2Inference build supports */
  for (auto &meta : r2i::IFrameworkFactory::List (error)) {
    PrintFramework (meta);
  }

  return 0;
}
You may build this example by running:
g++ r2i_verify.cc `pkg-config --cflags --libs r2inference-0.0` -std=c++11 -o r2i_verify
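Then run the binary; as the loop in the code above shows, it prints one metadata entry per backend that was enabled at configure time:

./r2i_verify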
You can also check our Examples page to get the examples included with the library up and running.
Troubleshooting
- If, after following the TensorFlow installation instructions, you get the following error during configuration:
configure: *** checking feature: tensorflow ***
checking for TF_Version in -ltensorflow... no
configure: error: Couldn't find tensorflow
[AUTOGEN][11:46:38][ERROR] Failed to run configure
This means the /usr/local directory has not been included in your system library paths. Export LD_LIBRARY_PATH, appending the /usr/local location:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
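If you prefer a persistent fix, a common alternative on distributions that use /etc/ld.so.conf.d is to register the path with the dynamic linker cache (the file name below is arbitrary):

# Add /usr/local/lib to the linker search path and refresh the cache.
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/usr-local.conf
sudo ldconfig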