R2Inference/Getting started/Building the library

Previous: Getting started/Getting the code | Index | Next: Supported backends




Dependencies

R2Inference has the following dependencies:

  • autoconf
  • automake
  • pkg-config
  • libtool
  • gtk-doc-tools
  • cpputest (used by make check)
  • doxygen

On Ubuntu 16.04-based systems, you may install all of these dependencies with the following command:

sudo apt-get install -y autoconf automake pkg-config libtool gtk-doc-tools libcpputest-dev doxygen

You need to install the C API for at least one of the supported backends (currently NCSDK or TensorFlow) in order to build R2Inference. See the Supported backends section for instructions on how to install your preferred backend.
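
If you chose the TensorFlow backend, a quick way to confirm that the TensorFlow C API is installed and linkable is the minimal sketch below (tf_check.cc is just an illustrative name, not part of R2Inference):

tf_check.cc

// Sanity check: print the version reported by the TensorFlow C library
#include <iostream>
#include <tensorflow/c/c_api.h>

int main () {
  std::cout << "TensorFlow C library version: " << TF_Version () << std::endl;
  return 0;
}

Compile and run it with: g++ tf_check.cc -ltensorflow -o tf_check && ./tf_check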

Install library

Linux

To build and install r2inference, choose the appropriate configure options from Table 1 and run the following commands:

Configure Option      Description
--enable-ncsdk        Compile the library with NCSDK backend support
--enable-tensorflow   Compile the library with TensorFlow backend support

Table 1. R2Inference configuration options


git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
./autogen.sh $OPTIONS # CHOOSE THE APPROPRIATE CONFIGURATION FROM THE TABLE ABOVE
make
make check
sudo make install
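
For example, a build configured with only TensorFlow backend support (assuming the TensorFlow C API is already installed) would replace $OPTIONS like this:

./autogen.sh --enable-tensorflow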

Yocto

R2Inference is available in RidgeRun's meta-layer; please check our recipes at https://github.com/RidgeRun/meta-ridgerun. Currently, only i.MX8 platforms are supported with Yocto.

First, create a Yocto environment for i.MX8. The i.MX8 Yocto guide at https://developer.ridgerun.com/wiki/index.php?title=IMX8/iMX8MEVK/Yocto/Building_Yocto has more information on setting up a Yocto environment.

In your Yocto sources folder, run the following command:

git clone https://github.com/RidgeRun/meta-ridgerun.git

Enable RidgeRun's meta-layer by adding the following line to the BBLAYERS variable in your conf/bblayers.conf file:

  ${BSPDIR}/sources/meta-ridgerun \
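
The resulting BBLAYERS block looks something like the sketch below; the exact layer list varies per BSP, and the poky path is only shown for illustration:

  BBLAYERS = " \
    ${BSPDIR}/sources/poky/meta \
    ${BSPDIR}/sources/meta-ridgerun \
  "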

Enable Prebuilt-TensorFlow and R2Inference in your conf/local.conf:

  IMAGE_INSTALL_append = " prebuilt-tensorflow r2inference"

Finally, build your desired image; the previous steps add R2Inference and its requirements into your Yocto image.
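
For example, assuming a generic image target (your i.MX8 BSP may provide its own image recipes):

bitbake core-image-minimal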

Verify

You can verify the library with a simple application:

r2i_verify.cc

#include <iostream>
#include <r2i/r2i.h>

/* Print the metadata describing a single backend */
void PrintFramework (r2i::FrameworkMeta &meta) {
  std::cout << "Name        : " << meta.name << std::endl;
  std::cout << "Description : " << meta.description << std::endl;
  std::cout << "Version     : " << meta.version << std::endl;
  std::cout << "---" << std::endl;
}

int main (int argc, char *argv[]) {
  r2i::RuntimeError error;

  std::cout << "Backends supported by your system:" << std::endl;
  std::cout << "==================================" << std::endl;

  /* List every backend the library was built with */
  for (auto &meta : r2i::IFrameworkFactory::List (error)) {
    PrintFramework (meta);
  }

  return 0;
}

You may build this example by running:

g++ r2i_verify.cc `pkg-config --cflags --libs r2inference-0.0` -std=c++11 -o r2i_verify
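
If the library was installed correctly, running ./r2i_verify prints one block per enabled backend in the format produced by PrintFramework above. The exact strings depend on which backends you compiled in; the values below are placeholders:

Backends supported by your system:
==================================
Name        : <backend name>
Description : <backend description>
Version     : <backend version>
---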

You can also check our examples page to run the examples included with the library.

Troubleshooting

configure: *** checking feature: tensorflow ***
checking for TF_Version in -ltensorflow... no
configure: error: Couldn't find tensorflow
[AUTOGEN][11:46:38][ERROR]	Failed to run configure

This error means that the /usr/local directory has not been included in your system library paths. Export LD_LIBRARY_PATH, appending the /usr/local/lib location:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
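
Alternatively, on a standard glibc-based system you can register /usr/local/lib with the dynamic linker permanently (a sketch; the .conf file name is arbitrary):

echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/usr-local.conf
sudo ldconfig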




Previous: Getting started/Getting the code | Index | Next: Supported backends