R2Inference

From RidgeRun Developer Wiki
<seo title="R2Inference | C/C++ abstraction layer for machine learning frameworks | RidgeRun" titlemode="replace" keywords="deep learning inference framework, deep learning inference, gstreamer deep learning inference, deep learning, inference framework,inference,intuitive API, Xavier inference, Jetson inference, nvidia tx1 inference, nvida tx2 inference, C/C++ abstraction layer for machine learning frameworks, abstraction layer for machine learning frameworks, framework-independent inference library" description="R2Inference is an open source project by RidgeRun that serves as an abstraction layer in C/C++ for a variety of machine learning frameworks."></seo>
{{Ambox
|type=notice
|issue=Make sure you also check R2Inference's companion project: [[GstInference]]
}}


{{DISPLAYTITLE:R2Inference|noerror}}

Revision as of 17:35, 5 February 2019




A framework-independent inference library.

R2Inference is an open source project by RidgeRun that serves as a C/C++ abstraction layer for a variety of machine learning frameworks. As such, a single C/C++ application may work with a Caffe or a TensorFlow model, for example. This is especially useful for hybrid solutions, where multiple models need to be executed. R2Inference may run one model on the DLA and another on the CPU, for instance.

Get started with R2Inference by following the guide below.

For more advanced and custom support, please contact RidgeRun support at support@ridgerun.com.




Index | Next: Introduction