R2Inference
Make sure you also check R2Inference's companion project: GstInference.
R2Inference: a framework-independent inference library.
R2Inference is an open-source project by RidgeRun that serves as an abstraction layer in C/C++ for a variety of machine learning frameworks. As such, a single C/C++ application may work with a Caffe or TensorFlow model, for example. This is especially useful for hybrid solutions where inference must be run on multiple models: R2Inference may be able to execute one model on the DLA and another on the CPU, for instance.

R2Inference is a Coral-compatible project.

[[File:Works with coral v2.png|800px|frameless|center]]
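As a rough illustration of this abstraction-layer idea, the minimal sketch below shows how an application can code against a single inference interface while the framework-specific backend is selected at runtime. All names here (InferenceEngine, CaffeEngine, TensorFlowEngine, MakeEngine) are hypothetical placeholders, not R2Inference's actual API; refer to the getting-started documentation for the real interfaces.

<syntaxhighlight lang="cpp">
// Hypothetical sketch of a framework-independent inference layer.
// These names are illustrative only, not R2Inference's actual API.
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Common interface that every backend implements.
class InferenceEngine {
 public:
  virtual ~InferenceEngine() = default;
  virtual void LoadModel(const std::string &path) = 0;
  virtual std::vector<float> Predict(const std::vector<float> &input) = 0;
};

// A backend wrapping Caffe would go here.
class CaffeEngine : public InferenceEngine {
 public:
  void LoadModel(const std::string &path) override {
    model_path_ = path;  // A real backend would load the Caffe model here.
  }
  std::vector<float> Predict(const std::vector<float> &input) override {
    // Placeholder: a real backend would run the Caffe network on `input`.
    return std::vector<float>(input.size(), 0.0f);
  }
 private:
  std::string model_path_;
};

// A backend wrapping TensorFlow would go here.
class TensorFlowEngine : public InferenceEngine {
 public:
  void LoadModel(const std::string &path) override {
    model_path_ = path;  // A real backend would load the TF graph here.
  }
  std::vector<float> Predict(const std::vector<float> &input) override {
    // Placeholder: a real backend would run the TF graph on `input`.
    return std::vector<float>(input.size(), 0.0f);
  }
 private:
  std::string model_path_;
};

// The application picks a backend by name; everything past this point
// is framework-independent.
std::unique_ptr<InferenceEngine> MakeEngine(const std::string &backend) {
  if (backend == "caffe") return std::make_unique<CaffeEngine>();
  if (backend == "tensorflow") return std::make_unique<TensorFlowEngine>();
  throw std::invalid_argument("unknown backend: " + backend);
}

int main() {
  auto engine = MakeEngine("tensorflow");
  engine->LoadModel("model.pb");
  auto output = engine->Predict(std::vector<float>(224 * 224 * 3, 0.0f));
  return output.empty() ? 1 : 0;
}
</syntaxhighlight>

In the same spirit, a hybrid application could instantiate one engine per model and map each to a different accelerator, such as the DLA or the CPU.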
Get started with R2Inference by clicking the button below!