
R2Inference

<seo title="R2Inference | C/C++ abstraction layer for machine learning frameworks | RidgeRun" titlemode="replace" keywords="deep learning inference framework, deep learning inference, gstreamer deep learning inference, deep learning, inference framework, inference, intuitive API, Xavier inference, Jetson inference, NVIDIA tx1 inference, NVIDIA TX2 inference, C/C++ abstraction layer for machine learning frameworks, abstraction layer for machine learning frameworks, framework-independent inference library" description="R2Inference is an open-source project by RidgeRun that serves as an abstraction layer in C/C++ for a variety of machine learning frameworks."></seo>


<noinclude>
| width="100%" valign="top" |


[https://github.com/RidgeRun/r2inference R2Inference] is an open-source project by RidgeRun that serves as an abstraction layer in C/C++ for a variety of machine learning frameworks.
As such, a single C/C++ application may work with a Caffe or TensorFlow model, for example.
This is especially useful for hybrid solutions that require inference with multiple models; for instance, R2Inference may execute one model on the DLA and another on the CPU. A minimal usage sketch is shown below.
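The snippet below is a minimal sketch of what a framework-agnostic application can look like with R2Inference. It assumes the <code>r2i</code> factory, loader, and engine interfaces described in the R2Inference user manual (<code>IFrameworkFactory</code>, <code>ILoader</code>, <code>IEngine</code>, <code>IFrame</code>); exact headers, class names, and signatures may vary between releases, so treat it as illustrative rather than a verbatim example.

<syntaxhighlight lang="cpp">
/* Illustrative sketch only: interface names follow the r2i classes documented
 * in the R2Inference manual and may differ slightly in your release. */
#include <memory>
#include <r2i/r2i.h>

int main () {
  r2i::RuntimeError error;

  /* Select the backend at run time. Switching frameworks is, in principle,
   * a matter of changing this enum value and the model file. */
  auto factory = r2i::IFrameworkFactory::MakeFactory (
      r2i::FrameworkCode::TENSORFLOW, error);

  /* Load the model through the framework-independent loader interface. */
  auto loader = factory->MakeLoader (error);
  auto model = loader->Load ("graph.pb", error);

  /* Create an engine, attach the model and start it. */
  auto engine = factory->MakeEngine (error);
  error = engine->SetModel (model);
  error = engine->Start ();

  /* Wrap the input image in a frame and run inference.
   * image_data, width and height are placeholders for your own input. */
  auto frame = factory->MakeFrame (error);
  /* error = frame->Configure (image_data, width, height,
   *                           r2i::ImageFormat::Id::RGB); */
  auto prediction = engine->Predict (frame, error);

  error = engine->Stop ();
  return error.IsError () ? 1 : 0;
}
</syntaxhighlight>

Because the backend is chosen through a single factory call, the same application can, in principle, be retargeted to another supported framework by changing only the <code>FrameworkCode</code> value and the model file.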


Get started with R2Inference by clicking the button below!