Index | Next: Introduction




R2Inference

A framework-independent inference library.

R2Inference is an open-source project by RidgeRun that provides a C/C++ abstraction layer over a variety of machine learning frameworks. As such, a single C/C++ application may work with a Caffe or a TensorFlow model, for example. This is especially useful for hybrid solutions that run inference on multiple models: R2Inference may, for instance, execute one model on the Deep Learning Accelerator (DLA) and another on the CPU.
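
To make the abstraction concrete, below is a minimal sketch of loading a model and running a single prediction through the r2i factory API. It is illustrative only: the class and method names follow the r2i API described later in this wiki, but the model path, input dimensions, and image buffer are placeholder assumptions; consult the Examples and API reference sections for authoritative usage.

<syntaxhighlight lang="cpp">
#include <r2i/r2i.h>

#include <iostream>
#include <memory>
#include <vector>

int main () {
  r2i::RuntimeError error;

  /* Select a backend; switching frameworks is a one-line change */
  auto factory = r2i::IFrameworkFactory::MakeFactory (
      r2i::FrameworkCode::TENSORFLOW, error);

  /* Load the trained model (placeholder path) */
  auto loader = factory->MakeLoader (error);
  auto model = loader->Load ("graph.pb", error);

  /* Create and start the inference engine */
  auto engine = factory->MakeEngine (error);
  error = engine->SetModel (model);
  error = engine->Start ();

  /* Wrap a raw RGB buffer as a frame (placeholder data and size) */
  std::vector<float> image_data (224 * 224 * 3, 0.0f);
  auto frame = factory->MakeFrame (error);
  error = frame->Configure (image_data.data (), 224, 224,
      r2i::ImageFormat::Id::RGB);

  /* Run inference and inspect the first output value */
  auto prediction = engine->Predict (frame, error);
  std::cout << prediction->At (0, error) << std::endl;

  error = engine->Stop ();
  return 0;
}
</syntaxhighlight>

Error handling is omitted for brevity; each call reports its status through the r2i::RuntimeError object, which real applications should check after every step.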

R2Inference is a Coral-compatible project.

[[File:Works with coral v2.png|800px|frameless|center]]

Get started with R2Inference by clicking the button below!



Index | Next: Introduction