R2Inference
<seo title="R2Inference | C/C++ abstraction layer for machine learning frameworks | RidgeRun" titlemode="replace" metakeywords="deep learning inference framework, deep learning inference, gstreamer deep learning inference, deep learning, inference framework, inference, intuitive API, Xavier inference, Jetson inference, NVIDIA tx1 inference, NVIDIA TX2 inference, C/C++ abstraction layer for machine learning frameworks, abstraction layer for machine learning frameworks, framework-independent inference library" metadescription="R2Inference is an open-source project by RidgeRun that serves as an abstraction layer in C/C++ for a variety of machine learning frameworks."></seo>
<noinclude>
{{R2Inference/Foot||Introduction}}
</noinclude>
{{Ambox
|type=notice
|issue=Make sure you also check R2Inference's companion project: [[GstInference]]
}}
<!-----
{{DISPLAYTITLE:R2Inference|noerror}}
---->
{| style="border-style: solid; border-width: 1px; margin: 10px; margin-right: 200px"
| width="100%" valign="top" |
[https://github.com/RidgeRun/r2inference R2Inference] is an open-source project by RidgeRun that serves as an abstraction layer in C/C++ for a variety of machine learning frameworks.
As such, a single C/C++ application may work with a Caffe or TensorFlow model, for example.
This is especially useful for hybrid solutions, where inference must run on multiple models. R2Inference may execute one model on the DLA and another on the CPU, for instance.
R2Inference is a Coral-compatible project.
[[File:Works with coral v2.png|800px|frameless|center]]
Get started with R2Inference by clicking the button below!
[[File:xavier_get_started_here.png|400px|frameless|center|link=R2Inference/Introduction]]
<center>
{{Sponsor Button}}
</center>
|-
| colspan="2"|
{{ContactUs}}
|}
Latest revision as of 18:13, 7 March 2023