Revision as of 20:31, 10 December 2018
Make sure you also check GstInference's companion project: R2Inference
Deep Learning has revolutionized classic computer vision techniques to enable even more intelligent and autonomous systems. Multimedia frameworks, such as GStreamer, are a natural complement to automatic recognition and classification systems. GstInference is an ongoing open-source project from RidgeRun Engineering that allows easy integration of deep learning networks into your existing GStreamer pipeline.
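As a sketch of what such an integration can look like, the hypothetical pipeline below feeds camera frames to an InceptionV1 classification element while also displaying the video. The element and property names are assumptions based on the architectures this wiki lists; verify the actual names on your installation with gst-inspect-1.0.

```shell
# Hypothetical GstInference pipeline: capture from a camera, tee the stream,
# run one branch through an InceptionV1 inference element, and show the other.
# Element and property names are illustrative; confirm with gst-inspect-1.0.
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
  t. ! queue ! inceptionv1 name=net model-location=graph_inceptionv1.pb ! fakesink \
  t. ! queue ! videoconvert ! autovideosink
```

The tee/queue pattern is standard GStreamer: it lets inference run on one branch without stalling the display branch.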
General Concepts
Software Stack
The following diagram shows how GstInference interacts with other software and hardware modules.

[[File:gst-inference-sw-stack.png|thumb|center|GstInference Software Stack]]
- HW Modules
- A deep learning model can be run on different hardware units. In the most general case, the general-purpose CPU may be used; although performance won't be great, this is ideal for quick prototyping without requiring specialized hardware.
- One of the most popular processors for model inference is the GPGPU. Platforms such as CUDA and OpenCL harness the concurrent processing power of these units to execute deep learning models very efficiently.
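In GstInference, the choice between hardware units is typically made by selecting an inference backend on the element rather than rewriting the pipeline. A hedged example, assuming an `inceptionv1` element that exposes a `backend` property (names and available backends vary by release and build; confirm with gst-inspect-1.0):

```shell
# Inspect a (hypothetical) inference element to see its properties,
# including which backends this build supports.
# Element and property names are assumptions; confirm with gst-inspect-1.0.
gst-inspect-1.0 inceptionv1

# Switching hardware is then a one-property change on the same pipeline:
gst-launch-1.0 videotestsrc ! videoconvert ! \
  inceptionv1 model-location=graph_inceptionv1.pb backend=tensorflow ! fakesink
```

Keeping the hardware choice behind a single property is what lets the same pipeline move from a prototyping CPU to a GPGPU without structural changes.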