Coral from Google/GstInference/Introduction

Deep Learning (DL) has revolutionized classic computer vision techniques, enabling even more intelligent and autonomous systems. A multimedia framework such as GStreamer eases the software development burden of complex embedded visual Deep Learning applications: the open-source GStreamer audio/video streaming framework separates the complexities of handling streaming video from the inference models that process the individual frames. GstInference is an open-source GStreamer project sponsored by RidgeRun that allows easy integration of deep learning networks into your video streaming application.

Example of GstInference running on the Coral. The model used was TinyYolo V2. Video taken from: https://pixabay.com/videos/road-autobahn-motorway-highway-11018/

GstInference and R2Inference are supported on the Google Coral. To install them, follow these guides:

* [[GstInference/Getting_started/Building_the_plugin|Building GstInference]]

After the installation is completed, you can generate GStreamer pipelines for Coral using different GstInference elements in this [[GstInference/Example_pipelines_with_hierarchical_metadata|tool]].
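As a reference, the pipelines produced by that tool follow the pattern sketched below: the camera stream is teed into the inference element's model pad and bypass pad, and the detections are drawn on the bypass branch. This is a minimal sketch rather than a verified pipeline; the <code>tinyyolov2</code> and <code>inferenceoverlay</code> element names, the <code>backend=edgetpu</code> property, and the model, labels, and camera paths are assumptions that you should adapt to your setup and check against the example pipelines tool above.

<pre>
# Sketch of a GstInference detection pipeline on the Coral (Edge TPU backend).
# CAMERA, MODEL_LOCATION, and LABELS are placeholders for your own device and files.
CAMERA=/dev/video0
MODEL_LOCATION=graph_tinyyolov2_edgetpu.tflite
LABELS=labels.txt

gst-launch-1.0 \
  v4l2src device=$CAMERA ! videoconvert ! tee name=t \
  t. ! queue ! videoscale ! net.sink_model \
  t. ! queue ! net.sink_bypass \
  tinyyolov2 name=net backend=edgetpu model-location=$MODEL_LOCATION labels="$(cat $LABELS)" \
  net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! autovideosink sync=false
</pre>

In this layout the <code>sink_model</code> branch feeds scaled frames to the network, while the <code>sink_bypass</code> branch carries the original frames that the overlay element annotates with the resulting inference metadata.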


Previous: GstInference Index Next: GstInference/Why_use_GstInference?