
DeepStream Reference Designs/Project Architecture/High Level Design

=== Custom Deep Learning Models ===


It should be mentioned that the inference logic is also encapsulated in a separate, independent module, called the engine, with a well-defined interface. This module bases its operation on the [https://developer.nvidia.com/deepstream-sdk DeepStream SDK] and can be configured to use different inference models according to the application being developed. So, for instance, in a parking lot system, you could use a cascade of three different networks:
 
* A car detector
* A license plate detector
* An OCR (optical character recognition) system  
 
This configuration will vary from application to application. A shoplifting detection system will probably implement a person detector along with a behavior analysis model. A speed limit enforcer will likely use a car detector and a tracker. A neuromarketing-powered billboard will use a face detector and a gaze tracker. As you can see, having the inference logic in an independent module allows you to customize your deep learning pipeline extensively without modifying the rest of the architecture.
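
The following is a minimal sketch of what such an engine module could look like, assuming a Python/GStreamer environment with the DeepStream plugins installed. The <code>InferenceEngine</code> class, the pipeline layout, the video URI, and the model configuration file names are illustrative assumptions, not the project's actual engine interface.

<syntaxhighlight lang="python">
#!/usr/bin/env python3
# Illustrative sketch only: the class, pipeline layout, and config file paths
# are assumptions for demonstration, not this project's actual engine interface.
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib


class InferenceEngine:
    """Hypothetical engine wrapping a DeepStream cascade of inference models."""

    def __init__(self, uri, model_configs):
        Gst.init(None)
        # One nvinfer element per model: the first acts as the primary detector,
        # the others are meant to run as secondary (cascaded) inference, which is
        # selected inside each model's DeepStream configuration file.
        gie_chain = " ! ".join(
            f"nvinfer config-file-path={cfg}" for cfg in model_configs
        )
        self.pipeline = Gst.parse_launch(
            f"uridecodebin uri={uri} ! m.sink_0 "
            "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
            f"{gie_chain} ! nvvideoconvert ! nvdsosd ! fakesink"
        )

    def run(self):
        self.pipeline.set_state(Gst.State.PLAYING)
        try:
            GLib.MainLoop().run()
        finally:
            self.pipeline.set_state(Gst.State.NULL)


if __name__ == "__main__":
    # Parking lot example: a cascade of car detector, license plate detector
    # and OCR. All file names below are hypothetical.
    engine = InferenceEngine(
        uri="file:///opt/videos/parking_lot.mp4",
        model_configs=[
            "car_detector_config.txt",
            "license_plate_detector_config.txt",
            "ocr_config.txt",
        ],
    )
    engine.run()
</syntaxhighlight>

Note how switching from the parking lot example to, say, a shoplifting detector would only require changing the list of model configuration files passed to the engine, while the rest of the pipeline and architecture stay untouched.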


=== Custom Inference Listener ===