GstInference/Supported backends/NCSDK
Latest revision as of 18:42, 7 December 2020
The Intel® Movidius™ Neural Compute SDK (Intel® Movidius™ NCSDK) enables the deployment of deep neural networks on compatible devices such as the Intel® Movidius™ Neural Compute Stick. The NCSDK includes a set of software tools to compile, profile, and validate DNNs (Deep Neural Networks), as well as C/C++ and Python APIs for application development.
To use the NCSDK on Gst-Inference, be sure to run the R2Inference configure script with the flag --enable-ncsdk and set the property backend=ncsdk on the Gst-Inference plugins.
Installation
You can install the NCSDK directly on a system running Linux, in a Docker container, on a virtual machine, or in a Python virtual environment. All the possible installation paths are documented in the Intel® Movidius™ NCSDK official installation guide.
We also provide an installation guide with troubleshooting on the Intel Movidius Installation RidgeRun wiki page.
Note: It is recommended to take the Docker container route for the NCSDK installation. Other routes may affect your Python environment, because the installer sometimes uninstalls and reinstalls Python and common packages such as NumPy or TensorFlow. The Docker installation is straightforward and does not affect your environment at all. Installation and Configuration with Docker has the steps to get started with Docker.
Enabling the backend
To enable NCSDK as a backend for GstInference you need to install R2Inference with NCSDK support. To do this, use the option --enable-ncsdk during R2Inference configure following this wiki.
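The configure step above can be sketched as follows. This is a minimal sketch assuming the autotools-based flow described in the R2Inference building guide; the clone URL and build steps may differ from your local setup.

```shell
# Hypothetical build layout: adjust the repository path to your setup.
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference

# Generate the configure script and enable the NCSDK backend
./autogen.sh
./configure --enable-ncsdk

# Build and install the library
make
sudo make install
```

After installation, GstInference plugins accept backend=ncsdk as a property value.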
Generating a graph
The GstInference NCSDK backend uses the same graphs as the NCSDK API. These graphs are specially compiled to run inference on a Neural Compute Stick (NCS). The NCSDK provides a tool (mvNCCompile) to generate NCS graphs from either a TensorFlow frozen model or a Caffe model and weights. For examples of how to generate a graph, please check the Generating a model for R2I section on the R2Inference wiki.
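A typical mvNCCompile invocation can be sketched as below. The file names (deploy.prototxt, weights.caffemodel, frozen_model.pb) and the node names passed to -in/-on are placeholders for illustration; replace them with the ones from your own model.

```shell
# Caffe: network description plus weights, compiled for 12 SHAVE cores
mvNCCompile deploy.prototxt -w weights.caffemodel -s 12 -in input -on prob -o graph_googlenet

# TensorFlow: frozen graph, naming the input and output nodes explicitly
mvNCCompile frozen_model.pb -s 12 -in input -on output -o graph_tf
```

The resulting graph file is what you pass to the model-location property of the GstInference plugins.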
Properties
Intel® Movidius™ Neural Compute SDK C API v2 and Intel® Movidius™ Neural Compute SDK Python API v2 have the full documentation of the C API and Python API. Gst-Inference uses only the C API, and R2Inference takes care of devices, graphs, models, and FIFOs. Because of this, we will only look at the options that you can change when using the C API through R2Inference.
The following syntax is used to change backend options on Gst-Inference plugins:
backend::<property>
For example, to change the NCSDK API log level of the googlenet plugin, run the pipeline like this:
gst-launch-1.0 \
googlenet name=net model-location=/root/r2inference/examples/r2i/ncsdk/graph_googlenet backend=ncsdk backend::log-level=1 \
videotestsrc ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
net.src_bypass ! fakesink
The backend::log-level=1 section of the pipeline sets the NC_RW_LOG_LEVEL option of the NCSDK C API to 1.
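To check which properties (including the backend property) a given GstInference element exposes on your installation, you can inspect it with the standard GStreamer tool. This is a sketch; the exact output format depends on your GStreamer and GstInference versions.

```shell
# Show the googlenet element's pads, properties, and signals
gst-inspect-1.0 googlenet

# Narrow the output to backend-related lines
gst-inspect-1.0 googlenet | grep -i backend
```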
To learn more about the NCSDK C API options, please check the NCSDK API wiki section on the R2Inference sub wiki.
Tools
The NCSDK installation includes some useful tools to analyze, optimize, and compile models. We will mention these tools here, but if you want some examples and a more complete description please check the NCSDK wiki page on the R2Inference sub wiki.
- mvNCCheck: Checks the validity of a Caffe or TensorFlow model on a neural compute device. The check is done by running inference on both the device and in software and then comparing the results to determine if the network passes or fails.
- mvNCCompile: Compiles network and weights files from a Caffe or TensorFlow model into a graph file that is compatible with the NCAPI.
- mvNCProfile: Compiles a network, runs it on a connected neural compute device, and outputs profiling info to the terminal and to an HTML file. The profiling data contains layer performance and execution time of the model. The HTML version of the report also contains a graphical representation of the model.
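The three tools above share the same command-line style as mvNCCompile. The invocations below are a sketch assuming a Caffe model; the file names are placeholders, and a compatible neural compute device must be plugged in for mvNCCheck and mvNCProfile to run.

```shell
# Validate the model: runs inference on the device and in software,
# then compares the results to report pass/fail
mvNCCheck deploy.prototxt -w weights.caffemodel -s 12 -in input -on prob

# Profile per-layer performance on the device; also writes an HTML report
# to the current directory
mvNCProfile deploy.prototxt -w weights.caffemodel -s 12
```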