GstInference/Benchmarks

<noinclude>
{{GstInference/Head|previous=Example Applications/Smart Lock|next=Model Zoo|keywords=GstInference gstreamer pipelines, Inference gstreamer pipelines, NCSDK Inference gstreamer pipelines using GoogLeNet, NCSDK Inference gstreamer pipelines using TinyYolo v2, NCSDK  Inference gstreamer pipelines using GoogLeNet x2  and TensorFlow backend}}
</noinclude>


The Jetson Xavier power modes used were 2 and 6 (more information: [https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fpower_management_jetson_xavier.html%23wwpID0E0OM0HA Supported Modes and Power Efficiency]).
*View current power mode:
<source lang="bash">
$ sudo /usr/sbin/nvpmodel -q
</source>
*Change current power mode:
<source lang="bash">
$ sudo /usr/sbin/nvpmodel -m x
</source>
where x is the power mode ID (e.g. 0, 1, 2, 3, 4, 5, or 6).
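
When scripting benchmarks, it can be useful to record which power mode was active for each run. As a sketch, the current mode name and ID can be extracted from the query output; the two-line output format assumed here (mode name on the first line, mode ID on the second) is based on typical <code>nvpmodel -q</code> behavior and may differ between L4T releases:
<source lang="bash">
# Sketch: extract the active power-mode name and ID from `nvpmodel -q` output.
# On a Jetson board, replace `sample_output` with the real query:
#   sample_output=$(sudo /usr/sbin/nvpmodel -q)
# The two-line format below is an assumption based on common nvpmodel output.
sample_output='NV Power Mode: MODE_30W_6CORE
2'
mode_name=$(printf '%s\n' "$sample_output" | sed -n 's/^NV Power Mode: //p')
mode_id=$(printf '%s\n' "$sample_output" | sed -n '2p')
echo "Current mode: $mode_name (ID $mode_id)"
</source>
Logging this line alongside each benchmark result makes it easy to tell which runs used mode 2 and which used mode 6.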


== Summary ==  