GstInference/Benchmarks

From RidgeRun Developer Wiki
Revision as of 21:54, 7 July 2020




Previous: Example Applications/DispTec | Index | Next: Model Zoo




GstInference Benchmarks

The following benchmarks were run with a 1920x1080@60 source video, using the base GStreamer pipeline and environment variables below:

$ VIDEO_FILE='video.mp4'
$ MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
$ INPUT_LAYER='input'
$ OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'

The environment variables were changed according to the model used (Inception V1, V2, V3, or V4).

GST_DEBUG=inception1:1 gst-launch-1.0 filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER net.src_model ! perf ! fakesink -v
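Since only the environment variables change between models, the pipeline can be regenerated with a small wrapper. This is a minimal sketch: `build_pipeline` is a hypothetical helper, not part of GstInference, and the layer names are the InceptionV1 values shown above.

```shell
# Hypothetical helper that assembles the benchmark command from the
# environment variables above; change the variables per model and re-run.
build_pipeline() {
  local element="$1"   # GstInference element name, e.g. inceptionv1
  echo "GST_DEBUG=${element}:1 gst-launch-1.0 filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue !" \
       "net.sink_model ${element} name=net model-location=$MODEL_LOCATION backend=tensorflow" \
       "backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER net.src_model ! perf ! fakesink -v"
}

# InceptionV1 values from the section above
VIDEO_FILE='video.mp4'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'

build_pipeline inceptionv1
```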

The Desktop PC had the following specifications:

  • Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
  • 8 GB RAM
  • Cedar [Radeon HD 5000/6000/7350/8350 Series]
  • Linux 4.15.0-54-generic x86_64 (Ubuntu 16.04)

The Jetson Xavier power modes used were 2 and 6 (for more information, see Supported Modes and Power Efficiency).

  • View current power mode:
$ sudo /usr/sbin/nvpmodel -q
  • Change current power mode:
$ sudo /usr/sbin/nvpmodel -m x

Where x is the power mode ID (e.g. 0, 1, 2, 3, 4, 5, 6).

Summary

Desktop PC, CPU library:

Model         Framerate (FPS)   CPU usage (%)
Inception V1  11.89             48
Inception V2  10.33             65
Inception V3  5.41              90
Inception V4  3.81              94

Jetson Xavier (15W):

              CPU library                GPU library
Model         Framerate (FPS)  CPU (%)   Framerate (FPS)  CPU (%)
Inception V1  8.24             86        52.3             43
Inception V2  6.58             88        39.6             42
Inception V3  2.54             92        17.8             25
Inception V4  1.22             94        9.4              20

Jetson Xavier (30W):

              CPU library                GPU library
Model         Framerate (FPS)  CPU (%)   Framerate (FPS)  CPU (%)
Inception V1  6.41             93        66.27            72
Inception V2  5.11             95        50.59            62
Inception V3  1.96             98        22.95            44
Inception V4  0.98             99        12.14            32
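From the 30W summary table above, the GPU library's speedup over the CPU library can be computed directly. The snippet below is illustrative, not part of GstInference; the values are copied from the table.

```shell
# Compute GPU-over-CPU speedup from the Jetson Xavier (30W) table above.
models=(InceptionV1 InceptionV2 InceptionV3 InceptionV4)
cpu_fps=(6.41 5.11 1.96 0.98)
gpu_fps=(66.27 50.59 22.95 12.14)

for i in "${!models[@]}"; do
  # awk handles the floating-point division
  speedup=$(awk -v g="${gpu_fps[i]}" -v c="${cpu_fps[i]}" 'BEGIN { printf "%.1f", g / c }')
  echo "${models[i]}: ${speedup}x"
done
```

The GPU backend is roughly an order of magnitude faster across all four models at this power mode.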

Framerate

(framerate comparison charts)

CPU Usage

(CPU usage comparison charts)

TensorFlow Lite Benchmarks

FPS measurement

(interactive FPS chart)

CPU usage measurement

(interactive CPU usage chart)

Test benchmark video

The following video was used to perform the benchmark tests.
To download it, right-click on the video, select 'Save video as', and save it to your computer.

Test benchmark video

ONNXRT Benchmarks

The Desktop PC had the following specifications:

  • Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
  • 12 GB RAM
  • Linux 4.15.0-106-generic x86_64 (Ubuntu 16.04)

The following GStreamer pipeline was used to obtain the results:

model_array=(inceptionv1 inceptionv2 inceptionv3 inceptionv4 tinyyolov2 tinyyolov3)
model_upper_array=(InceptionV1 InceptionV2 InceptionV3 InceptionV4 TinyYoloV2 TinyYoloV3)

gst-launch-1.0 \
filesrc location=$VIDEO_PATH num-buffers=600 ! decodebin ! videoconvert ! \
perf print-arm-load=true name=inputperf ! tee name=t t. ! videoscale ! queue ! net.sink_model t. ! queue ! net.sink_bypass \
${model_array[i]} backend=onnxrt name=net \
model-location="${MODELS_PATH}${model_upper_array[i]}_${INTERNAL_PATH}/graph_${model_array[i]}${EXTENSION}" \
net.src_bypass ! perf print-arm-load=true name=outputperf ! videoconvert ! fakesink sync=false
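The pipeline above indexes `model_array` and `model_upper_array` with `i`, so it is presumably executed once per model inside a loop. The loop itself is not shown in the original; this is a minimal sketch of how it might look (the `gst-launch-1.0` invocation is elided here since it appears in full above):

```shell
# Iterate over every model; i selects the matching entries from both arrays.
model_array=(inceptionv1 inceptionv2 inceptionv3 inceptionv4 tinyyolov2 tinyyolov3)
model_upper_array=(InceptionV1 InceptionV2 InceptionV3 InceptionV4 TinyYoloV2 TinyYoloV3)

for i in "${!model_array[@]}"; do
  echo "Benchmarking ${model_upper_array[i]} (element: ${model_array[i]})"
  # gst-launch-1.0 ... (full pipeline from above, using index i)
done
```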

FPS Measurements

(interactive FPS chart)

CPU Load Measurements

(interactive CPU load chart)
The same test benchmark video described above was used for these tests.


Previous: Example Applications/DispTec | Index | Next: Model Zoo