GstInference inferencedebug
Our GstInferenceMeta is hierarchical: each prediction may contain child predictions. The inferencedebug element was created to print the whole metadata tree at a given point in the pipeline. By enabling debug output and placing inferencedebug at the desired location, you can check the classes, labels, and enabled status of every prediction.
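Before building a pipeline, you can verify that the element is available and review its properties with gst-inspect-1.0 (assuming GstInference is installed on your system):

gst-inspect-1.0 inferencedebug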
GstInference inferencedebug example
In this example, we place the inferencedebug element right after the inferencefilter to check which classes are being enabled.
GST_DEBUG="2,*inferencedebug*:6" gst-launch-1.0 \
v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" \
backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! inferencefilter filter-class=8 ! inferencedebug ! fakesink
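The pipeline above assumes the environment variables are already exported. A hypothetical setup for a TensorFlow TinyYOLOv2 model might look like the following; all paths and layer names are placeholders that depend on your specific graph:

# Hypothetical example values; adjust to your own model files.
export MODEL_LOCATION=/path/to/graph_tinyyolov2_tensorflow.pb
export LABELS=/path/to/labels.txt
export INPUT_LAYER=input/Placeholder  # assumption: input layer name of your graph
export OUTPUT_LAYER=add_8             # assumption: output layer name of your graph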
- Output
0:00:03.255231109 11277 0x55f5ce5cfde0 DEBUG inferencedebug gstinferencedebug.c:131:gst_inference_debug_transform_ip:<inferencedebug0> transform_ip
0:00:03.255268289 11277 0x55f5ce5cfde0 DEBUG inferencedebug gstinferencedebug.c:120:gst_inference_debug_print_predictions:<inferencedebug0> Prediction Tree:
{
  id : 169,
  enabled : False,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
  ],
  predictions : [
    {
      id : 170,
      enabled : False,
      bbox : {
        x : 101
        y : 96
        width : 274
        height : 346
      },
      classes : [
        {
          Id : 81
          Class : 14
          Label : person
          Probability : 12.842868
          Classes : 20
        },
      ],
      predictions : [
      ]
    },
  ]
}
Note that both predictions are reported with enabled : False: the detected person belongs to class 14, which does not match the filter-class=8 configured on the inferencefilter.