This page presents GStreamer example pipelines for the Smart Parking use case on the NXP i.MX8M Plus.
License Plate Detection with TinyYOLO version 3 pipeline
Prepare the image, model, and labels locations:
IMAGE_FILE=<your image>.jpg
MODEL_LOCATION=<your path>/car_plate_tinyYolo_model2.tflite
LABELS_LOCATION=<your path>/labels.txt
Using the CPU backend:
GST_DEBUG="tinyyolov3:LOG,*inf*:DEBUG" gst-launch-1.0 multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model tinyyolov3 name=net number-of-classes=1 model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS_LOCATION)"
Using the NPU backend:
GST_DEBUG="tinyyolov3:LOG,*inf*:DEBUG" gst-launch-1.0 multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model tinyyolov3 name=net number-of-classes=1 model-location=$MODEL_LOCATION backend=nnapi labels="$(cat $LABELS_LOCATION)"
The output of this pipeline should be similar to the following:
0:00:21.128664525 624 0xaaab0d856aa0 LOG tinyyolov3 gsttinyyolov3.c:292:gst_tinyyolov3_postprocess:<net> Postprocess Meta
0:00:21.128757025 624 0xaaab0d856aa0 LOG tinyyolov3 gsttinyyolov3.c:300:gst_tinyyolov3_postprocess:<net> Number of predictions: 1
0:00:21.128851651 624 0xaaab0d856aa0 LOG tinyyolov3 gstinferencedebug.c:38:gst_inference_print_predictions:
{
  "id" : 40,
  "enabled" : "True",
  "bbox" : {
    "x" : 0,
    "y" : 0,
    "width" : 416,
    "height" : 416
  },
  "classes" : [ ],
  "predictions" : [
    {
      "id" : 41,
      "enabled" : "True",
      "bbox" : {
        "x" : 93,
        "y" : 110,
        "width" : 242,
        "height" : 209
      },
      "classes" : [
        {
          "Id" : 40,
          "Class" : 0,
          "Label" : "license",
          "Probability" : "1.000000",
          "Classes" : 1
        }
      ],
      "predictions" : [ ]
    }
  ]
}
0:00:21.129048153 624 0xaaab0d878d30 INFO videoinference gstvideoinference.c:454:gst_video_inference_stop:<net> Stopping video inference
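To visually check the detection, the same pipeline can be extended with a bypass branch so the original frame is preserved and the bounding box is drawn on top of it. The following is only a minimal sketch of that variant: it assumes the GstInference inferenceoverlay element and a working display sink (autovideosink here) are available on your board, and it reuses the variables defined above.

# Sketch: inferenceoverlay and autovideosink are assumptions; swap them for the
# overlay/display elements available on your image if needed.
gst-launch-1.0 \
tinyyolov3 name=net number-of-classes=1 model-location=$MODEL_LOCATION backend=tflite labels="$(cat $LABELS_LOCATION)" \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! videorate ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! autovideosink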
License Plate Recognition with Rosetta model pipeline
The following license plate image will be used in this example:

IMAGE=<path to your image>.jpg
MODEL_LOCATION=<path to the Rosetta model>/crnn_dr.tflite

GST_DEBUG="*ros*:LOG,*inf*:DEBUG" gst-launch-1.0 \
multifilesrc location=$IMAGE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! \
videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
rosetta name=net model-location=$MODEL_LOCATION backend=tflite
NOTE: Do not use the NNAPI backend with this model, because the model is not supported with 8-bit or 16-bit quantization.
The result of this pipeline should be similar to the following:
0:00:30.457070712 593 0xaaaac0759aa0 LOG rosetta gstrosetta.c:252:gst_rosetta_postprocess:<net> Rosetta prediction is done
0:00:30.457089962 593 0xaaaac0759aa0 LOG rosetta gstrosetta.c:256:gst_rosetta_postprocess:<net> The phrase is manisa
Combining License Plate Detection and License Plate Recognition
Inference over images
IMAGE=Cars33.jpg

# TinyYOLO v3 model location
TY_MODEL_LOCATION=car_plate_tinyYolo_model2.tflite
LABELS=labels.txt

# Rosetta model location
RT_MODEL_LOCATION=crnn_dr.tflite

gst-launch-1.0 \
tinyyolov3 name=net model-location=$TY_MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
multifilesrc location=$IMAGE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
net.src_model ! inferencecrop aspect-ratio=3/1 ! videoconvert ! videoscale ! videorate ! queue ! net1.sink_model \
rosetta name=net1 model-location=$RT_MODEL_LOCATION backend=tflite
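If you also want to see which plate region the detector is handing to Rosetta, the bypass branch of the detection network can be rendered with an overlay while the model branch keeps feeding the recognition network. The sketch below is an assumed variant of the pipeline above, not a verified configuration; inferenceoverlay and autovideosink are assumptions that may need to be replaced by the overlay and display sink available on your image.

# Sketch: same graph as above plus a display branch on net.src_bypass.
# inferenceoverlay and autovideosink are assumptions.
gst-launch-1.0 \
tinyyolov3 name=net model-location=$TY_MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
multifilesrc location=$IMAGE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! autovideosink \
net.src_model ! inferencecrop aspect-ratio=3/1 ! videoconvert ! videoscale ! videorate ! queue ! net1.sink_model \
rosetta name=net1 model-location=$RT_MODEL_LOCATION backend=tflite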
Inference over videos
VIDEO=license_plate_video.mp4

# TinyYOLO v3 model location
TY_MODEL_LOCATION=car_plate_tinyYolo_model2.tflite
LABELS=labels.txt

# Rosetta model location
RT_MODEL_LOCATION=crnn_dr.tflite

gst-launch-1.0 \
tinyyolov3 name=net model-location=$TY_MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
filesrc location=$VIDEO ! decodebin ! videoconvert ! videoscale ! queue ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
net.src_model ! queue ! inferencecrop aspect-ratio=3/1 ! videoconvert ! videoscale ! videorate ! queue ! net1.sink_model \
rosetta name=net1 model-location=$RT_MODEL_LOCATION backend=tflite
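For a live Smart Parking deployment, the same graph can take its input from a camera instead of a recorded video. The following is only a sketch under assumptions: the /dev/video0 device and the 1280x720 capture caps are placeholders, so adjust them to the camera connected to your board.

# Sketch: camera-fed variant of the pipeline above.
# /dev/video0 and the 1280x720 caps are assumptions; adjust to your camera.
gst-launch-1.0 \
tinyyolov3 name=net model-location=$TY_MODEL_LOCATION backend=tflite labels="$(cat $LABELS)" \
v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=720 ! videoconvert ! videoscale ! queue ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
net.src_model ! queue ! inferencecrop aspect-ratio=3/1 ! videoconvert ! videoscale ! videorate ! queue ! net1.sink_model \
rosetta name=net1 model-location=$RT_MODEL_LOCATION backend=tflite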