GstInference/Example pipelines/TX2

* Get the graph used on this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
* The LABELS and EMBEDDINGS files are located in <PATH_TO_GST_INFERENCE_ROOT_DIR>/tests/examples/embedding/embeddings.
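Before running the pipelines below, it can help to confirm that the labels and embeddings files pair up line for line, since the overlay matches the N-th label with the N-th embedding. The following is only a sketch using stand-in files in a temporary directory; EMBEDDING_DIR is a name introduced here for illustration and would in practice point at the embeddings directory above.

```shell
# Sketch with stand-in data; EMBEDDING_DIR is a hypothetical variable that
# in real use would point to .../tests/examples/embedding/embeddings.
EMBEDDING_DIR=$(mktemp -d)
printf 'person_a\nperson_b\n' > "$EMBEDDING_DIR/labels.txt"
printf '0.1,0.2,0.3\n0.4,0.5,0.6\n' > "$EMBEDDING_DIR/embeddings.txt"

# Count lines in each file; the two counts should match.
LABELS_N=$(wc -l < "$EMBEDDING_DIR/labels.txt" | tr -d ' ')
EMBEDDINGS_N=$(wc -l < "$EMBEDDING_DIR/embeddings.txt" | tr -d ' ')

if [ "$LABELS_N" -eq "$EMBEDDINGS_N" ]; then
    echo "OK: $LABELS_N labels, $EMBEDDINGS_N embeddings"
else
    echo "Mismatch: $LABELS_N labels vs $EMBEDDINGS_N embeddings" >&2
    exit 1
fi
```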
===NVIDIA Libargus API===
* Pipeline
<syntaxhighlight lang=bash>
INPUT_LAYER='input'
OUTPUT_LAYER='output'
LABELS='<PATH_TO_GST_INFERENCE_ROOT_DIR>/tests/examples/embedding/embeddings/labels.txt'
EMBEDDINGS='<PATH_TO_GST_INFERENCE_ROOT_DIR>/tests/examples/embedding/embeddings/embeddings.txt'
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM),width=(int)1280,height=(int)720' ! nvvidconv ! 'video/x-raw,format=BGRx,width=(int)1280,height=(int)720' ! videoconvert ! tee name=t \
t. ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! embeddingoverlay labels="$(cat $LABELS)" embeddings="$(cat $EMBEDDINGS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>


===V4L2===
* Pipeline
<syntaxhighlight lang=bash>
INPUT_LAYER='input'
OUTPUT_LAYER='output'
LABELS='<PATH_TO_GST_INFERENCE_ROOT_DIR>/tests/examples/embedding/embeddings/labels.txt'
EMBEDDINGS='<PATH_TO_GST_INFERENCE_ROOT_DIR>/tests/examples/embedding/embeddings/embeddings.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \
t. ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! embeddingoverlay labels="$(cat $LABELS)" embeddings="$(cat $EMBEDDINGS)" font-scale=4 thickness=4 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
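In both pipelines, the labels and embeddings properties receive the file contents through command substitution; the double quotes around $(cat $LABELS) are what keep any whitespace in the file from being split into separate gst-launch-1.0 tokens. A small sketch of that quoting behavior, using a stand-in file:

```shell
# Stand-in for the real labels file; the content is chosen only to show
# why the quoting in labels="$(cat $LABELS)" matters.
LABELS_FILE=$(mktemp)
printf 'first_label second_label' > "$LABELS_FILE"

# With the double quotes, the whole file content arrives as one value,
# spaces included, exactly as the element property expects.
LABELS_VALUE="$(cat "$LABELS_FILE")"
echo "labels=$LABELS_VALUE"
```

The same reasoning applies to the $(cat $EMBEDDINGS) substitution for the embeddings property.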

