* Get the graph used in this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link]
* You will need a camera compatible with the Nvidia Libargus API or V4L2.
* The LABELS and EMBEDDINGS files are in $PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings.
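The pipelines below reference a few shell variables that must already be defined. A minimal setup sketch follows; the checkout path, sensor index, and device node used here are assumptions, so adjust them for your system:

```shell
# Assumed values -- change these to match your setup.
PATH_TO_GST_INFERENCE_ROOT_DIR="$HOME/gst-inference"  # GstInference checkout (assumption)
SENSOR_ID=0                                           # nvcamerasrc sensor index (assumption)
CAMERA=/dev/video0                                    # V4L2 device node (assumption)

# Derived paths used by the examples below.
LABELS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt"
EMBEDDINGS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt"

echo "LABELS=$LABELS"
echo "EMBEDDINGS=$EMBEDDINGS"
```

Note the double quotes: with single quotes the `$PATH_TO_GST_INFERENCE_ROOT_DIR` prefix would be stored literally instead of being expanded.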
===Nvidia Camera ===
INPUT_LAYER='input'
OUTPUT_LAYER='output'
LABELS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt"
EMBEDDINGS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt"
gst-launch-1.0 \
nvcamerasrc sensor-id=$SENSOR_ID ! 'video/x-raw(memory:NVMM),width=(int)1280,height=(int)720' ! nvvidconv ! 'video/x-raw,format=BGRx,width=(int)1280,height=(int)720' ! videoconvert ! tee name=t \
===V4L2 Camera ===
INPUT_LAYER='input'
OUTPUT_LAYER='output'
LABELS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/labels.txt"
EMBEDDINGS="$PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings/embeddings.txt"
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! tee name=t \