GstInference/Example pipelines/NANO

 
<noinclude>
{{GstInference/Head|previous=Example pipelines/PC|next=Example pipelines/TX2|keywords=GstInference}}
</noinclude>
<!-- If you want a custom title for the page, un-comment and edit this line:
{{DISPLAYTITLE:GstInference - <descriptive page name>|noerror}}
-->


= Tensorflow =
* Use the following pipelines as examples for different scenarios.


===Inceptionv4 inference on image file using TensorFlow===


<syntaxhighlight lang=bash>
</syntaxhighlight>
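The pipeline body is elided in this revision view. As a hedged sketch only: the element and pad names below (`inceptionv4`, `sink_model`, `sink_bypass`, `src_bypass`, `backend::input-layer`) follow GstInference conventions, but the model file, layer names, and decoder elements are placeholders you must adapt to your own setup.

```shell
# Sketch only -- model path and layer names are placeholders, not the
# wiki's original values. Verify elements with gst-inspect-1.0.
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

# Build the pipeline description first so it can be inspected before launching.
PIPELINE="multifilesrc location=$IMAGE_FILE loop=true ! jpegparse ! jpegdec ! \
videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! autovideosink sync=false"

echo "$PIPELINE"
# Launch with: gst-launch-1.0 $PIPELINE
```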


===Inceptionv4 inference on video file using TensorFlow===


<syntaxhighlight lang=bash>
</syntaxhighlight>
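The pipeline body is also elided here. A hedged sketch, using a generic `decodebin` for portability (on a Jetson Nano you would likely substitute the hardware decoder); the model path and layer names are placeholders:

```shell
# Sketch only -- adapt file names, layer names, and decoder to your setup.
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

PIPELINE="filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! autovideosink sync=false"

echo "$PIPELINE"
# Launch with: gst-launch-1.0 $PIPELINE
```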


===Inceptionv4 inference on camera stream using TensorFlow===
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
</syntaxhighlight>


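Only the `CAMERA` variable survives in this revision view. A hedged sketch of a V4L2 camera pipeline (element and pad names assumed from GstInference conventions; model path and layer names are placeholders):

```shell
# Sketch only -- a live camera pipeline built around v4l2src.
# Model path and layer names are placeholders, not the wiki's originals.
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

PIPELINE="v4l2src device=$CAMERA ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! autovideosink sync=false"

echo "$PIPELINE"
# Launch with: gst-launch-1.0 $PIPELINE
```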
==TinyYOLOV2==
===TinyYolov2 inference on image file using TensorFlow===
* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need an image file containing one of the TinyYOLO classes
<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
</syntaxhighlight>
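The rest of the pipeline is truncated in this view. A hedged sketch of a TinyYOLOv2 detection pipeline: the `tinyyolov2` element name follows GstInference conventions, and `detectionoverlay` is assumed as the overlay element; the model path and layer names are placeholders to verify against your graph.

```shell
# Sketch only -- model path, layer names, and overlay element are
# assumptions; check them with gst-inspect-1.0 and your .pb graph.
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'

PIPELINE="multifilesrc location=$IMAGE_FILE loop=true ! jpegparse ! jpegdec ! \
videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectionoverlay ! videoconvert ! autovideosink sync=false"

echo "$PIPELINE"
# Launch with: gst-launch-1.0 $PIPELINE
```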


===TinyYolov2 inference on video file using TensorFlow===
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
</syntaxhighlight>
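As above, the pipeline body is truncated. A hedged sketch for a video file, using a generic `decodebin` (substitute the Nano's hardware decoder as appropriate); model path, layer names, and the `detectionoverlay` element are assumptions:

```shell
# Sketch only -- adapt file names, layer names, and decoder to your setup.
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'

PIPELINE="filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectionoverlay ! videoconvert ! autovideosink sync=false"

echo "$PIPELINE"
# Launch with: gst-launch-1.0 $PIPELINE
```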


===TinyYolov2 inference on camera stream using TensorFlow===
* Get the graph used on this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link]
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
* LABELS and EMBEDDINGS files are in $PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings.
 
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
</syntaxhighlight>
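Only the `CAMERA` variable survives here, and the bullets above point at the FaceNet graph. A hedged sketch of a camera pipeline around the `facenetv1` GstInference element; the model path and layer names are placeholders, and the LABELS/EMBEDDINGS files mentioned above are consumed by downstream display elements not shown in this sketch:

```shell
# Sketch only -- facenetv1 is the GstInference element name; the model
# path and layer names below are placeholder assumptions.
CAMERA='/dev/video0'
MODEL_LOCATION='graph_facenetv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='output'

PIPELINE="v4l2src device=$CAMERA ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! videoconvert ! autovideosink sync=false"

echo "$PIPELINE"
# Launch with: gst-launch-1.0 $PIPELINE
```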