
GstInference/Example pipelines/NANO

{{DISPLAYTITLE:GstInference example pipelines for Jetson NANO|noerror}}


__TOC__
== Tensorflow ==


=== InceptionV4 ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link].
* You will need an image file showing one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes].
* Use the following pipelines as examples for different scenarios.


====Image file====


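A hedged sketch of a still-image classification pipeline with the TensorFlow backend is shown below; the model filename and the input/output layer names are assumptions for the RidgeRun InceptionV4 graph, so adjust them to match your download:

<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'  # assumed filename from the download
INPUT_LAYER='input'                               # assumed layer names for this graph
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

# Loop the still image so the network keeps producing predictions;
# GST_DEBUG prints each prediction to the console.
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! \
jpegparse ! nvjpegdec ! 'video/x-raw' ! videoconvert ! videoscale ! queue ! \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER ! \
fakesink silent=false
</syntaxhighlight>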


====Video file====


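For a video file the still-image elements are replaced by a demuxer/decoder. A hedged sketch, where the model path and layer names are assumptions for the RidgeRun InceptionV4 graph:

<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'  # assumed filename from the download
INPUT_LAYER='input'                               # assumed layer names for this graph
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

# decodebin picks a decoder for the container/codec automatically;
# GST_DEBUG prints the predictions for every decoded frame.
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! \
videoconvert ! videoscale ! queue ! \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER ! \
fakesink silent=false
</syntaxhighlight>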


====Camera stream====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
</syntaxhighlight>
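Starting from the <code>CAMERA</code> variable above, a hedged sketch of a live V4L2 capture pipeline (model path and layer names are again assumptions; for a Libargus sensor, <code>nvarguscamerasrc</code> can replace <code>v4l2src</code>):

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'  # assumed filename from the download
INPUT_LAYER='input'                               # assumed layer names for this graph
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

# Capture from the V4L2 device and classify each frame.
GST_DEBUG=inceptionv4:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! 'video/x-raw' ! \
videoconvert ! videoscale ! queue ! \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER ! \
fakesink silent=false
</syntaxhighlight>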


====Visualization with classification overlay====
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
</syntaxhighlight>
[[File:Inceptionv2 barber.png|center|thumb|inceptionv2_barberchair]]
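To draw the predicted label on screen, GstInference provides the <code>classificationoverlay</code> element and bypass pads on the inference element. A hedged sketch (pad names, the labels file, and layer names are assumptions based on the GstInference TensorFlow examples):

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'  # assumed filename from the download
LABELS='imagenet_labels.txt'                      # assumed labels file, one class per line
INPUT_LAYER='input'                               # assumed layer names for this graph
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

# tee feeds one branch to the network (sink_model) and one untouched
# branch (sink_bypass) that the overlay draws the label onto.
gst-launch-1.0 \
v4l2src device=$CAMERA ! 'video/x-raw' ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! \
videoconvert ! xvimagesink sync=false
</syntaxhighlight>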


=== InceptionV1 ===


====RTSP Camera stream====


* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link].
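A hedged sketch of classifying a network stream: <code>uridecodebin</code> handles the RTSP session and decoding. The URI is a placeholder and the model path and layer names are assumptions for the RidgeRun InceptionV1 graph:

<syntaxhighlight lang=bash>
RTSP_URI='rtsp://<camera-address>/<stream>'       # placeholder, use your camera's URI
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'  # assumed filename from the download
INPUT_LAYER='input'                               # assumed layer names for this graph
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'

# Decode the RTSP stream and print a prediction per frame.
GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
uridecodebin uri=$RTSP_URI ! queue ! \
videoconvert ! videoscale ! queue ! \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER ! \
fakesink silent=false
</syntaxhighlight>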


===TinyYoloV2===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link].
* You will need an image file containing one of the TinyYOLO classes.


====Image file====
<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
</syntaxhighlight>
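Continuing from the <code>IMAGE_FILE</code> variable above, a hedged sketch of a single-image detection pipeline (the model path and the input/output layer names are assumptions for the RidgeRun TinyYoloV2 graph):

<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'  # assumed filename from the download
INPUT_LAYER='input/Placeholder'                  # assumed layer names for this graph
OUTPUT_LAYER='add_8'

# Loop the still image; GST_DEBUG prints the detected boxes and classes.
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! \
jpegparse ! nvjpegdec ! 'video/x-raw' ! videoconvert ! videoscale ! queue ! \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER ! \
fakesink silent=false
</syntaxhighlight>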


====Video file====
<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
</syntaxhighlight>
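Continuing from the <code>VIDEO_FILE</code> variable above, a hedged sketch of running TinyYoloV2 on a video file (model path and layer names are assumptions, as in the other examples):

<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'  # assumed filename from the download
INPUT_LAYER='input/Placeholder'                  # assumed layer names for this graph
OUTPUT_LAYER='add_8'

# decodebin selects the demuxer/decoder for the file automatically.
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! \
videoconvert ! videoscale ! queue ! \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER ! \
fakesink silent=false
</syntaxhighlight>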


====Camera stream====
* Get the graph used in this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link].
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
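A hedged sketch of live detection from a V4L2 camera; for a Libargus sensor, <code>nvarguscamerasrc</code> can replace <code>v4l2src</code>. The device path, model path, and layer names are assumptions:

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'                             # assumed V4L2 device node
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'  # assumed filename from the download
INPUT_LAYER='input/Placeholder'                  # assumed layer names for this graph
OUTPUT_LAYER='add_8'

# Capture live frames and print detections to the console.
GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! 'video/x-raw' ! \
videoconvert ! videoscale ! queue ! \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER ! \
fakesink silent=false
</syntaxhighlight>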


====Visualization with detection overlay====
<syntaxhighlight lang=bash>
CAMERA='/dev/video1'
</syntaxhighlight>
[[File:TinyYolo barber chair label.png|center|thumb|Barber chair detected by TinyYoloV2]]
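To draw the bounding boxes on screen, the <code>detectionoverlay</code> element can be attached to the bypass branch. A hedged sketch starting from the <code>CAMERA</code> variable above (pad names, labels file, and layer names are assumptions based on the GstInference TensorFlow examples):

<syntaxhighlight lang=bash>
CAMERA='/dev/video1'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'  # assumed filename from the download
LABELS='tinyyolov2_labels.txt'                   # assumed labels file, one class per line
INPUT_LAYER='input/Placeholder'                  # assumed layer names for this graph
OUTPUT_LAYER='add_8'

# One tee branch goes through the network, the other carries the raw
# frames that detectionoverlay annotates with boxes and labels.
gst-launch-1.0 \
v4l2src device=$CAMERA ! 'video/x-raw' ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectionoverlay labels="$(cat $LABELS)" font-scale=1 thickness=2 ! \
videoconvert ! xvimagesink sync=false
</syntaxhighlight>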


===FaceNet===
====Visualization with detection overlay====
* Get the graph used in this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link].
* You will need a camera compatible with the NVIDIA Libargus API or V4L2.
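A hedged sketch of a FaceNet camera pipeline with an on-screen overlay, following the same bypass-pad pattern as the other overlay examples. The <code>facenetv1</code> element name matches GstInference's other model elements, but the device path, model path, layer names, and the use of <code>detectionoverlay</code> here are assumptions:

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'                            # assumed V4L2 device node
MODEL_LOCATION='graph_facenetv1_tensorflow.pb'  # assumed filename from the download
INPUT_LAYER='input'                             # assumed layer names for this graph
OUTPUT_LAYER='output'

# The bypass branch keeps the raw frames so the overlay can draw
# the detected faces while the model branch feeds the network.
gst-launch-1.0 \
v4l2src device=$CAMERA ! 'video/x-raw' ! videoconvert ! tee name=t \
t. ! queue ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow \
  backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectionoverlay font-scale=1 thickness=2 ! \
videoconvert ! xvimagesink sync=false
</syntaxhighlight>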