{{DISPLAYTITLE:GstInference - <descriptive page name>|noerror}}
-->
{{DISPLAYTITLE:GstInference TX2 GStreamer pipelines|noerror}}
__TOC__
== TensorFlow ==
=== Inceptionv4 inference on image file using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need an image file containing one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
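A minimal sketch of such a pipeline is shown below. The <code>inceptionv4</code> element, its pads and its <code>backend</code> properties come from the GstInference plug-in; the graph file name, layer names and decode chain are illustrative assumptions — verify with <code>gst-inspect-1.0 inceptionv4</code>.
<syntaxhighlight lang=bash>
# Assumed file and layer names -- adjust to your download
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! \
jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! fakesink
</syntaxhighlight>
Predictions can be inspected by raising the element's debug level, e.g. <code>GST_DEBUG=inceptionv4:6</code> (assuming the element registers a debug category of the same name).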
=== Inceptionv4 inference on video file using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
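For a video file the model branch stays the same; only the source changes. A sketch using <code>decodebin</code> (file name and layer names are assumptions):
<syntaxhighlight lang=bash>
VIDEO_FILE='video.mp4'                          # assumed sample clip
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! \
queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! fakesink
</syntaxhighlight>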
=== Inceptionv4 inference on camera stream using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need a camera compatible with the Nvidia Libargus API or V4L2.
==== Nvidia Camera ====
* Pipeline
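A sketch using the Jetson camera source (<code>nvarguscamerasrc</code> on recent JetPack releases; older releases ship <code>nvcamerasrc</code> instead — caps and layer names are assumptions):
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

gst-launch-1.0 \
nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
nvvidconv ! 'video/x-raw' ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! fakesink
</syntaxhighlight>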
==== V4L2 ====
* Pipeline
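A sketch using a V4L2 capture device (device node, caps and layer names are assumptions):
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'

gst-launch-1.0 \
v4l2src device=/dev/video1 ! 'video/x-raw,width=640,height=480' ! \
videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! fakesink
</syntaxhighlight>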
=== Inceptionv4 visualization with classification overlay using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv4-for-tensorflow this link]
* You will need a camera compatible with the Nvidia Libargus API or V4L2.
==== Nvidia Camera ====
* Pipeline
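To draw the results on screen, the stream is teed into a model branch and a bypass branch, and the annotated bypass output is rendered with <code>classificationoverlay</code>. A sketch (the labels file, the <code>labels</code> property usage and the sink choice are assumptions):
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'      # hypothetical labels file

gst-launch-1.0 \
nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
nvvidconv ! 'video/x-raw' ! videoconvert ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" ! \
videoconvert ! xvimagesink sync=false
</syntaxhighlight>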
==== V4L2 ====
* Pipeline
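The same overlay arrangement with a V4L2 source, sketched under the same assumptions (device node, labels file and sink):
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_inceptionv4_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV4/Logits/Predictions'
LABELS='imagenet_labels.txt'      # hypothetical labels file

gst-launch-1.0 \
v4l2src device=/dev/video1 ! 'video/x-raw,width=640,height=480' ! \
videoconvert ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
inceptionv4 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" ! \
videoconvert ! xvimagesink sync=false
</syntaxhighlight>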
[[File:Tx2-snap-classification.png|500px|center|thumb|Example classification overlay output]]
=== TinyYolov2 inference on image file using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need an image file containing one of the TinyYOLO classes
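The structure mirrors the Inceptionv4 image example, swapping in the <code>tinyyolov2</code> element. The layer names below are the ones commonly used for the TensorFlow TinyYolov2 graph, but treat them as assumptions:
<syntaxhighlight lang=bash>
IMAGE_FILE='dog.jpg'                         # assumed sample image
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'

gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! \
jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! fakesink
</syntaxhighlight>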
=== TinyYolov2 inference on video file using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a video file containing one of the TinyYOLO classes
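A sketch with a <code>decodebin</code> file source feeding the detection model (file and layer names assumed):
<syntaxhighlight lang=bash>
VIDEO_FILE='video.mp4'                       # assumed sample clip
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'

gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! \
queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! fakesink
</syntaxhighlight>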
=== TinyYolov2 inference on camera stream using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a camera compatible with the Nvidia Libargus API or V4L2.
==== Nvidia Camera ====
* Pipeline
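A sketch with the Jetson camera source (<code>nvarguscamerasrc</code> assumed; caps and layer names are assumptions):
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'

gst-launch-1.0 \
nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
nvvidconv ! 'video/x-raw' ! videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! fakesink
</syntaxhighlight>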
==== V4L2 ====
* Pipeline
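A sketch with a V4L2 source (device node, caps and layer names assumed):
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'

gst-launch-1.0 \
v4l2src device=/dev/video1 ! 'video/x-raw,width=640,height=480' ! \
videoconvert ! videoscale ! queue ! net.sink_model \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! fakesink
</syntaxhighlight>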
=== TinyYolov2 visualization with detection overlay using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a camera compatible with the Nvidia Libargus API or V4L2.
==== Nvidia Camera ====
* Pipeline
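For on-screen bounding boxes the stream is teed into a model branch and a bypass branch, with <code>detectionoverlay</code> drawing on the bypass output. A sketch (caps, layer names and sink choice are assumptions):
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'

gst-launch-1.0 \
nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
nvvidconv ! 'video/x-raw' ! videoconvert ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectionoverlay ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>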
==== V4L2 ====
* Pipeline
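The same detection overlay arrangement with a V4L2 source, sketched under the same assumptions:
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'

gst-launch-1.0 \
v4l2src device=/dev/video1 ! 'video/x-raw,width=640,height=480' ! \
videoconvert ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectionoverlay ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>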
[[File:Tx2-snap-detection.png|500px|center|thumb|Example detection overlay output]]
=== FaceNet visualization with embedding overlay using TensorFlow ===
* Get the graph used in this example from [https://shop.ridgerun.com/products/facenetv1-for-tensorflow this link]
* You will need a camera compatible with the Nvidia Libargus API or V4L2.
* The LABELS and EMBEDDINGS files are located in $PATH_TO_GST_INFERENCE_ROOT_DIR/tests/examples/embedding/embeddings.
==== Nvidia Camera ====
* Pipeline
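A sketch following the same tee pattern with the <code>facenetv1</code> element. The overlay element name, its <code>labels</code>/<code>embeddings</code> properties and the layer names are assumptions — confirm the available elements with <code>gst-inspect-1.0 | grep inference</code>:
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_facenetv1_tensorflow.pb'
INPUT_LAYER='input'                          # assumed FaceNet layer names
OUTPUT_LAYER='output'
LABELS='labels.txt'                          # from tests/examples/embedding/embeddings
EMBEDDINGS='embeddings.txt'

gst-launch-1.0 \
nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
nvvidconv ! 'video/x-raw' ! videoconvert ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! embeddingoverlay labels="$(cat $LABELS)" \
embeddings="$(cat $EMBEDDINGS)" ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>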
==== V4L2 ====
* Pipeline
<syntaxhighlight lang=bash>
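# Sketch of the V4L2 variant; device node, overlay element name, its
# labels/embeddings properties and the layer names are assumptions.
MODEL_LOCATION='graph_facenetv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='output'
LABELS='labels.txt'                          # from tests/examples/embedding/embeddings
EMBEDDINGS='embeddings.txt'

gst-launch-1.0 \
v4l2src device=/dev/video1 ! 'video/x-raw,width=640,height=480' ! \
videoconvert ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
facenetv1 name=net model-location=$MODEL_LOCATION backend=tensorflow \
backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! embeddingoverlay labels="$(cat $LABELS)" \
embeddings="$(cat $EMBEDDINGS)" ! videoconvert ! xvimagesink sync=false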