GstInference/Example pipelines/NANO

* Output
[[File:Inceptionv2 barber.png|center|thumb|inceptionv2_barberchair]]
== InceptionV1 ==
===RTSP Camera stream===
* Get the graph used in this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a V4L2-compatible camera
'''Server Pipeline''', which runs on the host PC:
<syntaxhighlight lang=bash>
gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! videoconvert ! video/x-raw, format=I420, width=640, height=480, framerate=30/1 ! queue ! x265enc option-string="keyint=30:min-keyint=30:repeat-headers=1" ! video/x-h265, width=640, height=480, mapping=/stream1 ! queue ! rtspsink service=5000
</syntaxhighlight>
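Before adding inference, it can help to confirm that the stream actually reaches the client. The following is a minimal sanity-check sketch, assuming the stock GStreamer software decoder (avdec_h265) and autovideosink are available on the board; it only verifies transport and decoding:
<syntaxhighlight lang=bash>
# Sanity check only: receive, depayload, decode and display the RTSP stream.
# <server_ip_address> is a placeholder for the host PC address.
gst-launch-1.0 rtspsrc location="rtsp://<server_ip_address>:5000/stream1" ! queue ! rtph265depay ! h265parse ! avdec_h265 ! videoconvert ! autovideosink
</syntaxhighlight>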
'''Client Pipeline''', which runs on the NANO board:
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb' # frozen graph downloaded from the link above
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
GST_DEBUG=inceptionv1:6 gst-launch-1.0 -e rtspsrc location="rtsp://<server_ip_address>:5000/stream1" ! queue ! rtph265depay ! queue ! h265parse ! queue ! omxh265dec ! queue ! nvvidconv ! queue ! net.sink_model inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>
* Output
<syntaxhighlight lang=bash>
</syntaxhighlight>
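The client pipeline above only runs the model; nothing is rendered. Below is a possible variant that also draws the classification result on screen. It is a sketch, assuming the classificationoverlay element shipped with GstInference and a hypothetical $LABELS file with one class name per line; a tee feeds one branch to the model and one to the overlay:
<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt' # hypothetical labels file, one class name per line
# Force raw caps after nvvidconv so both tee branches carry system-memory video.
gst-launch-1.0 -e rtspsrc location="rtsp://<server_ip_address>:5000/stream1" ! queue ! rtph265depay ! h265parse ! omxh265dec ! nvvidconv ! 'video/x-raw' ! queue ! tee name=t \
t. ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv1 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=2 thickness=2 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>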


==TinyYoloV2==