GstInference Quick Starting Guide

Previous: Getting started/Building the plugin Index Next: Supported architectures





Introduction

This wiki page is intended as a quick and easy guide to getting a simple GstInference example running on either a computer running Ubuntu 18.04 or an NVIDIA Jetson TX2.

Setting up the development environment

In this section, you will find instructions on how to get your development environment ready to start using GstInference in your projects.

R2Inference

R2Inference is an open-source project by RidgeRun that provides a C/C++ abstraction layer over a variety of machine learning frameworks. GstInference requires the R2Inference library as its interface to the machine learning backends.

R2Inference supports several backends; in this guide, TensorFlow will be used. The development environment therefore needs the TensorFlow C API, which can be installed by following these installation instructions.

When building the R2Inference library by following this installation guide, please make sure to enable the desired backend with its corresponding configuration option.
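For reference, a TensorFlow-enabled build could look like the sketch below. The option name used here is an assumption and may differ between releases, so verify it against the installation guide.

# Sketch of a Meson-based R2Inference build with the TensorFlow backend enabled.
# The -Denable-tensorflow option name is an assumption; check the installation
# guide for the exact option of your R2Inference release.
meson build -Denable-tensorflow=true
ninja -C build
sudo ninja -C build install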

GstInference

In order to get any project running with GstInference, you must first have its plugin installed on your development system. Installing the plugin is a simple task that can be accomplished by following its short installation guide.
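After installing, you can check that GStreamer finds the new elements; for example, inspecting the tinyyolov3 element used later in this guide should print its properties and pads.

# Verify that the GstInference elements are visible to GStreamer.
gst-inspect-1.0 tinyyolov3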

Examples

Using a GStreamer Pipeline

The simplest way to run a GstInference application is through a GStreamer pipeline. Let's say we want to get a detection network running, so we choose TinyYoloV3 as our network. You can either use your own model for this example or pick one from our model zoo, where you will find trained models for several neural networks.

PC Pipeline

YOLO_MODEL_LOCATION='<path to the yolo model>'
YOLO_INPUT_LAYER='<name of the model input layer>'
YOLO_OUTPUT_LAYER='<name of the model output layer>'
CAMERA='<camera device>'

gst-launch-1.0 \
tinyyolov3 name=ynet model-location=$YOLO_MODEL_LOCATION backend=tensorflow backend::input-layer=$YOLO_INPUT_LAYER  backend::output-layer=$YOLO_OUTPUT_LAYER \
v4l2src device=$CAMERA ! 'video/x-raw, width=640, height=(int)480, framerate=(fraction)30/1' ! videoconvert ! tee name=t \
t. ! queue leaky=2 ! videoscale add-borders=true ! queue leaky=2 ! ynet.sink_model \
t. ! queue leaky=2 ! ynet.sink_bypass \
ynet.src_bypass ! queue leaky=2 ! inferenceoverlay font-scale=0 thickness=2 ! ximagesink sync=false async=false

YOLO_MODEL_LOCATION This is the path to your model file, including the file name. Since we are using TensorFlow, the model must be a .pb file. For example, if your model is stored in /home/usr/models, you would set YOLO_MODEL_LOCATION=/home/usr/models/your_model.pb.

YOLO_INPUT_LAYER This is the name of the network's input layer, which you must know for your model. If you use a model from our model zoo, the specification chart for each model lists it under Input-node.

YOLO_OUTPUT_LAYER As with YOLO_INPUT_LAYER, this value is specific to your network and you must know it. For models from our model zoo, it is listed under Output-node.

CAMERA With this pipeline, the neural network processes a live video feed from a camera. Specify the device node of the camera to use, for example CAMERA=/dev/video0. A fully filled-in set of assignments is sketched below.
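As a reference, the variables could be filled in as follows. The layer names here are illustrative placeholders only and must be replaced with the actual names from your model or from the model zoo chart.

# Illustrative values only: the layer names below are hypothetical and must
# match your own model or the model zoo specification chart.
YOLO_MODEL_LOCATION=/home/usr/models/your_model.pb
YOLO_INPUT_LAYER=input/Placeholder
YOLO_OUTPUT_LAYER=output_boxes
CAMERA=/dev/video0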

NVIDIA Jetson TX2 Pipeline

As with the PC pipeline, we need to provide the model location and its input and output layers. The main difference here is the use of the onboard camera through nvarguscamerasrc as the input. Notice that, depending on your network's configuration, you might need to resize the camera input while keeping its aspect ratio unchanged. For example, if your TinyYoloV3 model was trained on 416x416 images, you can capture the camera at 416x312 (a 4:3 aspect ratio) and use the nvcompositor element to pad it with black borders to the desired size using GPU acceleration. That is why the pipeline below sets sink_0::ypos=52: centering the 312-pixel-tall frame in a 416-pixel-tall canvas leaves (416 - 312) / 2 = 52 pixels of border above and below it.

YOLO_MODEL_LOCATION='<path to the yolo model>'
YOLO_INPUT_LAYER='<name of the model input layer>'
YOLO_OUTPUT_LAYER='<name of the model output layer>'

gst-launch-1.0 \
tinyyolov3 name=ynet model-location=$YOLO_MODEL_LOCATION backend=tensorflow backend::input-layer=$YOLO_INPUT_LAYER  backend::output-layer=$YOLO_OUTPUT_LAYER \
nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)416, height=(int)312, format=(string)NV12, framerate=(fraction)30/1' ! \
nvcom. nvcompositor name=nvcom sink_0::ypos=52 ! 'video/x-raw(memory:NVMM), width=(int)416, height=(int)416' ! nvvidconv ! tee name=t \
t. ! queue leaky=2 ! ynet.sink_model \
t. ! queue leaky=2 ! ynet.sink_bypass \
ynet.src_bypass ! queue leaky=2 ! inferenceoverlay font-scale=0 thickness=2 ! ximagesink sync=false async=false

Programming a C/C++ Application

Another way to use GstInference in your project is by writing a C or C++ application that starts a GStreamer pipeline. This method is very useful, and highly recommended, when custom processing needs to run on the results produced by the neural network's inference. As in the previous example, you can use your own model or find a trained model in our model zoo.

The source code that you get from the GstInference repository also includes some examples. The examples directory contains two different options for you to play with: you can set up either a detection or a classification network.

Each of the two examples consists of two source files: one is named after the example and sets up the environment and starts the pipeline; the other is named customlogic.c and is meant to contain all the processing that needs to be performed on the network's output. Since the two examples follow the same recipe, only the detection example is explained here.

gstdetection.c

In this file, you can customize different aspects of the application. First of all, you can set up your own pipeline for the program to execute by modifying the gst_detection_create_pipeline function. In the same file, you can also change the program arguments or the processing that is executed on the inference results. In the example program, the post-processing function is handle_prediction, which is defined in customlogic.c and is called from gst_detection_process_inference in gstdetection.c.
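As a rough idea of what gst_detection_create_pipeline does, a pipeline similar to the PC example above can be assembled with gst_parse_launch. The sketch below is an assumption made for illustration, not the code shipped in the repository; the function signature and pipeline string are simplified.

/* Sketch only: builds a detection pipeline with gst_parse_launch.
 * The signature and pipeline string are simplified assumptions; the real
 * function lives in the detection example of the GstInference repository. */
#include <gst/gst.h>

static GstElement *
gst_detection_create_pipeline (const gchar * model_location)
{
  GError *error = NULL;
  GstElement *pipeline;
  gchar *desc;

  /* Backend input/output layer properties are omitted for brevity. */
  desc = g_strdup_printf (
      "tinyyolov3 name=ynet model-location=%s backend=tensorflow "
      "v4l2src ! videoconvert ! tee name=t "
      "t. ! queue ! videoscale ! ynet.sink_model "
      "t. ! queue ! ynet.sink_bypass "
      "ynet.src_bypass ! queue ! inferenceoverlay ! ximagesink",
      model_location);

  pipeline = gst_parse_launch (desc, &error);
  g_free (desc);

  if (error != NULL) {
    g_printerr ("Failed to create pipeline: %s\n", error->message);
    g_error_free (error);
    return NULL;
  }
  return pipeline;
}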

customlogic.c

In this file, you will find the function called from gst_detection_process_inference in gstdetection.c to process the information resulting from the inference performed by the neural network. In the detection example, it is a simple function that prints information such as the coordinates, size, and probability of the bounding boxes around the detected objects.
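A custom post-processing step could follow the shape of the sketch below. The PredictionBox structure and the handle_prediction signature shown here are hypothetical stand-ins for illustration; the actual definitions are in the customlogic header shipped with the examples.

/* Sketch only: the structure and signature below are hypothetical stand-ins
 * for the definitions found in the GstInference detection example. */
#include <stdio.h>

typedef struct {
  int x;              /* top-left corner of the box, in pixels */
  int y;
  int width;          /* box size, in pixels */
  int height;
  int category;       /* class index predicted by the network */
  double probability; /* detection confidence */
} PredictionBox;

void
handle_prediction (PredictionBox * boxes, int num_boxes)
{
  int i;

  /* Print one line per detected object; replace this loop with your own logic. */
  for (i = 0; i < num_boxes; i++) {
    printf ("Box %d: class=%d prob=%.2f at (%d, %d) size %dx%d\n",
        i, boxes[i].category, boxes[i].probability,
        boxes[i].x, boxes[i].y, boxes[i].width, boxes[i].height);
  }
}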


Previous: Getting started/Building the plugin Index Next: Supported architectures