GstInference Releases

This page summarizes all GstInference and R2Inference releases. Check GitHub for more info!

v0.1.0

Introduced features:

  • Video inference base class
  • Support for the following architectures:
    • GoogLeNet (Inception v4)
    • Tiny Yolo v2
  • Support for the following backends:
    • NCSDK
    • TensorFlow
  • Support for the following platforms:
    • Intel Movidius Neural Compute Stick (version 1)
    • NVIDIA Jetson AGX Xavier
    • x86 systems
  • Set backend from GStreamer
  • Set backend properties from GStreamer
  • Set model location from GStreamer (all three are illustrated in the sketch after this list)
  • Support play-stop-play transitions on GstInference elements
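
The backend, its properties, and the model location are regular GStreamer properties, so they can be set from a pipeline description or from application code. Below is a minimal C sketch assuming typical GstInference usage: the element name (tinyyolov2, the post-v0.2.0 name of TinyYolo), the property names (backend, model-location, backend::input-layer, backend::output-layer) and the model path are placeholders, so confirm the exact names for your release with gst-inspect-1.0.

  /* Minimal sketch: selecting the backend, a backend property and the model
   * location from GStreamer. Element and property names are assumptions
   * taken from typical GstInference pipelines. */
  #include <gst/gst.h>

  int
  main (int argc, char *argv[])
  {
    GstElement *pipeline;
    GError *error = NULL;

    gst_init (&argc, &argv);

    pipeline = gst_parse_launch (
        "videotestsrc is-live=true ! videoconvert ! videoscale ! net.sink_model "
        "tinyyolov2 name=net "
        "backend=tensorflow "                            /* select the backend       */
        "model-location=graph_tinyyolov2_tensorflow.pb " /* placeholder model file   */
        "backend::input-layer=input/Placeholder "        /* backend-specific options */
        "backend::output-layer=add_8 "
        "net.src_model ! fakesink", &error);
    if (error != NULL) {
      g_printerr ("Could not build pipeline: %s\n", error->message);
      g_clear_error (&error);
    }
    if (pipeline == NULL)
      return -1;

    /* Run the pipeline for a few seconds and shut it down. */
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_usleep (5 * G_USEC_PER_SEC);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);

    return 0;
  }

The pipeline description uses the same syntax as gst-launch-1.0, so it can also be tested directly from the command line.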

Known issues:

  • NCSDK does not support multiple calls to the inference engine from the same thread. This causes transitions when using the NCSDK backend to fail after the second play.
  • Changing the backend on a stopped pipeline will fail with a segmentation fault.
  • When using the TensorFlow backend compiled with GPU support, the pipeline will sometimes fail to start.
  • The plugins throw a segmentation fault when using nvarguscamerasrc as the input on Xavier.

v0.2.0

Introduced features:

  • Stability improved.
  • Inference on InceptionV4 improved.
  • Metadata structure created to save the inference result.
  • Metadata is attached to the model buffer and, if available, to the bypass buffer.
  • Signal created to use the inference result at the application level.
  • The inference result is not displayed by the inference element unless the log level is set to 6.
  • Pad control improved; the bypass pad is now optional.
  • GoogleNet renamed to InceptionV4.
  • TinyYolo renamed to TinyYoloV2.
  • Jetson TX2 added to the supported platforms.
  • Examples section simplified to use smaller pipelines.
  • Overlay plugins added:
    • GstClassificationOverlay for InceptionV4 metadata.
    • GstDetectionOverlay for TinyYoloV2 metadata (see the pipeline sketch after this list).
  • Example applications were added.
  • The segmentation fault with nvarguscamerasrc is fixed.
  • TensorFlow is able to run on CPU and GPU.
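
With the inference result now attached as metadata to the model and bypass buffers, a typical application tees the input into a scaled model branch and a full-resolution bypass branch, and lets the new overlay element draw the result. The C sketch below assumes the element names tinyyolov2 and detectionoverlay, the pad names sink_model, sink_bypass and src_bypass, and a placeholder model path; the exact name and signature of the new application-level signal can be checked with gst-inspect-1.0 on the inference element, and the raw result is only logged when the debug level is raised to 6.

  /* Sketch: bypass branch plus the new detection overlay. Element, pad and
   * property names are assumptions based on typical GstInference pipelines. */
  #include <gst/gst.h>

  int
  main (int argc, char *argv[])
  {
    GstElement *pipeline;
    GError *error = NULL;

    gst_init (&argc, &argv);

    pipeline = gst_parse_launch (
        "videotestsrc is-live=true ! videoconvert ! tee name=t "
        /* Model branch: scaled down to the network input size. */
        "t. ! queue ! videoscale ! net.sink_model "
        /* Bypass branch: full-resolution frames that receive the metadata. */
        "t. ! queue ! net.sink_bypass "
        "tinyyolov2 name=net backend=tensorflow "
        "model-location=graph_tinyyolov2_tensorflow.pb " /* placeholder */
        "backend::input-layer=input/Placeholder "
        "backend::output-layer=add_8 "
        /* Draw the detections carried by the buffer metadata. */
        "net.src_bypass ! videoconvert ! detectionoverlay ! fakesink", &error);
    if (error != NULL) {
      g_printerr ("Could not build pipeline: %s\n", error->message);
      g_clear_error (&error);
    }
    if (pipeline == NULL)
      return -1;

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_usleep (5 * G_USEC_PER_SEC);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);

    return 0;
  }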

Known issues:

  • NCSDK does not support multiple calls to the inference engine from the same thread. This causes transitions when using the NCSDK backend to fail after the second play.
  • Changing the backend on a stopped pipeline will fail with a segmentation fault.

v0.3.0

Introduced features:

  • Fixed bug with gtk-doc compilation on GStreamer 1.8
  • Support for the following architectures:
    • Inception v3
    • Inception v2
    • Inception v1

Known issues:

  • NCSDK does not support multiple calls to the inference engine from the same thread. This causes transitions when using the NCSDK backend to fail after the second play.
  • Changing the backend on a stopped pipeline will fail with a segmentation fault.

v0.4.0

Introduced features:

  • Support in the Inception and TinyYOLO elements for the following image formats:
    • RGBx
    • RGBA
    • xRGB
    • ARGB
    • BGR
    • BGRx
    • BGRA
    • xBGR
    • ABGR
  • Support for FaceNet architecture
  • Added Embedding overlay plugin for face detection visualization (see the pipeline sketch below)
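
The new raw formats are negotiated like any other caps, so a conversion to RGB is no longer required in front of the Inception and TinyYOLO elements. A short C sketch follows, feeding BGRx into inceptionv4; the element name, layer names and model path are placeholders in the spirit of the earlier sketches, and for face visualization facenetv1 plus the new embedding overlay element can be dropped into the same topology (assuming those are the element names exposed by the plugin).

  /* Sketch: pushing one of the newly supported formats (BGRx) into an
   * Inception element without an extra RGB conversion. Names are placeholders. */
  #include <gst/gst.h>

  int
  main (int argc, char *argv[])
  {
    GstElement *pipeline;
    GError *error = NULL;

    gst_init (&argc, &argv);

    pipeline = gst_parse_launch (
        "videotestsrc is-live=true ! videoconvert ! "
        "video/x-raw,format=BGRx ! "                      /* newly supported format  */
        "videoscale ! net.sink_model "
        "inceptionv4 name=net backend=tensorflow "
        "model-location=graph_inceptionv4_tensorflow.pb " /* placeholder             */
        "backend::input-layer=input "                     /* placeholder layer names */
        "backend::output-layer=output "
        "net.src_model ! fakesink", &error);
    if (error != NULL) {
      g_printerr ("Could not build pipeline: %s\n", error->message);
      g_clear_error (&error);
    }
    if (pipeline == NULL)
      return -1;

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_usleep (5 * G_USEC_PER_SEC);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);

    return 0;
  }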

Known issues:

  • NCSDK does not support multiple calls to the inference engine from the same thread. This causes transitions when using the NCSDK backend to fail after the second play.
  • Changing the backend on a stopped pipeline will fail with a segmentation fault.

v0.5.0

Introduced features:

  • Support for the following architectures:
    • ResNet50V1
    • Tiny Yolo V3
    • FaceNetV1
    • MobileNetV2
  • Support for the following backends:
    • NCSDK
    • TensorFlow
  • Support for the following platforms:
    • Intel Movidius Neural Compute Stick (version 1)
    • NVIDIA Jetson AGX Xavier
    • x86 systems
    • NVIDIA TX2
    • i.MX8

Known issues:

  • NCSDK does not support multiple calls to the inference engine from the same thread. This causes the NCSDK backend to fail after the second start.
  • Changing the backend on a stopped pipeline will fail with a segmentation fault.

v0.6.0

Introduced features:

  • Improved ResNet50V1 inference result
  • Pre-processing and post-processing code factored out into shared files.
  • Debug information factored out into a shared file.
  • Tests added.
  • Improved internal caps negotiation on the bypass pad.
  • LGPL license added.

Supported platforms:

  • Intel Movidius Neural Compute Stick (version 1)
  • NVIDIA Jetson AGX Xavier
  • NVIDIA TX2
  • NVIDIA Jetson Nano
  • x86 systems
  • i.MX8

Known issues:

  • NCSDK does not support multiple calls to the inference engine from the same thread. This causes the NCSDK backend to fail after the second start.

v0.7.0

Introduced features:

  • Pkg-config support
  • License updated to LGPL
  • New hierarchical inference metadata approach
  • TensorFlow Lite (TFLite) backend support
  • New elements using the new meta (see the sketch after this list):
    • detectioncrop
    • inferencefilter
    • inferencedebug
    • inferenceoverlay
  • Bug fixes
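
The new elements are meant to be chained after an inference element, all operating on the new hierarchical meta. The C sketch below wires inferencefilter, inferencedebug and inferenceoverlay onto the model pad and then tunes the filter from application code; the filter-class property name is an assumption (check gst-inspect-1.0 inferencefilter), and the model path and layer names are placeholders as in the earlier sketches.

  /* Sketch: chaining the new meta-aware elements and adjusting inferencefilter
   * at runtime. The "filter-class" property name is a hypothetical example. */
  #include <gst/gst.h>

  int
  main (int argc, char *argv[])
  {
    GstElement *pipeline, *filter;
    GError *error = NULL;

    gst_init (&argc, &argv);

    pipeline = gst_parse_launch (
        "videotestsrc is-live=true ! videoconvert ! videoscale ! net.sink_model "
        "tinyyolov2 name=net backend=tensorflow "
        "model-location=graph_tinyyolov2_tensorflow.pb "  /* placeholder */
        "backend::input-layer=input/Placeholder "
        "backend::output-layer=add_8 "
        "net.src_model "
        "! inferencefilter name=filter "  /* drop predictions by class      */
        "! inferencedebug "               /* print the hierarchical meta    */
        "! inferenceoverlay "             /* draw the remaining predictions */
        "! videoconvert ! fakesink", &error);
    if (error != NULL) {
      g_printerr ("Could not build pipeline: %s\n", error->message);
      g_clear_error (&error);
    }
    if (pipeline == NULL)
      return -1;

    /* Hypothetical property: keep only predictions of class id 0. */
    filter = gst_bin_get_by_name (GST_BIN (pipeline), "filter");
    if (filter != NULL) {
      g_object_set (filter, "filter-class", 0, NULL);
      gst_object_unref (filter);
    }

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_usleep (5 * G_USEC_PER_SEC);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);

    return 0;
  }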

v0.7.1

Introduced features:

  • Reverted a hotfix in dev-0.6 that modified the number of predictions.

v0.8.0

Introduced features:

  • Added a new inferenceutils plugin that absorbs inferencecrop, inferencefilter and inferencedebug.
  • The root prediction is now shown by inferenceoverlay.
  • Fixed the prediction size test.
  • Fixed the tinyyolov3 post-processing to use the new meta.
  • Bug fixes

v0.9.0

Introduced features:

  • Fixed Yolo probabilities in the inference meta
  • Support for OpenCV 4
  • Support for doubles in backend properties
  • New inferencebin helper element (see the sketch below)
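
inferencebin is meant to wrap the usual capture/scale/inference/overlay wiring behind a single element. Rather than assuming its property names here, the C sketch below creates the element and prints the properties it actually exposes; gst-inspect-1.0 inferencebin reports the same information.

  /* Sketch: discovering the configuration options of the new inferencebin
   * helper element at runtime instead of hard-coding property names. */
  #include <gst/gst.h>

  int
  main (int argc, char *argv[])
  {
    GstElement *bin;
    GParamSpec **props;
    guint nprops, i;

    gst_init (&argc, &argv);

    bin = gst_element_factory_make ("inferencebin", NULL);
    if (bin == NULL) {
      g_printerr ("inferencebin not found; is GstInference v0.9.0 or newer installed?\n");
      return -1;
    }

    /* List every property the helper exposes, together with its description. */
    props = g_object_class_list_properties (G_OBJECT_GET_CLASS (bin), &nprops);
    for (i = 0; i < nprops; i++) {
      const gchar *blurb = g_param_spec_get_blurb (props[i]);
      g_print ("%s: %s\n", g_param_spec_get_name (props[i]),
          blurb != NULL ? blurb : "");
    }

    g_free (props);
    gst_object_unref (bin);

    return 0;
  }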

