GStreamer Support on Rubik Pi 3

From RidgeRun Developer Wiki

RUBIK Pi 3 is a strong GStreamer platform because the board combines camera interfaces, GPU-backed transforms, AI-oriented plugins documented in the vendor Ubuntu stack, and enough local display and network I/O to build preview, record, inference, and streaming pipelines on one board. For most developers, GStreamer is the fastest way to turn RUBIK Pi 3 from “a board that boots” into “a product that moves video correctly”.

This page is part of Rubik Pi 3. It focuses on camera capture, transforms, display, recording, AI-assisted multimedia, and RidgeRun GStreamer integration on the QCS6490-based board.

Why GStreamer matters on RUBIK Pi 3

GStreamer matters because most edge AI products are not just AI products. They are camera products, display products, record / stream products, or robotics products that happen to include inference. GStreamer provides a graph-based way to compose those stages cleanly.

The current vendor Ubuntu documentation references the following components in the software stack:

  • `qtiqmmfsrc` for camera capture,
  • `qtivtransform` for accelerated transform operations such as color conversion, crop, and resize,
  • `qtimltflite` for TensorFlow Lite model execution on the NPU in Qdemo-style workflows,
  • and standard GStreamer sinks, encoders, muxers, and utility elements.

Typical pipeline patterns

A practical way to think about RUBIK Pi 3 pipelines is to group them by intent:

Common pipeline intents

  Intent           Typical stages
  ------           --------------
  Preview          camera → colorspace / memory handling → display
  Snapshot         camera → conversion → JPEG encode → file sink
  Record           camera → transform / overlay → encoder → mux → file sink
  AI preview       camera → resize / preprocess → inference → overlay → display
  Network stream   camera → transform / encode → RTP / RTSP / WebRTC sink

Camera capture

Validated capture example

The current official camera documentation provides a straightforward image-capture example using `qtiqmmfsrc` and `jpegenc`.

gst-launch-1.0 -e qtiqmmfsrc camera=0 ! \
  video/x-raw,format=NV12,width=1280,height=720,framerate=30/1 ! \
  queue ! jpegenc ! queue ! \
  multifilesink location=/opt/img0_%d.jpg max-files=5

This is a good first validation because it proves several things at once:

  • the sensor is detected,
  • the CSI link is working,
  • the software stack can produce frames,
  • and the pipeline is stable enough to write files.
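After running the capture pipeline, a quick check confirms that frames were really written. This sketch assumes the `/opt/img0_%d.jpg` output path from the example above:

```shell
# List the JPEGs written by multifilesink (path taken from the example above).
ls -lh /opt/img0_*.jpg

# Zero-byte files suggest caps negotiated but no buffers flowed;
# revisit the sensor and CSI connection in that case.
find /opt -maxdepth 1 -name 'img0_*.jpg' -size 0
```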

Preview pipeline pattern

A simple live-preview pipeline usually follows this structure:

qtiqmmfsrc ! capsfilter ! queue ! sink

Template:Fact check required Validate and publish one exact preview pipeline on the current Ubuntu image before adding it as a canonical copy-paste command. Display sinks, memory caps, and zero-copy behavior may differ across image releases.
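In the meantime, a non-canonical sketch can serve as a starting point. It reuses the GBM memory caps and `waylandsink` settings that appear in the GstQtOverlay examples later on this page, and assumes the Wayland environment variables from the troubleshooting section are exported:

```shell
# Hedged preview sketch - validate on your exact image release first.
# Assumes waylandsink and GBM memory caps behave as on the vendor Ubuntu image.
export XDG_RUNTIME_DIR=/dev/socket/weston
export WAYLAND_DISPLAY=wayland-1

gst-launch-1.0 -e qtiqmmfsrc camera=0 ! \
  'video/x-raw(memory:GBM),format=NV12,width=1280,height=720,framerate=30/1' ! \
  queue ! waylandsink fullscreen=true async=true
```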

Transform, resize, and zero-copy considerations

In AI and multimedia products, the expensive part is often not inference itself but everything around it: resize, color conversion, crop, frame copies, synchronization, and rendering. Current Qdemo documentation highlights `qtivtransform` as the component used for accelerated transform work such as color conversion, cropping, and resizing.

That is important because well-designed transform stages can reduce CPU load, improve frame stability, and make the downstream AI runtime easier to feed.

Template:Add benchmark data Add measured CPU, memory, and FPS comparisons between software-only transforms and accelerated transforms on a fixed image release and camera resolution.
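As an illustration of where `qtivtransform` sits in a graph, here is a hedged downscale sketch. The output caps values are placeholders; confirm the formats and memory types `qtivtransform` actually supports on your image with `gst-inspect-1.0 qtivtransform`:

```shell
# Hedged sketch: scale a 1280x720 NV12 capture down to 640x360 for preview.
# The GBM caps and qtivtransform behavior should be confirmed on your image.
gst-launch-1.0 -e qtiqmmfsrc camera=0 ! \
  'video/x-raw(memory:GBM),format=NV12,width=1280,height=720,framerate=30/1' ! \
  qtivtransform ! 'video/x-raw(memory:GBM),format=NV12,width=640,height=360' ! \
  queue ! waylandsink
```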

AI-assisted multimedia pipelines

The Ubuntu Qdemo documentation describes a software path where capture, transform, inference, and rendering are orchestrated through GStreamer-oriented components. This is the right mental model for RUBIK Pi 3 AI development: the AI model is one stage in a multimedia graph, not a separate world.

Template:Insert image (images/rubik pi 3 ai pipeline.png - generated camera-to-inference pipeline diagram)

Example conceptual flow:

Camera (qtiqmmfsrc)
    ↓
Transform / resize (qtivtransform)
    ↓
Inference runtime
    ↓
Overlay / metadata rendering
    ↓
Display, recording, or streaming output

See also Rubik Pi 3/AI and Computer Vision.

Recording and streaming

Recording

A robust recording pipeline typically adds the following stages beyond preview:

camera → transform → encoder → parser → muxer → filesink

Whether you use software or hardware-backed encode paths depends on the image and plugin availability. The right choice should be validated against your specific image version, resolution, latency target, and storage medium.
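As one hedged example, a software-encode recording sketch built from standard upstream elements. `x264enc` is a CPU encoder used here only because it is broadly available; substitute the hardware encoder your image exposes once you have verified it with `gst-inspect-1.0`:

```shell
# Hedged recording sketch: camera -> H.264 software encode -> MP4.
# x264enc is CPU-based; replace it with your image's hardware encoder
# after validating it. -e makes Ctrl+C finalize the MP4 cleanly.
gst-launch-1.0 -e qtiqmmfsrc camera=0 ! \
  'video/x-raw,format=NV12,width=1280,height=720,framerate=30/1' ! \
  queue ! videoconvert ! x264enc tune=zerolatency bitrate=4000 ! \
  h264parse ! mp4mux ! filesink location=/opt/record.mp4
```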

Streaming

For products that need browser, server, or device-to-device delivery, RidgeRun commonly builds on GStreamer to expose RTSP, RTP, and WebRTC paths. Those same design patterns translate well to QCS6490-class platforms.
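A minimal hedged RTP-over-UDP sketch using upstream elements. The receiver address 192.168.1.100:5000 is a placeholder, and the encoder choice should be validated as in the recording section:

```shell
# Hedged RTP/UDP streaming sketch. 192.168.1.100:5000 is a placeholder
# receiver; pair it with an rtph264depay-based receiving pipeline or an
# SDP file on the client side.
gst-launch-1.0 -e qtiqmmfsrc camera=0 ! \
  'video/x-raw,format=NV12,width=1280,height=720,framerate=30/1' ! \
  queue ! videoconvert ! x264enc tune=zerolatency bitrate=4000 ! \
  h264parse ! rtph264pay config-interval=1 pt=96 ! \
  udpsink host=192.168.1.100 port=5000
```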

RidgeRun GStreamer technologies relevant to RUBIK Pi 3

GstInference

GstInference is RidgeRun's framework for embedding deep learning inference directly into GStreamer pipelines. Even when a project starts from vendor sample applications, GstInference is useful as an architectural reference for building maintainable inference-enabled media graphs.

GstWebRTC

Introduction to RidgeRun's GstWebRTC and related streaming documentation are relevant when the product needs live browser delivery, operator interfaces, or remote preview.

Camera-driver and sensor integration

GStreamer pipelines are only as reliable as the underlying capture path. If the project depends on a non-reference sensor or custom module, combine this page with RidgeRun Linux Camera Drivers and Mira220 Camera Driver for Rubik Pi 3.

Debugging GStreamer on RUBIK Pi 3

When a pipeline fails, debug in this order:

  1. verify board power and image version,
  2. verify the camera module and cable orientation,
  3. validate capture with the simplest possible pipeline,
  4. confirm caps and frame size assumptions,
  5. add transforms one stage at a time,
  6. then add inference, encode, or network sinks.

Good debug habits include:

  • logging `/etc/os-release`,
  • recording exact pipeline strings,
  • capturing kernel / journal messages,
  • and testing with one known-good camera module before introducing custom hardware.
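These habits pair well with GStreamer's built-in debug controls. The category names and log levels below are illustrative; any element name can be used as a category:

```shell
# Log the image version alongside the failing pipeline string.
cat /etc/os-release

# Raise the log level for specific categories while testing.
GST_DEBUG=qtiqmmfsrc:5,waylandsink:4 gst-launch-1.0 -e qtiqmmfsrc camera=0 ! \
  'video/x-raw,format=NV12,width=1280,height=720,framerate=30/1' ! fakesink

# Dump negotiated pipeline graphs as .dot files for offline inspection
# (Graphviz is needed to render them).
export GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dot
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"
```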

For general help, see GStreamer.

Common issues

Could not initialise Wayland output

The pipeline fails with:

0:00:00.066303490  1824   0x55868e9e00 WARN             waylandsink gstwaylandsink.c:385:gst_wayland_sink_find_display:<waylandsink0> warning: Could not initialise Wayland output
0:00:00.066341250  1824   0x55868e9e00 WARN             waylandsink gstwaylandsink.c:385:gst_wayland_sink_find_display:<waylandsink0> warning: Failed to create GstWlDisplay: 'Failed to connect to the wayland display '(default)''

Solution: Export the display environment variables:

export XDG_RUNTIME_DIR=/dev/socket/weston  
export WAYLAND_DISPLAY=wayland-1

Failed to Open Camera

The pipeline fails with:

Setting pipeline to PAUSED ...
0:00:00.083557969  1868   0x558a260e50 ERROR             qtiqmmfsrc qmmf_source_context.cc:1458:gst_qmmf_context_open: QMMF Recorder StartCamera Failed!
0:00:00.083686823  1868   0x558a260e50 WARN              qtiqmmfsrc qmmf_source.c:1217:qmmfsrc_change_state:<qmmfsrc0> error: Failed to Open Camera!
ERROR: from element /GstPipeline:pipeline0/GstQmmfSrc:qmmfsrc0: Failed to Open Camera!
Additional debug info:
/usr/src/debug/qcom-gstreamer1.0-plugins-oss-qmmfsrc/1.0/qmmf_source.c(1217): qmmfsrc_change_state (): /GstPipeline:pipeline0/GstQmmfSrc:qmmfsrc0
ERROR: pipeline doesn't want to preroll.
Failed to set pipeline to PAUSED.
Setting pipeline to NULL ...
Freeing pipeline ...

Solution: There is no single root cause, but these questions might help resolve the problem:

  1. Is the camera connected correctly?

Refer to the image below.

  2. Does the file /var/cache/camera/camxoverridesettings.txt have the following contents?
root@rubikpi:~# cat /var/cache/camera/camxoverridesettings.txt
multiCameraLogicalXMLFile=kodiak_dc.xml
enableNCSService=FALSE

If the file does not exist, or has different contents, you can overwrite it as follows:

echo multiCameraLogicalXMLFile=kodiak_dc.xml > /var/cache/camera/camxoverridesettings.txt
echo enableNCSService=FALSE >> /var/cache/camera/camxoverridesettings.txt
systemctl restart cam-server.service

Then try again.

Suggested development workflow

1. Validate camera with qtiqmmfsrc + JPEG capture
2. Add live preview
3. Add transform / resize
4. Add inference or encode path
5. Add metadata overlay
6. Add network streaming or ROS 2 bridge
7. Measure CPU, FPS, latency, and memory

This staged approach avoids the common trap of debugging six variables at once.
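For the measurement step, `fpsdisplaysink` from upstream GStreamer is a convenient starting point; it reports measured FPS without requiring a working display:

```shell
# Hedged FPS measurement sketch: fpsdisplaysink prints measured FPS via
# verbose output while forwarding frames to fakesink (no display needed).
gst-launch-1.0 -v -e qtiqmmfsrc camera=0 ! \
  'video/x-raw,format=NV12,width=1280,height=720,framerate=30/1' ! \
  queue ! fpsdisplaysink video-sink=fakesink text-overlay=false sync=false

# CPU and memory can be sampled from a second terminal, for example:
top -b -n 1 | grep gst-launch
```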

Frequently asked questions

Can RUBIK Pi 3 run GStreamer pipelines?
Yes. Current official Ubuntu material includes camera and AI workflows built around GStreamer-oriented components, and the platform is well-suited to preview, record, inference, and streaming graphs.
What source element is used for cameras?
Current official camera documentation uses `qtiqmmfsrc` for CSI camera capture on the board.
How do I test the camera quickly with GStreamer?
Use a simple `qtiqmmfsrc` pipeline that captures a few JPEG frames to disk before attempting more complex preview or inference graphs.
Can I combine GStreamer with AI on RUBIK Pi 3?
Yes. The vendor Ubuntu stack and RidgeRun technologies both support the idea of inference as one stage inside a larger multimedia pipeline.
Where does RidgeRun help most with GStreamer on this platform?
RidgeRun typically helps with stable capture, zero-copy design, AI integration, overlays, low-latency streaming, and product-grade pipeline architecture.

Related pages



GstQtOverlay

export XDG_RUNTIME_DIR=/dev/socket/weston  
export WAYLAND_DISPLAY=wayland-1  
export QT_QPA_PLATFORM=wayland

Setting `QT_QPA_PLATFORM=wayland-egl` also works.

gst-launch-1.0 videotestsrc is-live=1 ! "video/x-raw,width=1280,height=720,format=RGBA" ! qtoverlay qml=main.qml ! queue ! videoconvert ! glimagesink

or

gst-launch-1.0 -e qtiqmmfsrc camera=0 antibanding=0 ! 'video/x-raw(memory:GBM),format=NV12,width=1280,height=720,framerate=30/1' ! qtivtransform ! qtoverlay qml=main_drone.qml ! glimagesink


or

gst-launch-1.0 -e qtiqmmfsrc camera=0 antibanding=0 ! 'video/x-raw(memory:GBM),format=NV12,width=1280,height=720,framerate=30/1' ! qtivtransform ! qtoverlay qml=main_drone.qml ! qtivtransform ! waylandsink fullscreen=true async=true