ROS 2 support on Rubik Pi 3
The Rubik Pi 3 documentation from RidgeRun is currently under development.
RUBIK Pi 3 is a practical ROS 2 edge-compute node because it combines Ubuntu support, camera interfaces, AI capability, and vendor robotics software references on one QCS6490-based board. In real projects, that means the board can act as a perception computer, a smart sensor head, a teleoperation endpoint, or a compact robot controller that still has enough multimedia capability for modern vision workloads.
This page is part of the Rubik Pi 3 documentation. It focuses on ROS 2 architectures, QIR-oriented robotics software, and how RidgeRun typically integrates cameras, GStreamer, streaming, and robotics logic on the platform.
Why use ROS 2 on RUBIK Pi 3?
The direct answer to “Can I use ROS 2 on RUBIK Pi 3?” is yes: the Ubuntu-oriented robotics documentation references a QIR SDK flow with reference ROS packages and sample applications, which makes the board relevant for robotics workloads rather than only standalone AI demos.
ROS 2 is especially valuable on this board when you need to coordinate:
- camera capture,
- AI inference,
- actuator or motion logic,
- remote monitoring,
- and system-level message passing.
QIR SDK and robotics samples
The current Ubuntu robotics documentation describes the Qualcomm Intelligent Robotics (QIR) SDK as providing essential components for robotics development on Qualcomm platforms under Ubuntu. The documented highlights include:
- reference ROS packages,
- end-to-end scenario samples,
- and QRB ROS transport for zero-copy message transport on Qualcomm robotics platforms.
That is useful even on RUBIK Pi 3 because it gives developers a starting point for structure, packaging, and integration patterns.
Current setup direction
The documented setup flow for Ubuntu references:
sudo add-apt-repository ppa:ubuntu-qcom-iot/qcom-ppa
sudo add-apt-repository ppa:ubuntu-qcom-iot/qirp
sudo apt install qirp-sdk
source /usr/share/qirp-setup.sh
Note: the QIR package names and repository details above may change between Ubuntu image releases; verify them against the exact release you are using.
Typical ROS 2 architecture on RUBIK Pi 3
A common system architecture looks like this:
CSI / USB camera node
↓
Image transport / preprocessing
↓
Perception node (detection / segmentation / tracking)
↓
Decision or control node
↓
Robot interface / actuator node
↓
Optional remote UI, logging, or streaming
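The data contracts between these stages can be sketched as plain Python functions, using dicts in place of ROS 2 message types (a real system would use message packages such as sensor_msgs, vision_msgs, and geometry_msgs). All names and values below are illustrative, not part of any SDK.

```python
# Stand-ins for the perception -> decision -> actuator stages above.
# Dicts model the message payloads; none of this is a real ROS 2 API.

def perception(frame_id: int) -> dict:
    """Detection node stand-in: emits labeled boxes for one frame."""
    return {"frame_id": frame_id,
            "detections": [{"label": "person", "confidence": 0.91,
                            "bbox": (120, 80, 64, 128)}]}

def decide(msg: dict) -> dict:
    """Decision node stand-in: stop if a confident person detection exists."""
    person_seen = any(d["label"] == "person" and d["confidence"] > 0.5
                      for d in msg["detections"])
    return {"frame_id": msg["frame_id"],
            "command": "stop" if person_seen else "continue"}

def actuate(cmd: dict) -> str:
    """Robot interface stand-in: turn the command into an actuator action."""
    return f"frame {cmd['frame_id']}: motors -> {cmd['command']}"

if __name__ == "__main__":
    print(actuate(decide(perception(1))))  # -> frame 1: motors -> stop
```

Keeping each stage a small, single-purpose transform like this maps naturally onto one ROS 2 node per stage, with the dict fields becoming message fields.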
This structure works well on RUBIK Pi 3 because the same board can host perception and local video handling while still exporting telemetry, state, and control interfaces through ROS 2.
Cameras and ROS 2
ROS 2 systems on RUBIK Pi 3 usually begin with camera integration. There are two broad ways to think about it:
Native ROS 2 first
Use a ROS 2 camera node or bridge and treat the camera as a ROS-native publisher. This is often preferred when the robotics stack is dominant and multimedia complexity is limited.
GStreamer first
Use GStreamer for capture, transforms, overlays, or encode / stream work, then bridge selected outputs or metadata into ROS 2. This is often better when video quality, latency, or streaming requirements are more demanding.
For many products, the best design is hybrid: GStreamer handles pixels efficiently, while ROS 2 handles messages, metadata, system coordination, and robot control.
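A hybrid capture path of this kind might look like the pipeline description below: GStreamer owns the pixels, and an appsink hands only the newest buffer to a ROS 2 bridge process. Element names, device path, and caps are illustrative; the Qualcomm platform may provide its own camera source element instead of v4l2src.

```shell
# Hypothetical GStreamer-first capture sketch (not a validated
# Rubik Pi 3 pipeline). appsink drop=true max-buffers=1 ensures the
# ROS 2 bridge always reads the latest frame instead of a backlog.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,width=1280,height=720,framerate=30/1 ! \
  videoconvert ! \
  appsink name=ros_bridge emit-signals=true max-buffers=1 drop=true
```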
See Rubik Pi 3/GStreamer.
Minimal ROS 2 workspace flow
A general Ubuntu-native ROS 2 workflow on the board usually looks like this:
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws
colcon build
source install/setup.bash
From there, the exact package set depends on the application: camera drivers, robot interfaces, teleoperation, perception, mapping, or custom application nodes.
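A minimal Python node for this workflow might look like the sketch below. It assumes a working rclpy installation from the ROS 2 setup above; the topic name and payload format are illustrative. The rclpy import is guarded so the pure formatting logic can also be exercised off-target.

```python
# Minimal rclpy publisher sketch. Topic and payload names are
# illustrative, not defined by any Rubik Pi 3 SDK.
try:
    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String
except ImportError:      # allows the pure helper below to run off-robot
    rclpy = None
    Node = object

def format_status(frame_id: int, detections: int) -> str:
    """Pure helper: serialize one status line for the topic payload."""
    return f"frame={frame_id} detections={detections}"

class StatusPublisher(Node):
    """Publishes a status string at 1 Hz."""
    def __init__(self):
        super().__init__("status_publisher")
        self.pub = self.create_publisher(String, "rubikpi/status", 10)
        self.count = 0
        self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = format_status(self.count, 0)
        self.pub.publish(msg)
        self.count += 1

def main():
    rclpy.init()
    node = StatusPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```

In a real package this file would live under a colcon workspace with an entry point declared in setup.py, so it can be launched with `ros2 run`.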
Example use patterns
Smart sensor node
RUBIK Pi 3 can act as a standalone smart sensor that publishes detections, class labels, tracking outputs, or compressed images to a larger robot or edge system.
Perception front-end
The board can sit near the cameras and preprocess or infer locally, reducing the bandwidth and latency burden on a higher-level controller.
Teleoperation endpoint
When combined with streaming and operator interfaces, the board can function as the robot-side multimedia and control endpoint for remote systems. RidgeRun Immersive Teleoperation is a relevant architectural reference.
Evaluation robot brain
For early-stage robotics teams, the board can be the main compute node while sensors, controllers, and actuators are still being finalized.
ROS 2 and multimedia coexistence
The main design challenge is not whether ROS 2 runs, but how it coexists with high-rate video. On embedded boards, performance problems usually come from unnecessary frame copies, poorly chosen transport formats, or too much logic living in the wrong layer.
A clean rule of thumb is:
- keep image movement efficient,
- keep ROS 2 messages meaningful,
- and avoid turning ROS 2 into a replacement for a good multimedia pipeline.
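One concrete version of “keep image movement efficient” is to never let stale frames queue up: a depth-1, drop-oldest buffer ensures consumers always see the newest frame. The stdlib sketch below illustrates the policy; in practice the same effect comes from a ROS 2 QoS history depth of 1 or an appsink configured with drop=true.

```python
from collections import deque

class LatestFrame:
    """Depth-1 frame buffer: writers replace, readers get the newest
    frame. Stale frames are dropped instead of queued, which is the
    behavior usually wanted on live video paths."""
    def __init__(self):
        self._buf = deque(maxlen=1)   # maxlen=1 evicts the old entry
        self.dropped = 0

    def push(self, frame):
        if self._buf:
            self.dropped += 1         # count the frame we overwrote
        self._buf.append(frame)

    def latest(self):
        return self._buf[-1] if self._buf else None

if __name__ == "__main__":
    q = LatestFrame()
    for i in range(5):                # producer outruns the consumer
        q.push(i)
    print(q.latest(), q.dropped)      # -> 4 4
```

The key point is that backpressure is handled at the buffer, not by letting ROS 2 queues grow and latency climb.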
This is why RidgeRun often designs systems where GStreamer owns the video graph and ROS 2 owns application orchestration.
RidgeRun robotics integration
RidgeRun is especially useful when a robotics project needs more than a demo:
- stable camera capture under real workloads,
- synchronization between ROS 2 and multimedia,
- AI inference embedded in the media path,
- operator interfaces over RTSP or WebRTC,
- or a teleoperation stack that must remain responsive and maintainable.
Relevant RidgeRun references:
- RidgeRun Immersive Teleoperation
- GStreamer
- GstInference
- RidgeRun Multimedia Streaming Solutions: RTSP, WebRTC, RTP, and ONVIF Integration Tools
Key takeaways
- RUBIK Pi 3 is a credible ROS 2 edge node, not just an AI demo board.
- QIR-oriented Ubuntu documentation gives the platform a meaningful robotics software path.
- The best designs usually combine ROS 2 with a well-structured GStreamer layer.
- Camera, inference, control, and streaming should be designed as one system.
- RidgeRun can help when the project needs to become robust, low-latency, and product-oriented.
Frequently asked questions
- Does RUBIK Pi 3 support ROS 2?
- Yes. Current Ubuntu robotics documentation references ROS-oriented QIR SDK workflows and sample applications.
- Should I use ROS 2 or GStreamer for the camera path?
- Use GStreamer when pixel movement, transforms, overlays, encoding, or streaming matter most; use ROS 2 for system coordination, metadata, and control. Many systems use both.
- Can RUBIK Pi 3 be used for teleoperation?
- Yes. The board is well suited to camera-plus-control systems, especially when paired with ROS 2 and low-latency streaming architecture.
- What is the first ROS 2 milestone on this board?
- A good first milestone is to publish camera-derived data or inference outputs reliably while keeping CPU use, latency, and frame drops under control.
- When should I ask RidgeRun for help?
- Ask RidgeRun when your project depends on camera stability, low-latency video, ROS 2 integration, AI in the pipeline, or a clear path to productization.
Related pages
- Rubik Pi 3
- Rubik Pi 3/GStreamer
- Rubik Pi 3/AI and Computer Vision
- Rubik Pi 3/Use Cases
- RidgeRun Immersive Teleoperation