RidgeRun Linux Camera Drivers - Examples - RB5






RB5 Capture SubSystem

The Qualcomm Robotics RB5 Development Kit is a platform designed for robotics development, featuring the Qualcomm QRB5165 processor. Customized for a wide range of robotics applications, this kit supports extensive prototyping with its 96Boards open hardware specification and mezzanine-board expansions. With 15 TOPS of compute power, advanced AI capabilities, and support for multiple cameras, it's used for creating autonomous robots and drones.

The next figure shows the basic hardware modules of the RB5 capture subsystem and their interconnections.

Figure: RB5 capture subsystem components and their interconnections


  • Camera Sensor

Communicates with the mainboard over MIPI CSI-2 protocol interfaces using C-PHY or D-PHY physical layers.

  • Image Front-End (IFE)

The ISP contains 2 full IFEs for processing sensors with input resolutions of up to 25 MP and 5 IFE_lite blocks for sensors of up to 2 MP input resolution. These modules provide Bayer processing for video/preview.

  • Bayer Processing Segment (BPS)

Processes the image data from the IFE, handling Bayer processing and downscaling. It also supports multiple formats, allowing a flexible hybrid of hardware and software processing.

  • Image Processing Engine (IPE)

Further processes the image data, performing, for example, image correction and adjustment, noise reduction, and temporal filtering.


You can find more information on the hardware and software components of the capture subsystem in the following wiki.
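
If you want a quick look at the capture devices the kernel exposes on the board, the following commands are a minimal sketch; they assume the v4l-utils package is available and that the media controller node shows up as /dev/media0, which may differ depending on your BSP:

sudo apt-get install v4l-utils
v4l2-ctl --list-devices
media-ctl -p -d /dev/media0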

OpenCV Examples

The following examples show some basic Python code using OpenCV to capture from either a USB camera or a MIPI CSI camera. To use the code, make sure you have OpenCV installed on your RB5 board; you can install it with this command:

sudo apt-get install python3-opencv

After the installation completes, simply run the following command (camera_capture.py contains the code shown below):

python3 camera_capture.py

USB camera

The following Python code shows a basic example using OpenCV to capture from a USB camera:

import cv2


CAMERA_INDEX = 0
TOTAL_NUM_FRAMES = 100
FRAMERATE = 30.0


def is_camera_available(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        return False
    cap.release()
    return True


def main():

    if is_camera_available(CAMERA_INDEX):
        print(f"Camera {CAMERA_INDEX} is available.")
    else:
        print(f"Camera {CAMERA_INDEX} is not available.")
        exit()
    
    cap = cv2.VideoCapture(CAMERA_INDEX)
    
    if not cap.isOpened():
        print("Error: Could not open video stream.")
        exit()
    
    print("Camera opened successfully.")
    
    # Query the frame size reported by the camera
    frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Create an MP4 writer that matches the camera resolution and framerate
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter('output.mp4', fourcc, FRAMERATE, (frame_width, frame_height))

    num_frames = 0

    # Capture and write frames until the requested frame count is reached
    while True:
        ret, frame = cap.read()
        if not ret:
            print("Error: Failed to capture image.")
            break

        out.write(frame)
        num_frames += 1

        print(f"Recording: {num_frames} / {TOTAL_NUM_FRAMES}", end='\r')

        if num_frames >= TOTAL_NUM_FRAMES:
            print("\nCapture duration reached, stopping recording.")
            break
    
    # Release the camera and the VideoWriter
    cap.release()
    out.release()


if __name__ == '__main__':
    main()
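
To quickly check the recording, you can reopen output.mp4 with OpenCV and print the frame count it reports. This is only a sanity-check sketch; the exact count may vary slightly depending on the container and the OpenCV backend in use:

import cv2

cap = cv2.VideoCapture('output.mp4')
print(f"Recorded frames: {int(cap.get(cv2.CAP_PROP_FRAME_COUNT))}")
cap.release()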

MIPI CSI camera

The following Python code shows a basic example using OpenCV to capture from a MIPI CSI camera:

import cv2


TOTAL_NUM_FRAMES = 100
FRAMERATE = 30.0
PIPELINE = "qtiqmmfsrc ! 'video/x-raw(memory:GBM),width=(int)1920,height=(int)1080,format=(string)NV12,framerate=(fraction)30/1' ! qtivtransform ! 'video/x-raw,format=(string)BGRx' ! appsink"


def is_camera_available():
    cap = cv2.VideoCapture(PIPELINE)
    if not cap.isOpened():
        return False
    cap.release()
    return True


def main():

    if is_camera_available():
        print("A camera is available.")
    else:
        print("No camera available.")
        exit()
    
    cap = cv2.VideoCapture(PIPELINE)
    
    if not cap.isOpened():
        print("Error: Could not open video stream.")
        exit()
    
    print("Camera opened successfully.")
    
    # Query the frame size negotiated by the GStreamer pipeline
    frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Create an MP4 writer that matches the capture resolution and framerate
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter('output.mp4', fourcc, FRAMERATE, (frame_width, frame_height))

    num_frames = 0

    # Capture and write frames until the requested frame count is reached
    while True:
        ret, frame = cap.read()
        if not ret:
            print("Error: Failed to capture image.")
            break

        out.write(frame)
        num_frames += 1

        print(f"Recording: {num_frames} / {TOTAL_NUM_FRAMES}", end='\r')

        if num_frames >= TOTAL_NUM_FRAMES:
            print("\nCapture duration reached, stopping recording.")
            break
    
    # Release the camera and the VideoWriter
    cap.release()
    out.release()


if __name__ == '__main__':
    main()
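
If OpenCV does not automatically recognize the pipeline string, it can help to request the GStreamer backend explicitly when opening the capture. This is an optional variation of the example above:

cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)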
Note

For this example to work, GStreamer must be enabled in OpenCV. You can verify this by running the following command:

python3 -c 'import cv2; print(cv2.getBuildInformation())'

In the output, the Video I/O section should indicate GStreamer: YES.

  Video I/O:
    DC1394:                      YES (2.2.6)
    FFMPEG:                      YES
      avcodec:                   YES (58.134.100)
      avformat:                  YES (58.76.100)
      avutil:                    YES (56.70.100)
      swscale:                   YES (5.9.100)
      avresample:                NO
    GStreamer:                   YES (1.19.90)
    PvAPI:                       NO
    v4l/v4l2:                    YES (linux/videodev2.h)
    gPhoto2:                     YES

GStreamer Examples

The following GStreamer pipelines demonstrate how to capture video from a camera attached to the board. Please note that these pipelines require a MIPI CSI camera to function properly.

1080p@30 capture and display

gst-launch-1.0 qtiqmmfsrc camera=0 ! "video/x-raw(memory:GBM),format=NV12,width=1920,height=1080,framerate=30/1" ! qtivtransform ! autovideosink sync=true

720p@60 capture and display

gst-launch-1.0 qtiqmmfsrc camera=0 ! "video/x-raw(memory:GBM),format=NV12,width=1280,height=720,framerate=60/1" ! qtivtransform ! autovideosink sync=true

1080p@30 capture and h264 encoding

gst-launch-1.0 -e -v qtiqmmfsrc camera=0 ! "video/x-raw(memory:GBM),format=NV12,width=1920,height=1080,framerate=30/1,profile=high,level=(string)5.1" ! qtic2venc target-bitrate=8000000 ! h264parse ! queue ! mp4mux ! filesink location="video_capture_test_1080p.mp4"

720p@60 capture and h264 encoding

gst-launch-1.0 -e -v qtiqmmfsrc camera=0 ! "video/x-raw(memory:GBM),format=NV12,width=1280,height=720,framerate=60/1,profile=high,level=(string)5.1" ! qtic2venc target-bitrate=8000000 ! h264parse ! queue ! mp4mux ! filesink location="video_capture_test_720p.mp4"
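
To play back one of the recordings directly on the board, a sketch like the following should work, reusing the same hardware decoder and video sink used in the RTSP example later on this page (adjust the file name to match your capture):

gst-launch-1.0 filesrc location=video_capture_test_1080p.mp4 ! qtdemux ! queue ! h264parse ! qtivdec ! xvimagesink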
Note
Check the RB5 dedicated wiki for more information and examples.

RidgeRun Product Use Cases

RidgeRun provides a suite of tools to enhance your multimedia projects using GStreamer. Below are some practical examples showcasing the capabilities of GstRtspSink and GstSEIMetadata. These pipelines are designed to stream video with embedded metadata over RTSP, offering an overview of how you can leverage these tools in your applications. To obtain these or any other product evaluations free of charge, please feel free to contact us.

Use Case Examples

GstRtspSink

  • Surveillance Systems: This product can be used to stream video to multiple monitoring stations simultaneously.
  • Live Broadcasting: For live events like sports, concerts, or webinars, GstRtspSink can stream the live video feed to multiple viewers.
  • Remote Monitoring in Industrial Applications: GstRtspSink enables streaming of video feeds from cameras monitoring the production line or remote sites.
  • Video Conferencing: In video conferencing systems, GstRtspSink can be used to stream the video feed from one participant to others.
  • Smart City Infrastructure: GstRtspSink can stream video feeds from cameras to control centers, allowing operators to monitor multiple locations effectively.

GstSEIMetadata

  • Drone Surveillance: The GstSEIMetadata element allows embedding the location data directly into the H264/H265 video frames.
  • Sports Broadcasting: In live sports broadcasts, data such as player statistics, game scores, or timing information can be embedded directly into the video stream using GstSEIMetadata.
  • Medical Imaging: The GstSEIMetadata element enables embedding data such as patient ID or procedure details into the video, ensuring that it stays synchronized with the visual information.
  • Augmented Reality (AR) Applications: In AR scenarios, additional information like object coordinates, scene descriptions, or environmental data can be embedded into the video stream using GstSEIMetadata.
  • Video Analytics: In security or retail environments where video analytics are performed, the GstSEIMetadata element can be used to inject data such as object detection results or motion tracking information into the video stream.


Note
If you would like to try these examples, feel free to contact us and request an evaluation version of the products. You will receive a .tar file with the evaluation version, along with additional instructions on how to install it.

To install the test evaluation, make sure to follow this section to install GstRtspSink; to install GstSEIMetadata, follow the same steps with the corresponding .tar file. The following example streams video from a MIPI CSI camera using the available hardware accelerators:

gst-launch-1.0 qtiqmmfsrc ! "video/x-raw(memory:GBM),format=NV12,width=1920,height=1080,framerate=30/1" ! queue leaky=2 max-size-buffers=5 ! qtic2venc control-rate=3 target-bitrate=3000000 idr-interval=15 ! "video/x-h264,mapping=/stream1" ! h264parse config-interval=-1 ! seimetatimestamp ! seiinject ! rtspsink service=8000

To receive the stream and display the video, use the following pipeline (note that this command must run on the board, since the pipeline connects to localhost):

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8000/stream1 ! queue ! rtph264depay ! video/x-h264 ! h264parse ! queue max-size-buffers=1 ! h264parse ! qtivdec ! xvimagesink
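
If you prefer to watch the stream from another machine on the same network, a sketch like the following should work on a host PC with the standard GStreamer plugins (gst-plugins-good and gst-libav) installed; replace <BOARD_IP> with the address of your RB5 board:

gst-launch-1.0 rtspsrc location=rtsp://<BOARD_IP>:8000/stream1 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink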

To extract and analyze the embedded SEI metadata from the RTSP stream, use the following pipeline:

GST_DEBUG=*seiextract*:MEMDUMP gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8000/stream1 ! queue ! rtph264depay ! video/x-h264 ! h264parse ! seiextract ! fakesink

You should see output messages displaying the transmitted metadata. An example of these messages is shown below:

0:00:03.042074730 14884   0x55a9a900c0 MEMDUMP           seiextract gstseiextract.c:299:gst_sei_extract_extract_h264_data:<seiextract0> ---------------------------------------------------------------------------
0:00:03.042123273 14884   0x55a9a900c0 MEMDUMP           seiextract gstseiextract.c:299:gst_sei_extract_extract_h264_data:<seiextract0> The extracted data is: 
0:00:03.042169367 14884   0x55a9a900c0 MEMDUMP           seiextract gstseiextract.c:299:gst_sei_extract_extract_h264_data:<seiextract0> 00000000: 55 84 e5 b2 2b 16 00 00                          U...+...        
0:00:03.042204056 14884   0x55a9a900c0 MEMDUMP           seiextract gstseiextract.c:299:gst_sei_extract_extract_h264_data:<seiextract0> ---------------------------------------------------------------------------
Note

Discover more practical examples of using GstRtspSink on this page.

For additional examples of how to utilize GstSEIMetadata, check out the details on this page.


All Our Products

Explore our full range of products in the RidgeRun Store. The store provides a list of all our products, pricing, and purchasing options for all RidgeRun solutions.

Download our Product Catalog for a more detailed overview of all available products.