Getting Started with ROS on Embedded Systems - Examples - V4L2 capture node
This page describes in detail the V4L2 ROS capture node developed by RidgeRun in C++.
ROS versions
Distribution: Melodic Morenia [1]
Build system: catkin [2]
Introduction
When using ROS with a capture subsystem, one would normally end up using existing ROS v4l2 nodes, or OpenCV [3], to capture from the camera. The problem starts when you just want to take the raw data and publish it. This V4L2 capture node does essentially that: it captures the raw data using the v4l2 API through a capture library loosely based on the Yavta application [4], then a node receives a callback for each frame, which can be raw or compressed (MJPEG), and publishes it.
Therefore, this node is actually composed of two elements:
- A capture library using v4l2 API in C++
- A ROS node that uses the capture library
Getting the code
Contact support@ridgerun.com to get the code or for any questions you may have.
Capture library
The capture library is a wrapper around the v4l2 API. It lets you create a capture object whose members you can configure, and then execute the desired commands in order to start capturing from a device. The library provides a callback for each captured frame so the user can process it as desired.
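The library's own headers and class names are not reproduced in this guide, but since it wraps the v4l2 API directly, the following minimal sketch illustrates the kind of raw V4L2 calls such a wrapper builds on (the device path, resolution and format mirror the node's defaults; this is illustrative only, not the library's implementation):

// Illustrative sketch of the raw V4L2 calls a capture wrapper builds on.
// Not the library's code; only standard Linux V4L2 API usage.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>

int main() {
  // Open the capture device (same device the library would be pointed at)
  int fd = open("/dev/video0", O_RDWR);
  if (fd < 0) { perror("open"); return 1; }

  // Query the device capabilities
  v4l2_capability cap{};
  if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
    perror("VIDIOC_QUERYCAP"); close(fd); return 1;
  }
  printf("Driver: %s, Card: %s\n",
         reinterpret_cast<const char*>(cap.driver),
         reinterpret_cast<const char*>(cap.card));

  // Request 640x360 MJPEG, matching the node's default configuration
  v4l2_format fmt{};
  fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  fmt.fmt.pix.width = 640;
  fmt.fmt.pix.height = 360;
  fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
  fmt.fmt.pix.field = V4L2_FIELD_ANY;
  if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
    perror("VIDIOC_S_FMT"); close(fd); return 1;
  }

  // Buffer queuing, streaming and the per-frame callback happen on top of this.
  close(fd);
  return 0;
}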
Compilation
1. Export the current directory to ease the installation process throughout this guide:
export DEVDIR=`pwd`
2. Compile using CMake and make:
# Create and move to build directory
cd $DEVDIR/lib
mkdir build && cd build
# Configure project for local compilation
cmake .. -DBUILD_TESTS=ON -DLIBDIR_INSTALL_PATH=`pwd`/usr/local/lib -DBINDIR_INSTALL_PATH=`pwd`/usr/local/bin -DINCLUDE_INSTALL_PATH=`pwd`/usr/local/include -DCMAKE_TARGET_INSTALL_PATH=`pwd`/usr/local/lib
# Compile
make install
Testing capture library
The compilation will have created a sample application that uses the library.
You can then run this application with a command like the following:
cd $DEVDIR/lib/build
LD_LIBRARY_PATH=./usr/local/lib/ ./usr/local/bin/v4l2-capture-sample /dev/video0 -c200000 -fMJPEG --cap-logs
The application supports the following options:
LD_LIBRARY_PATH=./usr/local/lib/ ./usr/local/bin/v4l2-capture-sample
Usage: ./usr/local/bin/v4l2-capture-sample [options] device
Supported options:
-c, --capture[=nframes]          Capture frames, use 0 or less for infinite capture
-f, --format format              Set the video format
-F, --file[=prefix]              Read/write frames from/to disk
-h, --help                       Show this help screen
-n, --nbufs n                    Set the number of video buffers, maximum is 32
-r, --get-control ctrl           Get control 'ctrl'
-s, --size WxH                   Set the frame size
-w, --set-control 'ctrl value'   Set control 'ctrl' to 'value'
--save-frames                    Indicates frame interval to save a frame
--cap-logs                       Enables capture logging
ROS capture node
The ROS capture node makes use of the capture library, most importantly of its callback feature, in order to publish the image to a topic, either as a raw image (sensor_msgs::Image [5]) or as a compressed image (sensor_msgs::CompressedImage [6]).
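The actual node sources are obtained from RidgeRun support, so the snippet below is only a hedged sketch of how a per-frame callback can be turned into a ROS publish. The callback signature and the wiring to the capture library are assumptions; the message types and topic layout follow this guide:

// Sketch: publishing an MJPEG frame received from a capture callback.
// The capture-library side is only hinted at; topic names follow config.yaml,
// and the /rr_capture namespace is assumed to come from the launch file.
#include <ros/ros.h>
#include <sensor_msgs/CompressedImage.h>
#include <cstdint>

static ros::Publisher g_pub_compressed;

// Assumed callback signature: one MJPEG frame per call
static void onFrame(const uint8_t *data, size_t size) {
  sensor_msgs::CompressedImage msg;
  msg.header.stamp = ros::Time::now();
  msg.format = "jpeg";  // each MJPEG frame is a standalone JPEG image
  msg.data.assign(data, data + size);
  g_pub_compressed.publish(msg);
}

int main(int argc, char **argv) {
  ros::init(argc, argv, "rr_capture_sketch");
  ros::NodeHandle nh;

  // Queue size 5, matching ros_topic_queue_sizes in config.yaml
  g_pub_compressed =
      nh.advertise<sensor_msgs::CompressedImage>("cam0/image/compressed", 5);

  // Here the capture library would be started with onFrame registered as its
  // per-frame callback; ros::spin() keeps the node alive in the meantime.
  ros::spin();
  return 0;
}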
Compilation of ROS node
To compile the capture node, assuming ROS is already installed on your system (Getting Started), the capture library is installed, and the directory is exported in the DEVDIR variable, do:
cd $DEVDIR/ros
export CMAKE_PREFIX_PATH=${CMAKE_PREFIX_PATH}:$DEVDIR/lib/build/usr/local/lib/cmake/v4l2-capture-project
Then compile:
catkin_make -DCMAKE_CXX_FLAGS="-I $DEVDIR/lib/build/usr/local/include"
Now you need to source the packages:
source $DEVDIR/ros/devel/setup.sh
Configuration of ROS node
You will find numerous configuration variables inside the $DEVDIR/ros/configuration/config.yaml file:
1. The camera namespace, an arbitrary name used as a prefix for the camera topics
camera_namespace : "cam0"
2. The ROS topic names. You can modify both of these, but if you plan to capture compressed images, see the note below
ros_topic_names: { image_topic: "image", image_topic_compressed: "image/compressed", }
3. ROS topic queue sizes
ros_topic_queue_sizes: { image_topic: 5, image_topic_compressed: 5, }
4. Video device to capture from
device_name : "/dev/video0"
5. Format to capture from the camera, also used by the node to determine whether the capture format is compressed or not.
Formats supported at the moment: "UYVY" and "MJPEG"; feel free to modify the node and capture library as needed.
v4l2_format_string : "MJPEG"
6. Dimensions for capture
width : 640
height : 360
NOTE: The node doesn't actually use any image transport provided by ROS; it simply publishes sensor_msgs::Image and sensor_msgs::CompressedImage messages. To stay consistent with the naming convention ROS uses for image transport, ros_topic_names holds both a raw and a compressed topic, and the one used depends on the format specified in v4l2_format_string for the capture. So if you want to change this variable, leave the /compressed at the end.
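Assuming capture.launch loads config.yaml onto the parameter server (the usual rosparam pattern), these keys can be read with standard roscpp calls. The sketch below is illustrative only, including the choice of private namespace; the real node may structure this differently:

// Sketch: reading the config.yaml keys from the parameter server,
// assuming the launch file loads them into this node's private namespace.
#include <ros/ros.h>
#include <string>

int main(int argc, char **argv) {
  ros::init(argc, argv, "rr_capture_config_sketch");
  ros::NodeHandle nh("~");  // adjust if the launch file loads the params globally

  std::string ns, device, format, raw_topic, compressed_topic;
  int width = 0, height = 0;

  nh.getParam("camera_namespace", ns);
  nh.getParam("device_name", device);
  nh.getParam("v4l2_format_string", format);
  nh.getParam("ros_topic_names/image_topic", raw_topic);
  nh.getParam("ros_topic_names/image_topic_compressed", compressed_topic);
  nh.getParam("width", width);
  nh.getParam("height", height);

  // The format string decides which topic is used:
  // MJPEG -> compressed topic, UYVY -> raw topic.
  const bool compressed = (format == "MJPEG");
  ROS_INFO("Publishing on %s/%s", ns.c_str(),
           compressed ? compressed_topic.c_str() : raw_topic.c_str());
  return 0;
}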
Testing ROS capture node
To launch the node:
roslaunch rr_cap capture.launch
On another terminal:
To view RAW uncompressed formats, you can use:
rosrun image_view image_view image:=/rr_capture/cam0/image
To view compressed formats like MJPEG:
rosrun image_view image_view image:=/rr_capture/cam0/image _image_transport:=compressed
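If you prefer a programmatic sanity check instead of image_view, a minimal subscriber like the following (a sketch, assuming the default topic names above) prints the size of each compressed frame it receives:

// Minimal subscriber sanity check: prints format and size of each frame
// received on the compressed topic (default names from config.yaml/launch).
#include <ros/ros.h>
#include <sensor_msgs/CompressedImage.h>

static void imageCb(const sensor_msgs::CompressedImage::ConstPtr &msg) {
  ROS_INFO("Received %s frame, %zu bytes", msg->format.c_str(), msg->data.size());
}

int main(int argc, char **argv) {
  ros::init(argc, argv, "capture_check");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/rr_capture/cam0/image/compressed", 5, imageCb);
  ros::spin();
  return 0;
}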
References