Semantic segmentation demo
Requirements
- A USB camera connected to the Jacinto board.
Run the semantic segmentation demo example
- Navigate to the python apps directory:
cd /opt/edge_ai_apps/apps_python
- Create a directory to store the output files:
mkdir out
- Select the right camera device:
To identify the camera device node corresponding to the USB camera or CSI camera being used, run the following command:
ls -l /dev/v4l/by-path/
The above command will output something like the following:
lrwxrwxrwx 1 root root 12 Jun 1 19:28 platform-xhci-hcd.2.auto-usb-0:1.2:1.0-video-index0 -> ../../video0
lrwxrwxrwx 1 root root 12 Jun 1 19:28 platform-xhci-hcd.2.auto-usb-0:1.2:1.0-video-index1 -> ../../video1
In this case, the USB camera driver is exposed through a symbolic link to /dev/video0 (try both symbolic links if one of them does not work).
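The same lookup can be scripted if needed. The following is a minimal Python sketch (illustrative only, not part of the demo) that resolves the by-path symbolic links to their /dev/video* device nodes:
# Minimal sketch: resolve the /dev/v4l/by-path symbolic links to their
# /dev/video* device nodes, mirroring the ls -l step above.
from pathlib import Path

for link in sorted(Path("/dev/v4l/by-path").iterdir()):
    # resolve() follows the symlink, e.g. to /dev/video0
    print(f"{link.name} -> {link.resolve()}")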
- Run the demo:
./semantic_segmentation.py --device /dev/video0 -m ../models/segmentation/TVM-SS-569-fpnlite-aspp-regnetx400mf-ade20k32-384x384 -o ./out/sem_%d.jpg
Note: The %d above will be replaced by sequential numbers starting from 0 (sem_0.jpg, sem_1.jpg, and so on).
IMPORTANT: If the GStreamer pipeline does not open, your camera might not support 30 fps. If this is the case, please refer to the Change the default framerate (optional) section.
- The demo will start running and the terminal will display a live curses report.
- Since this is a continuous live feed from the camera, manually stop the pipeline by pressing Ctrl+C in the command line once you are happy with the number of frames captured.
- After the pipeline is stopped, the command line should display something like the following:
APP: Init ... !!!
MEM: Init ... !!!
MEM: Initialized DMA HEAP (fd=4) !!!
MEM: Init ... Done !!!
IPC: Init ... !!!
IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
21238.205484 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
21238.205522 s: VX_ZONE_INIT:Enabled
21238.205530 s: VX_ZONE_ERROR:Enabled
21238.205536 s: VX_ZONE_WARNING:Enabled
21238.206050 s: VX_ZONE_INIT:[tivxInit:71] Initialization Done !!!
21238.206251 s: VX_ZONE_INIT:[tivxHostInit:48] Initialization Done for HOST !!!
[UTILS] gst_src_cmd = v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=60/1 ! jpegdec ! videoconvert ! appsink drop=true max-buffers=2
[UTILS] Gstreamer source is opened!
[UTILS] gst_sink_cmd = appsrc format=GST_FORMAT_TIME block=true ! videoscale ! jpegenc ! multifilesink location=./out/sem_%d.jpg
[UTILS] Gstreamer sink is opened!
[UTILS] Starting pipeline thread
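If the pipeline does not open, the capture half can be tested in isolation. The following minimal Python sketch (illustrative only) runs the same source pipeline reported in the log above into a fakesink, assuming gst-launch-1.0 is installed on the board; adjust the device and framerate as needed:
# Minimal sketch: run the capture portion of the demo pipeline into a
# fakesink to verify the camera and JPEG decode path without inference.
# Assumes gst-launch-1.0 is available; adjust device/framerate as needed.
import subprocess

pipeline = ("v4l2src device=/dev/video0 io-mode=2 ! "
            "image/jpeg, width=1280, height=720, framerate=30/1 ! "
            "jpegdec ! videoconvert ! fakesink")
subprocess.run("gst-launch-1.0 " + pipeline, shell=True, check=True)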
- Navigate to the out directory:
cd out
There should be several images named sem_<number>.jpg produced by the semantic segmentation model.
- Figure 2 shows an example of what these images should look like.
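The captured frames can also be enumerated programmatically as a quick sanity check. Here is a minimal Python sketch (illustrative only) that lists the sem_<number>.jpg files in numeric order:
# Minimal sketch: list the frames written by multifilesink to ./out in
# numeric order and report how many were captured.
import re
from pathlib import Path

frames = sorted(Path("./out").glob("sem_*.jpg"),
                key=lambda p: int(re.search(r"\d+", p.name).group()))
print(f"{len(frames)} frames captured")
for f in frames:
    print(f.name)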
There are multiple input and output configurations available. In this demo, a live camera input and an image file output were specified.
For more information about configuration arguments please refer to the Configuration arguments section below.
Configuration arguments
-h, --help            show this help message and exit
-m MODEL, --model MODEL
                      Path to model directory (Required)
                      ex: ./image_classification.py --model ../models/classification/$(model_dir)
-i INPUT, --input INPUT
                      Source to gst pipeline camera or file
                      ex: --input v4l2 - for camera
                          --input ./images/img_%02d.jpg - for images
                          printf style formating will be used to get file names
                          --input ./video/in.avi - for video input
                      default: v4l2
-o OUTPUT, --output OUTPUT
                      Set gst pipeline output display or file
                      ex: --output kmssink - for display
                          --output ./output/out_%02d.jpg - for images
                          --output ./output/out.avi - for video output
                      default: kmssink
-d DEVICE, --device DEVICE
                      Device name for camera input
                      default: /dev/video2
-c CONNECTOR, --connector CONNECTOR
                      Connector id to select output display
                      default: 39
-u INDEX, --index INDEX
                      Start index for multiple file input output
                      default: 0
-f FPS, --fps FPS     Framerate of gstreamer pipeline for image input
                      default: 1 for display and video output
                               12 for image output
-n, --no-curses       Disable curses report
                      default: Disabled
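For example, based on the arguments above, the same demo should be able to read a prerecorded video and render the result to the display instead of writing image files (the input path below is illustrative):
./semantic_segmentation.py -i ./video/in.avi -o kmssink -m ../models/segmentation/TVM-SS-569-fpnlite-aspp-regnetx400mf-ade20k32-384x384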
Change the default framerate (optional)
By default, the GStreamer pipeline runs at a 30/1 framerate. If the camera being used does not support this framerate, or if the framerate needs to be changed, follow these steps:
1. Navigate to the python apps directory:
cd /opt/edge_ai_apps/apps_python
2. Open the utils.py file with any text editor and look for these lines:
if (source == 'camera'):
    source_cmd = 'v4l2src ' + \
        ('device=' + args.device if args.device else '')
    source_cmd = source_cmd + ' io-mode=2 ! ' + \
        'image/jpeg, width=1280, height=720, framerate=30/1 ! ' + \
        'jpegdec !'
3. Add custom framerate support by modifying the code lines like so:
if (source == 'camera'):
    source_cmd = 'v4l2src ' + \
        ('device=' + args.device if args.device else '')
    source_cmd = source_cmd + ' io-mode=2 ! ' + \
        'image/jpeg, width=1280, height=720, framerate=' + str(args.fps) + '/1 ! ' + \
        'jpegdec !'
IMPORTANT: Since we are adding an option to change the default framerate, the demo has to be run with the -f option:
./semantic_segmentation.py --device /dev/video0 -m ../models/segmentation/TVM-SS-569-fpnlite-aspp-regnetx400mf-ade20k32-384x384 -f 60 -o ./out/sem_%d.jpg
In the above example, the demo was run with a framerate of 60 fps.
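To check which framerates the camera actually supports before choosing a value for -f, the formats advertised by the V4L2 driver can be queried. Below is a minimal Python sketch, assuming the standard v4l2-ctl utility (from the v4l-utils package) is installed on the board:
# Minimal sketch: ask the V4L2 driver which pixel formats, resolutions
# and framerates the camera advertises. Assumes v4l2-ctl is installed;
# adjust the device path to match your camera.
import subprocess

subprocess.run(["v4l2-ctl", "--device", "/dev/video0", "--list-formats-ext"])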