Usage and examples

Note
Before running the demo, make sure your board is properly set up by following the Get Started guide.

Starting the Demo Application

If this is the first time you are running the demo, start at step 1. Otherwise, you can skip steps 1 through 3 and start at step 4.

1. The first step is to get the demo source code. Do so by running:

git clone https://github.com/RidgeRun/smart-seek-360.git

The content of the repository should look as follows:

.
├── config
│   ├── agent
│   │   ├── api_mapping.json
│   │   └── prompt.txt
│   ├── analytics
│   │   └── configuration.json
│   ├── demo-nginx.conf
│   └── vst
│       ├── vst_config.json
│       └── vst_storage.json
├── docker-compose.yaml
└── README.md

2. Update Ingress Configuration

cd smart-seek-360
sudo cp config/demo-nginx.conf /opt/nvidia/jetson/services/ingress/config/
Note
If you have previously added any ingress configurations, check that no routes are duplicated across the active ingress config files. Duplicated routes will prevent the API Gateway from running properly, and dashboards and APIs may not be exposed correctly.
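
If you want to double-check, the following command lists any location routes that appear in more than one ingress config file (a quick sanity check, assuming all active configs live in the default directory):

grep -h "location " /opt/nvidia/jetson/services/ingress/config/*.conf | sort | uniq -d

If the command prints nothing, no routes are duplicated.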

3. Update VST Configuration

Note
You might want to back up the contents of /opt/nvidia/jetson/services/vst/config/ in case you want to return to the original configuration later.
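
For example, a simple way to make that backup (the destination directory is just a suggestion):

cp -r /opt/nvidia/jetson/services/vst/config ~/vst-config-backup
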
cd smart-seek-360
sudo cp config/vst/* /opt/nvidia/jetson/services/vst/config/


4. Launch Platform Services

sudo systemctl start jetson-redis
sudo systemctl start jetson-ingress
sudo systemctl start jetson-vst
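
You can verify that the three services started correctly with standard systemd commands; each unit should report active:

systemctl is-active jetson-redis jetson-ingress jetson-vst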

5. Add at least one stream to VST. You can follow the VST documentation for instructions on how to do it.

Note
The demo is designed to work with 360-degree spherical video. If you use a different format, the resulting video might look deformed.
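
Optionally, before adding the stream you can verify that your camera's RTSP URL is reachable and check its resolution with ffprobe (CAMERA_RTSP_URL is a placeholder for your own camera's URL):

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height -of default=noprint_wrappers=1 CAMERA_RTSP_URL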

6. Launch Smart Seek 360

cd smart-seek-360
docker compose up -d
Note
You can check that the demo is up by running docker ps. The output should look like this:

CONTAINER ID   IMAGE                                            COMMAND                  CREATED       STATUS         PORTS     NAMES
d94669b6bba3   nvcr.io/nvidia/jps/vst:v1.2.58_aarch64           "sh -c '/root/vst_re…"   3 hours ago   Up 3 hours               vst
1e7985f4f66f   nvcr.io/nvidia/jps/ialpha-ingress-arm64v8:0.10   "sh -c '/nginx.sh 2>…"   3 hours ago   Up 3 hours               ingress
923ce878d778   redisfab/redistimeseries:master-arm64v8-jammy    "docker-entrypoint.s…"   3 hours ago   Up 3 hours               redis
6d3de9e48fc9   ridgerun/ai-agent-service                        "ai-agent --system_p…"   5 hours ago   Up 4 seconds             agent-service
e339561ada13   ridgerun/ptz-service                             "ptz --host 127.0.0.…"   5 hours ago   Up 4 seconds             ptz-service
a488ddbdd71b   ridgerun/analytics-service                       "analytics --config-…"   5 hours ago   Up 4 seconds             analytics-service
529b11d69113   ridgerun/detection-service                       "detection --horizon…"   5 hours ago   Up 4 seconds             detection-service
The first time you start the demo application, the ai-agent-service will take several minutes to initialize.
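
You can follow the initialization progress by tailing the container logs:

docker logs -f agent-service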

Demo Usage

If you followed the previous steps to start the application, you should have a stream available at rtsp://BOARD_IP:5021/ptz_out. You can open it from the host computer with the following command:

vlc rtsp://BOARD_IP:5021/ptz_out

Just replace BOARD_IP with the actual IP address of the board running the demo.
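
If you are not sure of the board's IP address, running the following command on the board itself will print its addresses:

hostname -I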

The demo is controlled through the AI-Agent, which comes with a web interface that can be accessed at BOARD_IP:30080/agent.

The page looks as follows:

Through that interface, the application can be controlled using natural language. The two available commands are: move camera and find objects.
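
If you prefer to script the interaction instead of using the web page, the agent can also be reached through the API Gateway. The sketch below is only illustrative: the endpoint path and payload field are assumptions, so check the AI Agent service documentation for the actual API:

curl -X POST http://BOARD_IP:30080/agent/prompt -H "Content-Type: application/json" -d '{"prompt": "move the camera 30 degrees left"}'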

Move Camera

The available options are:

  • move the camera X degrees left: Moves the camera X degrees to the left.
  • move the camera X degrees right: Moves the camera X degrees to the right.

Find Objects

With this feature, you can instruct the application to look for any object in the input stream, and two actions will be performed:

1. The camera will point to that object once it is found.

2. A clip of the event will be recorded (disabled by default).

Note
You can start by typing "Find a dog". If there is a dog in the scene, the camera will point to it.
Note
Both the camera movement and clip recording can be disabled via analytics-service configuration or API. Take a look at Analytics Service for more information.
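
For example, assuming the compose file mounts this configuration into the analytics-service container, you can edit the file shipped with the repository and restart the service for the change to take effect (the available fields are documented in Analytics Service):

cd smart-seek-360
nano config/analytics/configuration.json
docker compose restart analytics-service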

Demo in Action

The following video shows how to start and run the demo.