NVIDIA GTC 2024 360 VR demo installation

This demo has been containerized for easy replication. The steps below show how to reproduce it by loading a prebuilt container.

Installation

For the installation, you can contact us to get a Docker container for evaluation purposes, or you can install the repository on your platform yourself. Once you have the container, follow the steps below to set up and execute the demo.

Docker installation

The provided Docker image can be installed as follows.

For more details about the container used for the demo, see the RidgeRun Wiki Guide on Docker Images with Demos and Products Evaluation Versions for a general overview.

Load the image

This demo has been tested with JetPack 5.x and DeepStream 6.2. RidgeRun will provide you with an image containing the evaluation version of this product, valid for a period of 2 weeks. If you want to get the complete version, please follow the RidgeRun Store link.

The image is provided as a compressed file with the following name:

360-video-demo.tar.gz

Load the image as follows:

docker load -i 360-video-demo.tar.gz

You can check that the image was loaded successfully as follows:

docker images

The expected output is similar to the following:

REPOSITORY                TAG               IMAGE ID       CREATED         SIZE
360-video-demo            latest            c9c30cfdfb34   2 seconds ago   20.8GB
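
If many images are present on the board, you can narrow the listing to the demo image only. This is a plain docker images filter by repository name and assumes the image loaded above is named 360-video-demo:

docker images 360-video-demo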

Create the container

Now you can create the container for your DeepStream version. The example below uses the already tested version (deepstream-6.2); change it to match your own installation.

DEEP_PATH=/opt/nvidia/deepstream/deepstream-6.2

DEEP_LIB_PATH=/opt/nvidia/deepstream/deepstream-6.2/lib/libnvdsgst_customhelper.so
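
If you are not sure which DeepStream version is installed on the board, you can list the default install location and adjust the two paths above accordingly (a quick check, assuming the stock /opt/nvidia/deepstream layout):

ls /opt/nvidia/deepstream/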

Now create the container with the following command:

docker run --name 360-demo --net=host --runtime nvidia --ipc=host -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE --privileged -v /dev:/dev -v /proc:/writable_proc --volume $DEEP_PATH:$DEEP_PATH -w /360-video-demo -v /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/:/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/ -v $DEEP_LIB_PATH:$DEEP_LIB_PATH -e DISPLAY=$DISPLAY --device /dev/video0:/dev/video0 --device /dev/video1:/dev/video1 -e LD_LIBRARY_PATH=$DEEP_PATH/lib -it 360-video-demo /RR_EVALS/demo.sh

After the container is created, a bash shell will open inside the container in the 360-video-demo folder, as follows:

(rr-eval)971606c46b2f:/360-video-demo#
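
Since the container shares the host's X11 socket and DISPLAY variable, display-related errors can usually be avoided by allowing local X connections on the host. This is a host-side step (xhost is part of the standard X11 utilities), not something provided by the demo:

xhost +local: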

Execute the demo

Before executing the demo, make sure that you are inside the 360-video-demo folder in the container's shell. You can check it with the following command:

pwd

The expected output is:

/360-video-demo

With "ls" command, you can find an executable called 360-video-demo.sh.

Once you are inside the evaluation container, you can run the demo as follows:

./360-video-demo.sh

Then you can follow the URL printed in the console and test the demo.

 * Serving Flask app 'server'
 * Debug mode: on
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5550
 * Running on http://192.168.0.154:5550
INFO:werkzeug:Press CTRL+C to quit
INFO:werkzeug: * Restarting with stat

From the output, copy the URL (http://192.168.0.154:5550 in this example) into any web browser.
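
If the page does not load from another device, you can first confirm that the server answers locally on the board (curl is typically available on JetPack; the port matches the Flask output above):

curl -I http://127.0.0.1:5550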

Stopping and starting the container

If you want to stop the demo, you can do so by running the following command:

exit

Whenever you want to access the already created container again, you can do so by running the following command:

docker start 360-demo

Afterward, you can access the container as follows.

docker attach 360-demo

Now you can execute the demo again as described in the Execute the demo section.
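
If you are unsure whether the container exists or is already running, you can list it by name before starting or attaching (the name 360-demo matches the --name used when the container was created):

docker ps -a --filter name=360-demo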

Generative AI setup

Demo version with Docker containers

First, you will need the NVIDIA container runtime on your Jetson platform. You can follow the container setup details in the section above.

In this step, make sure you have enough free storage, or follow the steps to set up the environment with an additional NVMe drive.
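
Before downloading the repository and containers, it can help to confirm that the drive has enough free space (this assumes the NVMe is mounted at /orin_ssd, as in the steps below):

df -h /orin_ssd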

Change to the following path:

cd /orin_ssd

Copy or clone the demo repository into the new path:

git clone https://gitlab.ridgerun.com/open/nvidia-gtc24-360-vr-demo.git

Execute the following commands to download and set up both containers (the container provided with the RidgeRun products and the container for the Generative AI agent):

cd /orin_ssd/nvidia-gtc24-360-vr-demo/docker/
sudo docker compose --env-file default.env up
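
If you prefer to keep the terminal free, the same compose setup can also be started in the background and inspected afterwards (these are standard Docker Compose options, not demo-specific flags):

# Start both containers in the background
sudo docker compose --env-file default.env up -d
# Follow their combined logs
sudo docker compose logs -f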

Enjoy the demo.

Complete version setup

Here you will find how to set up the demo on your own hardware. Please note that you will need the purchased versions of GstCUDA, GstStitcher, LibObjectRedaction, and GstProjector.

  • Download jetson-containers:
git clone https://github.com/dusty-nv/jetson-containers.git
  • Download the demo repository:
cd jetson-containers/data
git clone https://gitlab.ridgerun.com/open/nvidia-gtc24-360-vr-demo.git
cd nvidia-gtc24-360-vr-demo
git checkout develop
  • In one terminal, run the demo:
cd /orin_ssd/jetson-containers/data/nvidia-gtc24-360-vr-demo
GST_DEBUG=*perf*:6 python3 __init__.py
  • In a second terminal, run the container with the LLaVA model. This command will mount a volume on the data directory inside jetson-containers:
cd /orin_ssd/jetson-containers
./run.sh $(./autotag llava)
  • Inside the jetson-containers repository, go to the data directory and clone the Gen AI agent repository:
cd data
git clone https://gitlab.ridgerun.com/ridgerun/rnd/gtc-genai.git
cd gtc-genai
git checkout feature/add-basic-agent
  • Run the Gen AI agent (the commented lines are alternative model checkpoints; a snapshot-path check is sketched after this list):
#python3 gtc.py --model-path liuhaotian/llava-v1.5-7b --image-file [PATH WHERE THE DEMO IS SAVING THE SNAPSHOT]
#python3 gtc.py --model-path liuhaotian/llava-v1.5-7b --image-file /data/nvidia-gtc24-360-vr-demo/static/img/snapshot.jpg
python3 gtc.py --model-path SaffalPoosh/llava-llama-2-7B-merged --image-file /data/nvidia-gtc24-360-vr-demo/static/img/snapshot.jpg
#python3 gtc.py --model-path liuhaotian/llava-v1.6-vicuna-7b --image-file /data/nvidia-gtc24-360-vr-demo/static/img/snapshot.jpg
  • You can now open the demo webpage at http://[ORIN IP]:5550 and control the demo, including the Gen AI agent.
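
Since the agent reads the snapshot written by the demo, a quick host-side check that the file is actually being produced can save a failed start (the path assumes the jetson-containers checkout under /orin_ssd, matching the commands above):

ls -lh /orin_ssd/jetson-containers/data/nvidia-gtc24-360-vr-demo/static/img/snapshot.jpg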

If the board is already configured

  • In one terminal, run the demo:
cd /orin_ssd/jetson-containers/data/nvidia-gtc24-360-vr-demo
GST_DEBUG=*perf*:6 python3 __init__.py
  • In a second terminal, run the Gen AI service:
cd /orin_ssd/jetson-containers/
./run.sh $(./autotag llava) python3 /data/gtc-genai/gtc.py --model-path SaffalPoosh/llava-llama-2-7B-merged --image-file /data/nvidia-gtc24-360-vr-demo/static/img/snapshot.jpg

Debugging tips

  • We assume the demo is saving snapshots at /data/nvidia-gtc24-360-vr-demo/static/img/snapshot.jpg. Double-check that the path is correct; otherwise, the Gen AI agent won't work properly.
  • If the demo crashes, restart it and restart the Argus daemon (a daemon status check is sketched after this list):
sudo systemctl restart nvargus-daemon
  • If the issue persists, restart the board
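
To narrow down camera issues, the daemon's state and recent log output can be inspected with standard systemd tooling (nothing demo-specific is assumed here):

# Check whether the Argus daemon is running and healthy
systemctl status nvargus-daemon
# Review its recent log messages for camera errors
sudo journalctl -u nvargus-daemon --since "10 minutes ago"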


