<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Head|previous=Examples/Using Gstd|next=Performance|metakeywords=Image Stitching, CUDA, Stitcher, Panorama}}
</noinclude>


{{DISPLAYTITLE:Stitcher element: a few more example pipelines|noerror}}

Problems running the pipelines shown on this page? Please see our GStreamer Debugging guide for help.

This page showcases basic usage examples of the cudastitcher element. Most of these pipelines can be generated automatically with the Pipeline Generator Tool.

The homography list is stored in the homographies.json file and contains N-1 homographies for N images. For more information on how to set these values, visit the Controlling the Stitcher wiki page.

For all of the examples below, assume that there are three inputs and that the homographies file looks like this:

<syntaxhighlight lang=json>
{
    "homographies":[
        {
            "images":{
                "target":0,
                "reference":1
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": -510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        },
        {
            "images":{
                "target":2,
                "reference":1
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}
</syntaxhighlight>
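
The backtick expression that appears in every pipeline below only strips whitespace from homographies.json so that the JSON can be passed as a single property value. As a small equivalent sketch (plain POSIX shell, no assumptions beyond the file above), the compaction can also be done once and reused:

<syntaxhighlight lang=bash>
# Strip newlines, tabs and spaces once, then reuse the compacted JSON
# in any pipeline as homography-list="$HOMOGRAPHIES".
HOMOGRAPHIES="$(tr -d '\n\t ' < homographies.json)"
echo "$HOMOGRAPHIES"
</syntaxhighlight>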


The output of the stitcher can be displayed, saved to a file, streamed, or dumped into a fakesink. This applies to all kinds of inputs but is only showcased for camera inputs; make the required adjustments for the other cases if you need them.

The perf element is used in some of the examples; it can be downloaded from the RidgeRun Git repository. Otherwise, the element can be removed from the pipeline without any issues. Also, if you encounter performance issues, consider executing the /usr/bin/jetson_clocks binary.
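
For reference, the following is a minimal sketch of where perf could be placed to measure the stitcher's output framerate. It assumes the gst-perf element is installed; with no properties set, perf periodically prints its measurements to the console.

<syntaxhighlight lang=bash>
# Minimal sketch: measure the stitcher output framerate with perf.
# Assumes gst-perf is installed; remove "perf !" from the pipeline otherwise.
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! queue ! stitcher.sink_1 \
nvarguscamerasrc sensor-id=2 ! nvvidconv ! queue ! stitcher.sink_2 \
stitcher. ! perf ! fakesink
</syntaxhighlight>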


== Stitching from cameras ==

=== Saving a stitch to MP4 ===
<syntaxhighlight lang=bash>
OUTVIDEO=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTVIDEO
</syntaxhighlight>

=== Displaying a stitch ===
<syntaxhighlight lang=bash>
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! xvimagesink
</syntaxhighlight>


=== Dumping output to fakesink ===
This option is particularly useful for debugging.

<syntaxhighlight lang=bash>
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
stitcher. ! fakesink
</syntaxhighlight>
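
When debugging, it can also help to run the same pipeline with gst-launch's -v flag and a GST_DEBUG level, which print caps negotiation details and warnings. A sketch of the same fakesink pipeline with verbose output:

<syntaxhighlight lang=bash>
# Same fakesink pipeline, with verbose element output (-v) and
# warning-level GStreamer debug messages (GST_DEBUG=2).
GST_DEBUG=2 gst-launch-1.0 -v -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! queue ! stitcher.sink_1 \
stitcher. ! fakesink silent=false
</syntaxhighlight>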


=== Streaming via UDP+RTP ===
Set the HOST variable to the receiver's IP address.
<syntaxhighlight lang=bash>
HOST=127.0.0.1
PORT=12345

# Sender
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
stitcher. ! nvvidconv ! nvv4l2h264enc ! rtph264pay config-interval=10 ! queue ! udpsink host=$HOST port=$PORT

# Receiver
gst-launch-1.0 udpsrc port=$PORT ! 'application/x-rtp, media=(string)video, encoding-name=(string)H264' ! queue ! rtph264depay ! avdec_h264 ! videoconvert ! xvimagesink
</syntaxhighlight>

== Stitching videos ==
=== Saving a stitch from three MP4 videos ===
==== Example pipeline ====
<syntaxhighlight lang=bash>
INPUT_0=video_0.mp4
INPUT_1=video_1.mp4
INPUT_2=video_2.mp4

OUTPUT=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
filesrc location=$INPUT_0 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
filesrc location=$INPUT_1 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
filesrc location=$INPUT_2 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTPUT
</syntaxhighlight>

==== Example pipeline for x86 ====
<syntaxhighlight lang=bash>
INPUT_0=video_0.mp4
INPUT_1=video_1.mp4
INPUT_2=video_2.mp4

OUTPUT=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
filesrc location=$INPUT_0 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
filesrc location=$INPUT_1 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
filesrc location=$INPUT_2 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
stitcher. ! queue ! videoconvert ! x264enc ! h264parse ! mp4mux ! filesink location=$OUTPUT
</syntaxhighlight>

== Stitching images ==
=== Saving a stitch from two JPEG images ===
==== Example pipeline ====
<syntaxhighlight lang=bash>
INPUT_0=image_0.jpeg
INPUT_1=image_1.jpeg
OUTPUT=/tmp/stitching_result.jpeg
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  filesrc location=$INPUT_0 ! jpegparse ! jpegdec ! nvvidconv ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! jpegparse ! jpegdec ! nvvidconv ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! videoconvert ! jpegenc ! filesink location=$OUTPUT
</syntaxhighlight>
==== Example pipeline for x86 ====
<syntaxhighlight lang=bash>
INPUT_0=image_0.jpeg
INPUT_1=image_1.jpeg
OUTPUT=/tmp/stitching_result.jpeg
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
  filesrc location=$INPUT_0 ! jpegparse ! jpegdec ! videoconvert ! queue ! stitcher.sink_0 \
  filesrc location=$INPUT_1 ! jpegparse ! jpegdec ! videoconvert ! queue ! stitcher.sink_1 \
  stitcher. ! queue ! videoconvert ! jpegenc ! filesink location=$OUTPUT
</syntaxhighlight>

== Specifying a format ==
=== Generating an MP4 stitch from 3 GRAY8 cameras ===
<syntaxhighlight lang=bash>
OUTVIDEO=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_1 \
nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=GRAY8" ! queue ! stitcher.sink_2 \
stitcher. ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=360" ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTVIDEO
</syntaxhighlight>


== Using distorted inputs ==

Lens distortion correction can be applied to the stitcher inputs with the undistort element.

The [[CUDA_Accelerated_GStreamer_Camera_Undistort/Examples | Undistort examples]] wiki shows some basic usage examples. Visit [[CUDA_Accelerated_GStreamer_Camera_Undistort/Getting_Started/Getting_the_code | Getting Started - Getting the code]] to learn more about the element and how to calibrate it.
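
As an illustration only, the sketch below shows the general shape of such a pipeline: each camera is undistorted before reaching its stitcher pad. The element name cudaundistort and its configuration are placeholders, not a confirmed interface; take the exact element name, properties, and calibration values from the Undistort wiki pages linked above.

<syntaxhighlight lang=bash>
# Hypothetical sketch: undistort each input before stitching.
# "cudaundistort" is a placeholder element name; consult the Undistort
# wiki for the real element and its calibration parameters.
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! cudaundistort ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! cudaundistort ! queue ! stitcher.sink_1 \
stitcher. ! queue ! nvvidconv ! xvimagesink
</syntaxhighlight>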

== 360 video stitching ==

360 video stitching can be achieved with the RidgeRun Projector plug-in by applying a projection, such as an equirectangular projection, to each camera.

The [[RidgeRun_Image_Projector/RidgeRun_Image_Projector/Examples/GStreamer_pipelines | Projector examples]] wiki shows some basic usage examples. Visit the [[RidgeRun_Image_Projector | RidgeRun Image Projector]] wiki to learn more about the plug-in.
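
Again as a sketch only: a projector element would sit between each camera and its stitcher pad, reprojecting the input before blending. The element name rrprojector below is a placeholder; the Projector wiki linked above documents the actual element and its properties.

<syntaxhighlight lang=bash>
# Hypothetical sketch: reproject each camera (e.g. to equirectangular)
# before stitching. "rrprojector" is a placeholder element name; see the
# RidgeRun Image Projector wiki for the actual element and its properties.
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d "\t" | tr -d " "`" \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! rrprojector ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! rrprojector ! queue ! stitcher.sink_1 \
stitcher. ! queue ! nvvidconv ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=/tmp/stitching_result_360.mp4
</syntaxhighlight>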


<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Foot|Examples/Using Gstd|Performance}}
</noinclude>
