AI Based Object Redaction/Examples/Library Examples

</syntaxhighlight>


With the GPU backend, create input and output buffers in GPU memory with the input video/image resolution and format. For the AI model to work properly, also create a GPU memory buffer with the model's supported resolution and format.


<syntaxhighlight lang=cpp>
std::shared_ptr<rd::io::IBuffer> input_gpu = backend->getBuffer(input_resolution, input_format);
std::shared_ptr<rd::io::IBuffer> output = backend->getBuffer(input_resolution, input_format);
std::shared_ptr<rd::io::IBuffer> input_convert = backend->getBuffer(convert_resolution, format);
</syntaxhighlight>
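
Depending on the platform, GPU buffer allocation can fail (for example, when GPU memory is exhausted). The following is a minimal defensive sketch, assuming only the <code>std::shared_ptr</code> handles created above; the error handling shown is hypothetical and the library may report allocation failures differently (for example, by throwing).

<syntaxhighlight lang=cpp>
// Hypothetical sanity check: verify the GPU buffers were allocated before use.
// Assumes getBuffer() returns an empty shared_ptr on failure; adjust to the
// library's actual error-reporting mechanism. Requires <iostream> and <cstdlib>.
if (!input_gpu || !output || !input_convert) {
  std::cerr << "Failed to allocate GPU buffers" << std::endl;
  return EXIT_FAILURE;
}
</syntaxhighlight>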


When using the GPU backend, the CPU input buffer must be copied to GPU memory. Use the <code>copyFromHost</code> method to upload the input buffer to the GPU.


<syntaxhighlight lang=cpp>