<noinclude>
{{Xavier/Head|previous=Deep Learning‎/TensorRT/Parsing Tensorflow|next=Deep Learning/TensorRT/Building Examples}}
</noinclude>



Parsing Caffe model for TensorRT

The process for Caffe models is fairly similar to the one for TensorFlow models. The key difference is that you don't need to generate a UFF model file: the Caffe model file (.caffemodel), together with its deploy .prototxt, can be imported directly by TensorRT through its built-in Caffe parser.

Loading a Caffe model is demonstrated by one of the examples NVIDIA provides with TensorRT, named sample_mnist. For more details on this example please refer to the C++ API section.
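The snippet below is a minimal sketch of that flow in C++, loosely following what sample_mnist does: create a builder and network, parse the Caffe deploy file and weights with the ICaffeParser, mark an output blob, and build an engine. The file names (mnist.prototxt, mnist.caffemodel) and the output blob name "prob" are illustrative assumptions, and the code assumes a TensorRT release that still ships the (now deprecated) Caffe parser and destroy()-style cleanup.

<syntaxhighlight lang="cpp">
#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <iostream>

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Hypothetical file names: replace with your deploy description and trained weights
    const char* deployFile = "mnist.prototxt";
    const char* modelFile  = "mnist.caffemodel";

    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetworkV2(0U); // implicit batch mode
    ICaffeParser* parser = createCaffeParser();

    // Parse the Caffe files directly into the TensorRT network definition
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse(deployFile, modelFile, *network, DataType::kFLOAT);

    // Mark the network output by its Caffe blob name ("prob" in the MNIST deploy file)
    network->markOutput(*blobNameToTensor->find("prob"));

    // Build the optimized engine
    IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(16 << 20);
    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);

    // ... create an IExecutionContext and run inference here ...

    // Clean up (older TensorRT releases; newer ones use delete instead)
    engine->destroy();
    config->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
</syntaxhighlight>

Once built, the engine is used exactly as one created from a UFF or ONNX model: serialize it to disk or create an execution context and run inference. The program links against the TensorRT runtime and parser libraries (typically -lnvinfer -lnvparsers).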


