
Tritonserver support for NVIDIA Jetson Platforms

{{#seo:
|title=Tritonserver support for NVIDIA Jetson Platforms
|title_mode=replace
|description={{{description|This RidgeRun developer wiki guide is about Tritonserver, how to run and test the server on a NVIDIA JetPack.}}}
}}
{{NVIDIA Pref Partner logo and RR Contact}}


== Introduction to Triton Inference Server ==
If you search the documentation, you will find Docker images for Triton Server on NGC, named <code>nvcr.io/nvidia/tritonserver:<xx.yy>-py3</code>. These images do not work on JetPack because they are built for Windows or for x86 Ubuntu, so you need to build an image yourself using the following Dockerfile.
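Once an image has been built from the Dockerfile shown next, it can be run with GPU access through the NVIDIA container runtime available on JetPack. This is only a sketch: the image tag <code>tritonserver-jetson</code> is an example name chosen here, not one published by NVIDIA.

<syntaxhighlight lang="bash">
# Build the image from the Dockerfile in the current directory
# (the tag "tritonserver-jetson" is an illustrative name).
docker build -t tritonserver-jetson .

# Run the container on the Jetson with the NVIDIA runtime so it
# can access the GPU.
docker run --runtime nvidia -it --rm tritonserver-jetson
</syntaxhighlight>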


<syntaxhighlight lang="docker">
FROM nvcr.io/nvidia/l4t-ml:r32.6.1-py3
This is done using the following deployment YAML file for Kubernetes:
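Once the manifest shown next is saved to a file, it can be applied and checked with <code>kubectl</code>. The file name <code>triton-deployment.yaml</code> below is an assumption for illustration:

<syntaxhighlight lang="bash">
# Create or update the deployment described in the manifest
# (the file name is an example, not mandated by Kubernetes).
kubectl apply -f triton-deployment.yaml

# Verify that the deployment's pods reach the Running state.
kubectl get pods
</syntaxhighlight>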


<syntaxhighlight lang="yaml">
apiVersion: apps/v1