Metropolis Microservices/RidgeRun Services/AI Agent


The commands that the LLM can understand and the API calls it can make depend on the configuration provided; check the demo for a use case example.
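The configuration consists of plain files that are mounted into the container (the docker command below mounts <code>/home/nvidia/config</code>). Their exact schemas are described in the Prompt Format section and the service documentation; the sketch below is illustrative only, using a hypothetical camera endpoint and made-up field names that you should replace with your own setup.
<syntaxhighlight lang="bash">
# Illustrative sketch only: the real prompt and API-map schemas are defined in the
# service documentation. File names and the mounted path match the docker command
# below; the endpoint, fields, and values here are hypothetical placeholders.
mkdir -p /home/nvidia/config

# System prompt describing the assistant's role (example content, not a required format)
cat > /home/nvidia/config/prompt.txt <<'EOF'
You are an assistant that controls a camera. Translate user requests into the available API calls.
EOF

# API mapping passed through --api_map (structure is a hypothetical example)
cat > /home/nvidia/config/api_mapping.json <<'EOF'
{
  "move_camera": {
    "method": "PUT",
    "url": "http://127.0.0.1:1234/position",
    "description": "Move the camera to the given pan/tilt position",
    "parameters": {
      "pan": "pan angle in degrees",
      "tilt": "tilt angle in degrees"
    }
  }
}
EOF
</syntaxhighlight>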
{{Colored box|background-title-color=#6586B9|title-color=#FFFFFF|title='''TL;DR'''|icon=notice-icon-white.png
  |style=overflow:hidden;
  |content=
You can run this microservice with the following command:
<syntaxhighlight lang="bash">
docker run --runtime nvidia -it --network host --volume /home/nvidia/config:/ai-agent-config --name agent-service ridgerun/ai-agent-service:latest ai-agent --system_prompt ai-agent-config/prompt.txt --api_map ai-agent-config/api_mapping.json
</syntaxhighlight>
This will start the AI Agent service on port 5010.
}}
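Once the container is up, you can confirm it is running and follow the service output with standard Docker commands (the container name comes from the <code>--name</code> flag in the command above):
<syntaxhighlight lang="bash">
# Confirm the container is up and follow the service logs
docker ps --filter name=agent-service
docker logs -f agent-service
</syntaxhighlight>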
{{Colored box|background-title-color=#0086B9|title-color=#FFFFFF|title='''Service Documentation'''|icon=notice-icon-white.png
  |style=overflow:hidden;
  |content=
* [http://ai-agent-open-ridgerun-microservices-1bbec51d1727c915f635846597.pages.ridgerun.com/ Service Documentation]
* [http://ai-agent-open-ridgerun-microservices-1bbec51d1727c915f635846597.pages.ridgerun.com/api.html API Documentation]
}}


=== Prompt Format ===