# NVIDIA
NVIDIA provides an OpenAI-compatible API at https://integrate.api.nvidia.com/v1 for Nemotron and NeMo models. Authenticate with an API key from NVIDIA NGC.
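Because the endpoint is OpenAI-compatible, a completion request is an ordinary chat-completions POST against `/v1`. A minimal sketch of the payload shape (the helper name and prompt are illustrative, not mayros internals):

```python
import json
import os

# Base URL of NVIDIA's OpenAI-compatible API, as documented above.
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble the URL, headers, and JSON body for a chat completion call.

    Helper is a sketch for illustration; it builds the request but does
    not send it.
    """
    api_key = os.environ.get("NVIDIA_API_KEY", "nvapi-...")
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
        }),
    }

req = build_chat_request("nvidia/llama-3.1-nemotron-70b-instruct", "Hello!")
```

Any OpenAI-compatible client can be pointed at the same base URL with the key as the bearer token.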
## CLI setup
Export the key once, then run onboarding and set an NVIDIA model:
```bash
export NVIDIA_API_KEY="nvapi-..."
mayros onboard --auth-choice skip
mayros models set nvidia/nvidia/llama-3.1-nemotron-70b-instruct
```
If you still pass `--token`, remember that it lands in shell history and `ps` output; prefer the environment variable when possible.
## Config snippet
```json5
{
  env: { NVIDIA_API_KEY: "nvapi-..." },
  models: {
    providers: {
      nvidia: {
        baseUrl: "https://integrate.api.nvidia.com/v1",
        api: "openai-completions",
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: "nvidia/nvidia/llama-3.1-nemotron-70b-instruct" },
    },
  },
}
```
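The model reference in the config carries a provider prefix: the first path segment selects the `nvidia` provider block, and the remainder is the model ID sent to the API. A sketch of that split (an assumption about how the prefix is resolved, not confirmed mayros internals):

```python
# Provider table mirroring the config snippet above.
PROVIDERS = {
    "nvidia": {
        "baseUrl": "https://integrate.api.nvidia.com/v1",
        "api": "openai-completions",
    },
}

def resolve(model_ref: str) -> tuple[str, str]:
    """Split 'provider/model-id' and look up the provider's base URL.

    Sketch only: assumes the first segment names the provider and the
    rest is passed through as the upstream model ID.
    """
    provider, model_id = model_ref.split("/", 1)
    return PROVIDERS[provider]["baseUrl"], model_id

base_url, model_id = resolve("nvidia/nvidia/llama-3.1-nemotron-70b-instruct")
```

This is why the CLI reference doubles the `nvidia/` segment: one copy names the provider, the other belongs to the upstream model ID.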
## Model IDs
- `nvidia/llama-3.1-nemotron-70b-instruct` (default)
- `meta/llama-3.3-70b-instruct`
- `nvidia/mistral-nemo-minitron-8b-8k-instruct`
## Notes
- OpenAI-compatible `/v1` endpoint; use an API key from NVIDIA NGC.
- Provider auto-enables when `NVIDIA_API_KEY` is set; uses static defaults (131,072-token context window, 4,096 max tokens).
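The static defaults imply a simple pre-flight budget: estimated prompt tokens plus the requested output budget must fit inside the context window. A sketch using a rough 4-characters-per-token heuristic (an assumption for illustration, not a real tokenizer):

```python
# Static defaults noted above for the NVIDIA provider.
CONTEXT_WINDOW = 131_072
MAX_OUTPUT_TOKENS = 4_096

def fits_context(prompt: str, max_tokens: int = MAX_OUTPUT_TOKENS) -> bool:
    """Return True if the estimated prompt plus the requested output
    budget fit inside the context window.

    The 4-chars-per-token estimate is a crude heuristic; real token
    counts depend on the model's tokenizer.
    """
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + max_tokens <= CONTEXT_WINDOW
```

A check like this only approximates what the server enforces; the API itself rejects requests that exceed the real limits.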