Create an account to get started. You can set up an org for your team later.
Credits can be used with any model or provider.
Create an API key and start making requests. Fully OpenAI compatible.
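For illustration, a minimal sketch of a first request against OpenRouter's OpenAI-compatible endpoint (assuming the Python requests library, a key stored in an OPENROUTER_API_KEY environment variable, and a placeholder model slug):

import os
import requests

# Chat completion request to OpenRouter's OpenAI-compatible API.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "openai/gpt-4o",  # any model slug available on OpenRouter
        "messages": [{"role": "user", "content": "Hello from OpenRouter!"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])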
Access all major models through a single, unified interface. OpenAI SDK works out of the box.
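As a sketch of that drop-in compatibility, the official OpenAI Python SDK only needs its base URL pointed at OpenRouter; the model slug and environment variable name below are placeholders:

import os
from openai import OpenAI

# The OpenAI SDK works unchanged; only the base URL and API key differ.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # swap in any supported model slug
    messages=[{"role": "user", "content": "Summarize what OpenRouter does."}],
)
print(completion.choices[0].message.content)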
Reliable AI models via our distributed infrastructure. Requests automatically fall back to other providers when one goes down.
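A hedged sketch of how that fallback behavior can be requested per call, assuming OpenRouter's routing fields (a "models" fallback list and a "provider" preference object) passed through the OpenAI SDK's extra_body parameter:

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
    # OpenRouter-specific routing options (not part of the OpenAI schema),
    # forwarded as extra request-body fields.
    extra_body={
        "models": ["anthropic/claude-3.5-sonnet"],  # fallback models, tried in order
        "provider": {"allow_fallbacks": True},      # let OpenRouter reroute on provider failure
    },
)
print(completion.choices[0].message.content)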
Keep costs in check without sacrificing speed. OpenRouter runs at the edge, adding just ~25ms between your users and their inference.
Protect your organization with fine-grained data policies. Ensure prompts only go to the models and providers you trust.
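A per-request sketch of such a policy, assuming OpenRouter's provider preference fields ("order", "allow_fallbacks", "data_collection"); the provider name and model slug are illustrative, and organization-wide policies are set in the dashboard rather than in code:

import os
import requests

# Restrict this request to a trusted provider and opt out of prompt retention.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/mistral-large",
        "messages": [{"role": "user", "content": "Handle this confidential prompt."}],
        "provider": {
            "order": ["Mistral"],      # assumed provider name; route here only
            "allow_fallbacks": False,  # never fall back to an unlisted provider
            "data_collection": "deny", # skip providers that may store prompts
        },
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])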
Featured model stats:
146.6B Tokens/wk · 2.5s Latency · -6.15% Weekly growth
20.1B Tokens/wk · 2.2s Latency · -- Weekly growth
323.0B Tokens/wk · 2.1s Latency · +5.82% Weekly growth
Venice joins OpenRouter as a new provider, offering its flagship uncensored model for private, powerful, and non-restrictive AI.
Explore it in action: See how effortlessly you can run Kimi K2 in Cursor using OpenRouter’s flexible model routing.
Introducing new models to sustain accessible AI inference.