Replies: 9 comments 3 replies
- Venice.AI: access to private, open source AI models
- Can you add Requesty?
- All of the above, and LM Studio. Right now we can use LM Studio through the OpenAI-compatible format, but it's a bit inconsistent; being able to use LM Studio as a direct provider would be awesome.
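To make the "OpenAI-compatible format" route concrete, here is a minimal stdlib-only sketch of building a request against LM Studio's local server. The base URL below is an assumption (LM Studio's commonly documented default); check your own server settings, and note the model name is just a placeholder.

```python
import json
from urllib import request

# Assumed default for LM Studio's local OpenAI-compatible server;
# verify the host/port in your LM Studio server settings.
LMSTUDIO_BASE_URL = "http://localhost:1234/v1"

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build (but do not send) an OpenAI-compatible /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "qwen2.5-coder-7b" is a placeholder; use whatever model you loaded.
req = build_chat_request(LMSTUDIO_BASE_URL, "qwen2.5-coder-7b", "Hello!")
# To actually call the running server: request.urlopen(req)
```

Because the request shape is plain OpenAI chat-completions JSON, the same builder works against any provider that speaks the format, which is exactly why a first-class LM Studio provider mostly needs the right base URL and model listing.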
- Nebius AI Studio, a GPU cloud service with open-weight models like DeepSeek
- All of the above for adding to the defaults already listed. Gemini is definitely nice and easy to get started with for free, but Groq has much stricter limits. For solid free options that aren't overly strict on rate limits, I like Mistral, Hugging Face, and GitHub Models:
  - Hugging Face with Qwen Coder 32B for free would be nice: https://huggingface.co/docs/api-inference/en/getting-started
  - Mistral Large can also handle most basic coding tasks in my experience: https://docs.mistral.ai/api/#tag/chat/operation/chat_completion_v1_chat_completions_post
  - GitHub Models gives you access to free GPT-4o, GPT-4o-mini, and other models: https://docs.github.com/en/github-models/prototyping-with-ai-models
- Does https://replicate.com/ fall into the LLM provider category? It might be a great addition, as they offer pay-per-use pricing.
- I would really like to see generic OpenAI support. This would give access to basically any model we want (local and online) through services like Ollama and OpenRouter.
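The appeal of a generic provider is that one code path can target many backends just by swapping the base URL. A minimal sketch of that idea follows; the registry and function names are illustrative, not part of any real project, though the Ollama and OpenRouter URLs are their documented OpenAI-compatible endpoints.

```python
# Hypothetical provider registry: one OpenAI-compatible client,
# many backends, selected purely by base URL.
PROVIDERS = {
    "ollama": "http://localhost:11434/v1",         # local models via Ollama
    "openrouter": "https://openrouter.ai/api/v1",  # hosted, many models
}

def chat_endpoint(provider: str) -> str:
    """Resolve the /chat/completions URL for a named provider."""
    try:
        base = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None
    return f"{base}/chat/completions"
```

Adding a new OpenAI-compatible service then becomes a one-line registry entry rather than a bespoke integration.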
- Thank you for all your suggestions, everyone. Will keep them in mind. ❤️
- There have been quite a few requests for new providers.
20 votes