More flexibility in parameters for OpenAI / LiteLLM #544

Open
satpalsr opened this issue Feb 7, 2025 · 3 comments
Labels: feature request (New feature/request)

Comments

satpalsr commented Feb 7, 2025

  1. Why not expose the entire LiteLLM config as a parameter (or something similar) when running evals?
  2. I'm actually looking to run the open-r1 evals with any API provider. This doesn't seem possible with the current setup. The OpenAIClient class should also allow using any base_url, and parameters like temperature should be modifiable directly (see the sketch below).

(screenshot attached)
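For illustration, a minimal sketch of the kind of flexibility being asked for, using the openai and litellm Python packages directly; the base URL, model name, and API key below are placeholders, and this is not how lighteval currently wires its clients.

```python
from openai import OpenAI
import litellm

# Hypothetical provider endpoint and model; replace with your own.
BASE_URL = "https://my-provider.example/v1"
MODEL = "my-org/my-model"

# 1) Plain OpenAI client pointed at an arbitrary OpenAI-compatible endpoint,
#    with sampling parameters (e.g. temperature) passed per request.
client = OpenAI(base_url=BASE_URL, api_key="sk-...")
resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "2 + 2 = ?"}],
    temperature=0.0,
)
print(resp.choices[0].message.content)

# 2) The same request through LiteLLM, which forwards api_base and any
#    extra generation parameters to the underlying provider.
out = litellm.completion(
    model=f"openai/{MODEL}",
    api_base=BASE_URL,
    api_key="sk-...",
    messages=[{"role": "user", "content": "2 + 2 = ?"}],
    temperature=0.0,
)
print(out.choices[0].message.content)
```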

satpalsr added the feature request label Feb 7, 2025
gary149 (Contributor) commented Feb 11, 2025

+1. Also, it would be great to have usage examples with OpenAI endpoints and LiteLLM.

julien-c (Member) commented

and Inference Providers on the Hub

lhl commented Feb 16, 2025

I saw @NathanHB and @gary149 just committed some improvements so that various configuration options can be supplied via a YAML file (gj!), which is workable, but I think proper flags or a way to pass options on the command line would still be useful for building eval pipelines (vs having to add YAML file generation into my scripts, which is a lot of bloat/PITA). A sketch of that YAML-generation workaround is below.
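For context, a minimal sketch of the YAML-generation step a pipeline script would need under this approach; the field names are placeholders (assumptions), not lighteval's actual config schema, so check the repo's model config examples for the real keys.

```python
# Generate a model config YAML from a pipeline script.
import yaml  # pip install pyyaml

config = {
    "base_url": "https://my-provider.example/v1",  # placeholder endpoint
    "model_name": "my-org/my-model",               # placeholder model id
    "generation": {"temperature": 0.0, "max_new_tokens": 512},  # placeholder keys
}

with open("model_config.yaml", "w") as f:
    yaml.safe_dump(config, f)

# The eval CLI would then be pointed at model_config.yaml; first-class
# command-line flags for these fields would remove this extra step.
```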
