Add getter functions for TLM defaults #59


Merged: 17 commits into main, May 4, 2025
Conversation

AshishSardana (Contributor)
Key Info

  • Implementation plan: link
  • Priority:

What changed?

What do you want the reviewer(s) to focus on?


Checklist

  • Did you link the GitHub issue?
  • Did you follow deployment steps or bump the version if needed?
  • Did you add/update tests?
  • What QA did you do?
    • Tested...

@jwmueller jwmueller requested review from jwmueller May 2, 2025 01:50
@jwmueller (Member) left a comment:
needs more tests

@AshishSardana (Contributor, Author):

A test, unrelated to this PR, is failing.

FAILED tests/test_validation.py::test_custom_eval_criteria_validation - AssertionError: Regex pattern did not match.
 Regex: "^Invalid keys {'extra'} found in custom_eval_criteria item 0. Supported keys are: {'name', 'criteria'}.$"
 Input: "Invalid keys {'extra'} found in custom_eval_criteria item 0. Supported keys are: {'criteria', 'name'}."

It's because of the ordering in {'criteria', 'name'}. Let me know if this is expected (and we should leave it be), or if I should update the test to be independent of the ordering.

@jwmueller (Member):

> A test, unrelated to this PR, is failing.
> It's because of the ordering in {'criteria', 'name'}. Let me know if this is expected (and we should leave it be), or if I should update the test to be independent of the ordering.

You can ignore it, unless you have a clear idea of how to fix it yourself.
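For reference, one way to make the failing assertion order-independent is to compare the key *sets* rather than the rendered set string. A minimal sketch (the message format is taken from the failure output above; the helper name is illustrative, not from the codebase):

```python
import re

def check_error_message(msg: str) -> bool:
    """Validate the error message without depending on set iteration order."""
    # Match the fixed parts of the message, capturing both set literals.
    m = re.match(
        r"^Invalid keys \{(.+)\} found in custom_eval_criteria item 0\. "
        r"Supported keys are: \{(.+)\}\.$",
        msg,
    )
    if m is None:
        return False
    # Parse the rendered set literals back into Python sets and compare
    # those, so {'criteria', 'name'} and {'name', 'criteria'} both pass.
    invalid = {k.strip().strip("'") for k in m.group(1).split(",")}
    supported = {k.strip().strip("'") for k in m.group(2).split(",")}
    return invalid == {"extra"} and supported == {"name", "criteria"}
```

In the test, this helper (or an equivalent inline check) would replace the exact-string regex match that currently fails when the set renders in the other order.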

response = result["response"]
assert isinstance(response, str)

enc = tiktoken.encoding_for_model(get_default_model())
@jwmueller (Member) commented on this code:
can you confirm this code will still work:

s = "gpt-4.1-mini"
enc = tiktoken.encoding_for_model(get_default_model(s))

because we're shortly going to upgrade the default model

@AshishSardana (Contributor, Author):

The code will not work at the moment because of a pending PR, openai/tiktoken#396.

Though, it's confirmed here that 4.1-* uses the same tokenizer as 4o.

When we upgrade to 4.1, we can spoof / hardcode fetching the gpt-4o tokenizer (just by setting a variable) until the above PR is merged.

Let me know if you want me to do this as part of this PR
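A minimal sketch of the spoof described above (the helper name is illustrative; the assumption, per the linked tiktoken discussion, is that gpt-4.1-* models share gpt-4o's "o200k_base" tokenizer):

```python
def encoding_name_for(model: str) -> str:
    """Return the tiktoken encoding name for a model, hardcoding gpt-4.1-*."""
    if model.startswith("gpt-4.1"):
        # Hardcoded until openai/tiktoken#396 lands and tiktoken
        # recognizes the gpt-4.1 model names itself.
        return "o200k_base"  # same tokenizer family as gpt-4o
    # For models tiktoken already knows, defer to its own lookup.
    import tiktoken  # imported lazily; the spoof branch above needs no lookup
    return tiktoken.encoding_for_model(model).name
```

The test would then call something like `tiktoken.get_encoding(encoding_name_for(get_default_model()))` in place of the direct `encoding_for_model` call.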

@jwmueller (Member):
yep, I made the suggestion in the code; if you can, fix it up and test that it still works

@jwmueller (Member) left a review comment.
@jwmueller jwmueller merged commit 4eff3b5 into main May 4, 2025
2 of 3 checks passed
@jwmueller jwmueller deleted the asardana/get-default branch May 4, 2025 02:54