
feat: Add cliprange to GRPO loss #2739

Closed
Conversation

joey00072
Contributor

Updates the loss calculation to use a clipped version of the policy ratio, similar to PPO's clipping mechanism.

What does this PR do?

Adds clipping objective to GRPO loss
[Screenshot: the clipped GRPO objective, Eq. 3 from the paper]
https://arxiv.org/pdf/2402.03300, eq. 3, page 12
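For readers without the screenshot, the objective from the paper is roughly the following (a paraphrase of Eq. 3, not a verbatim copy; ε is the clip range, β the KL coefficient):

```math
J_{\mathrm{GRPO}}(\theta) = \mathbb{E}\Bigg[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}
\Big\{\min\Big[r_{i,t}(\theta)\,\hat{A}_{i,t},\;
\operatorname{clip}\big(r_{i,t}(\theta),\,1-\varepsilon,\,1+\varepsilon\big)\,\hat{A}_{i,t}\Big]
- \beta\,\mathbb{D}_{\mathrm{KL}}\big[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big]\Big\}\Bigg],
\qquad
r_{i,t}(\theta) = \frac{\pi_\theta(o_{i,t}\mid q,\,o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t}\mid q,\,o_{i,<t})}
```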
While the current loss carries the spirit of the PPO trust-region objective, it is not the same as the objective described in the paper.
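A minimal sketch of what the change amounts to, assuming per-token log-probs under the current and old policies and per-token advantages are already available (the function and argument names are illustrative, not the trainer's actual variables):

```python
import torch

def clipped_grpo_per_token_loss(per_token_logps, old_per_token_logps, advantages, epsilon=0.2):
    """PPO-style clipped surrogate applied per token.

    per_token_logps:     log-probs of sampled tokens under the current policy
    old_per_token_logps: log-probs under the policy that generated the rollout
    advantages:          per-token (group-normalized) advantages
    """
    ratio = torch.exp(per_token_logps - old_per_token_logps)  # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    return -torch.min(unclipped, clipped)  # negated because the trainer minimizes the loss
```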

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@qgallouedec @lewtun

joey00072 and others added 2 commits February 2, 2025 18:10
Updates the loss calculation to use a clipped version of the policy ratio, similar to PPO's clipping mechanism.
@qgallouedec
Member

We are only doing one update, so there is no need to clip.

@joey00072
Contributor Author

https://x.com/shxf0072/status/1886407737023865012

I was trying to run in a GPU-limited environment, where we do something like 128 generations and then 128 train steps; the loss becomes wrong in that case.

People look to HF for reference implementations, so I don't think baking in this assumption is a good idea.

Anyway, can we at least add a link to #2608 in a comment above the loss?

@qgallouedec
Member

It's not wrong since you can't do multiple optimization steps with the current implementation.
We may allow this in the future, and if so, we'll add the clipping.
Feel free to suggest a comment if you feel that clarity is missing here. Thanks!
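To spell out the reasoning (my own illustration, not the trainer code): when each rollout is used for only a single gradient step, the completions come from the current policy, so the importance ratio is exactly 1 at that step and clipping to [1 - ε, 1 + ε] can never trigger. A tiny sketch:

```python
import torch

logps = torch.tensor([-1.2, -0.7, -2.3], requires_grad=True)
advantages = torch.tensor([0.5, -0.3, 1.0])

# Single update per rollout: the "old" policy is the current policy, so the
# old log-probs are just the detached current log-probs and the ratio is 1.
ratio = torch.exp(logps - logps.detach())
print(ratio)  # tensor([1., 1., 1.], grad_fn=...)

eps = 0.2
assert torch.allclose(ratio, torch.clamp(ratio, 1 - eps, 1 + eps))  # clipping is a no-op

# Clipping only starts to bite once several optimization steps reuse the same
# rollout, because the current log-probs then drift away from the old ones.
```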

@winglian
Contributor

winglian commented Feb 4, 2025

@qgallouedec Would you consider moving the per_token_loss logic to its own method? It would at least make it easier to extend for those wanting to do so without having to duplicate the entire compute_loss block.
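For what it's worth, a hypothetical sketch of the kind of hook this would enable; the class and method names below are made up for illustration and are not TRL's actual API:

```python
import torch


class ToyGRPOTrainer:
    """Toy stand-in for a trainer whose per-token loss is factored into a hook."""

    def compute_loss(self, per_token_logps, old_per_token_logps, advantages, mask):
        per_token_loss = self._per_token_loss(per_token_logps, old_per_token_logps, advantages)
        return (per_token_loss * mask).sum() / mask.sum()

    def _per_token_loss(self, per_token_logps, old_per_token_logps, advantages):
        # Default, unclipped surrogate.
        return -torch.exp(per_token_logps - old_per_token_logps) * advantages


class ClippedGRPOTrainer(ToyGRPOTrainer):
    """Overrides only the hook, not the whole compute_loss block."""

    epsilon = 0.2

    def _per_token_loss(self, per_token_logps, old_per_token_logps, advantages):
        ratio = torch.exp(per_token_logps - old_per_token_logps)
        clipped = torch.clamp(ratio, 1 - self.epsilon, 1 + self.epsilon)
        return -torch.min(ratio * advantages, clipped * advantages)
```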

@tyler-romero
Contributor

+1 for allowing multiple optimization steps per rollout, similar to how the DeepSeekMath paper describes GRPO and to how PPO is implemented in practice.

@qgallouedec
Member

> @qgallouedec Would you consider moving the per_token_loss logic to its own method? It would at least make it easier to extend for those wanting to do so without having to duplicate the entire compute_loss block.

Yes, done in #2762

@aiyinyuedejustin

page 13 bro

@joey00072
Contributor Author

Closing this, see #2899.
Thanks @qgallouedec (sorry, I forgot about this 😅).

joey00072 closed this on Feb 21, 2025