rlax: Upstream Muesli utilities to rlax.

We now provide methods for constructing the clipped MPO (CMPO) policy targets used as part of the Muesli agent loss. These CMPO targets are in expectation proportional to `prior(a|s) * exp(clip(norm(Q(s, a))))`, where the prior is computed by the actor policy head and the Q values are computed using the learned model's reward and value heads.

See "Muesli: Combining Improvements in Policy Optimization" by Hessel et al. (https://arxiv.org/pdf/2104.06159.pdf) for more details.


PiperOrigin-RevId: 493987878