Transformer dropout #544

Merged: 2 commits into master on Nov 14, 2018
Conversation

msperber (Contributor) commented:
This adds missing dropout operations in three places in the self-attention/transformer architecture of example 21: after the positional encodings, on the residual connections, and on the attention matrix (see the "Attention Is All You Need" paper). I've confirmed that this improves results when these dropout rates are set to a conservative value (e.g. 0.1).
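For illustration, here is a minimal NumPy sketch of the three dropout sites described above. This is not the repository's actual code from example 21; the names `dropout`, `self_attention`, `encoder_layer`, and the weight matrices `w_q`, `w_k`, `w_v` are hypothetical, and a single-head, unbatched layer is assumed for brevity:

```python
import numpy as np

def dropout(x, p, train=True):
    # Inverted dropout: zero units with probability p at train time and
    # rescale the survivors, so no rescaling is needed at test time.
    if not train or p == 0.0:
        return x
    mask = (np.random.rand(*x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(h, w_q, w_k, w_v, p):
    # h: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_model)
    q, k, v = h @ w_q, h @ w_k, h @ w_v
    att = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    att = dropout(att, p)        # site 2: dropout on the attention matrix
    return att @ v

def encoder_layer(x, pos_enc, w_q, w_k, w_v, p=0.1):
    h = dropout(x + pos_enc, p)  # site 1: dropout after positional encodings
    out = self_attention(h, w_q, w_k, w_v, p)
    return h + dropout(out, p)   # site 3: dropout before the residual add
```

Following the paper, the same conservative rate (e.g. p = 0.1) can be used at all three sites, and the inverted-dropout rescaling keeps expected activations unchanged between training and inference.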

@neubig merged commit 3977c4b into master on Nov 14, 2018.