Since there are many problems, especially in the multinode setting, I want to open this issue because I have spent a lot of time trying to debug it.
If your data has been pre-tokenized with one tokenizer, let's call it X, but the tokenizer you pass to the model at training time is a different tokenizer Y, the training will freeze and never actually start.
This may seem like an obvious thing, but no warning or error is displayed and the run keeps going as if everything were fine, even though tokenizer X can have a different vocab size than Y. An assert should be added, because this silently wastes a lot of resources, and pre-training is not cheap at all.
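For illustration, here is a minimal sketch of the kind of fail-fast check I have in mind. The names (`tokenizer`, `model_vocab_size`, `dataloader`, the `input_ids` key) are placeholders for whatever objects the actual training script builds, not the real API:

```python
import torch


def check_tokenizer_data_consistency(tokenizer, model_vocab_size, dataloader, num_batches=10):
    """Fail fast if the pre-tokenized data does not match the runtime tokenizer/model."""
    # 1) The runtime tokenizer must fit inside the model's embedding table.
    assert len(tokenizer) <= model_vocab_size, (
        f"Tokenizer vocab ({len(tokenizer)}) is larger than the model's "
        f"embedding table ({model_vocab_size})."
    )

    # 2) Token ids stored in the pre-tokenized dataset must be valid ids for
    #    this tokenizer; an out-of-range id strongly suggests the data was
    #    produced with a different tokenizer.
    for i, batch in enumerate(dataloader):
        max_id = int(torch.as_tensor(batch["input_ids"]).max())
        assert max_id < len(tokenizer), (
            f"Found token id {max_id} in the data, but the current tokenizer "
            f"only has {len(tokenizer)} tokens; the data was likely tokenized "
            f"with a different tokenizer."
        )
        if i + 1 >= num_batches:
            break
```

Checking only the first few batches is cheap and would already catch a vocab-size mismatch before any GPU time is wasted.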