How to recover stage quantization from finetuning stage after an error #957
Comments
Hi @jiangjiadi, if you update your recipe with the remaining quantization stage and start with the finetuned model, you should be able to run quantization without repeating the other stages. Did you run into an issue while loading the finetuned model?
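A minimal sketch of what that might look like, assuming llm-compressor's oneshot entry point with a GPTQ W4A16 modifier; the checkpoint path, dataset name, and calibration settings below are placeholders rather than values taken from this issue:

```python
# Sketch only: resume from the finetuned checkpoint with a recipe that
# contains just the quantization step, so the sparsity and finetuning
# stages are not rerun. Paths and parameters are placeholders.
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

finetuned_path = "output/stage_finetuning"  # hypothetical finetuning-stage output dir

# A W4A16 GPTQ step in the spirit of the example's quantization stage.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model=finetuned_path,
    dataset="open_platypus",            # assumed calibration dataset
    recipe=recipe,
    output_dir="output/stage_quantization",
    max_seq_length=512,
    num_calibration_samples=512,
)
```

The key point is that oneshot is pointed at the stage_finetuning checkpoint rather than the base model, and the recipe no longer contains the sparsity or finetuning stages.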
@dsikka When I start with the finetuned model, I run into an issue. The recipe I used only contains quantization_stage.
Hi @jiangjiadi, if you update to use transformers main or the latest release, this should fix the issue.
@dsikka I upgraded the version of transformers to 4.47.0 and found that the model fails to load the weights.
Hi @jiangjiadi, can you update your transformers to main, and also share the recipe you applied and what the final config of the model looks like?
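As a quick way to gather that information, here is a minimal sketch of loading the checkpoint and printing its config with plain transformers; the path below is a placeholder, not one taken from this issue:

```python
# Sketch only: check whether the finetuned-stage checkpoint loads under the
# installed transformers version and inspect its final config.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "output/stage_finetuning"  # hypothetical finetuning-stage output dir

model = AutoModelForCausalLM.from_pretrained(path, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(path)

print(model.config)  # final model config, including any sparsity/quantization metadata
```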
After executing examples/quantization_2of4_sparse_w4a16/llama7b_sparse_w4a16.py, I obtained three models at different stages: stage_sparsity, stage_finetuning, and stage_quantization. My question is: if an error occurs while running the stage_quantization phase, how can I resume the process from the stage_finetuning model? Given that both the stage_sparsity and stage_finetuning phases are resource-intensive, restarting from the beginning would be inefficient and time-consuming.
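The stage names above correspond to stages in the example's recipe, so a run restarted from the stage_finetuning checkpoint only needs the quantization stage. Below is a rough, assumed outline of such a trimmed recipe, embedded as a YAML string that could be passed as the recipe argument in the sketch further up; the keys may not match the example recipe exactly:

```python
# Sketch only: a recipe trimmed down to the quantization stage, expressed as a
# YAML string. Stage/modifier keys are approximate, not copied from the example.
quantization_only_recipe = """
quantization_stage:
  quantization_modifiers:
    GPTQModifier:
      ignore: [lm_head]
      scheme: W4A16
      targets: [Linear]
"""
```

Either form, the modifier object or the YAML string, is only meant to illustrate dropping the sparsity and finetuning stages from the recipe before rerunning.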