How to convert a BitNet model? Looks like the script is broken #81
Comments
https://github.com/kaleid-liner/llama.cpp/tree/fde57d01c01d1cb5f97b4df9780c756e672976d9
Because I want to test the BitNet 3B model, I first need to convert the model to …
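For context, here is a minimal sketch of fetching a BitNet-style checkpoint before conversion. The repo id `1bitLLM/bitnet_b1_58-3B` and the local directory are illustrative assumptions, not details taken from this thread:

```bash
# Assumed example: download a BitNet 3B checkpoint from Hugging Face.
# The repo id and target directory are placeholders; adjust to the model you actually test.
huggingface-cli download 1bitLLM/bitnet_b1_58-3B --local-dir models/bitnet-3b
```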
@nigelzzzzz You can try running …
Hi @QingtaoLi1, …
@nigelzzzzz Thank you, but I don't quite understand what you mean by "fix the type". The current pipeline should work if you use the correct command.
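For reference, a hedged sketch of what the conversion step typically looks like when calling the T-MAC fork's script directly. The model path and the `i2` output type are assumptions for illustration; the exact flags should be checked against the T-MAC README:

```bash
# Sketch only: invoke the conversion script shipped with the llama.cpp submodule.
# MODEL_DIR-style paths and the i2 outtype are assumptions; consult the T-MAC README
# for the flags the current pipeline expects.
python 3rdparty/llama.cpp/convert-hf-to-gguf-t-mac.py models/bitnet-3b \
    --outtype i2 \
    --outfile models/bitnet-3b/ggml-model.i2.gguf
```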
Hi,
I am following the steps in README.md. In step 3, I can't find convert-hf-to-gguf-t-mac.py in 3rdparty/llama.cpp/..., but I can find the file in the master branch. Can you tell me which steps I should follow? Thanks.
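If the script is missing locally, a plausible first check (assuming T-MAC vendors its llama.cpp fork as a git submodule under 3rdparty/, which is an assumption about the repo layout) is to make sure the submodule is initialized and on the pinned commit:

```bash
# Assumed layout: the llama.cpp fork lives as a git submodule under 3rdparty/.
git submodule update --init --recursive             # fetch the pinned fork commit
ls 3rdparty/llama.cpp/convert-hf-to-gguf-t-mac.py   # confirm the script is present
```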