Given the MiniLLM paper, I do not see any problem with building a general framework for knowledge distillation. The main limitation is that the two models should share the same tokenizer, and perhaps both need the same causal LM denoising objective, though I think it would be enough that they both predict token by token.
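To make the shared-tokenizer constraint concrete, here is a minimal sketch of a token-level distillation loss, assuming both teacher and student are causal LMs whose logits index the same vocabulary. The function name and the use of PyTorch are illustrative choices, not something taken from this repo:

```python
import torch
import torch.nn.functional as F

def token_level_kd_loss(student_logits: torch.Tensor,
                        teacher_logits: torch.Tensor,
                        attention_mask: torch.Tensor,
                        temperature: float = 1.0) -> torch.Tensor:
    """Per-token KL(teacher || student) over a shared vocabulary.

    This only makes sense because both models use the same tokenizer,
    so position i of the last logit dimension refers to the same token
    in both models.
    """
    s_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence per position, then mask out padding and average.
    kl = (t_probs * (t_probs.clamp_min(1e-9).log() - s_logprobs)).sum(dim=-1)
    mask = attention_mask.float()
    return (kl * mask).sum() / mask.sum()
```

In practice one would run teacher and student on the same tokenized batch and feed their logits into a loss like this. Note that MiniLLM itself goes further and optimizes a reverse-KL objective with policy-gradient-style updates, which this forward-KL sketch does not cover.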