Add part 2 of end-to-end tutorial: fine-tuning #2394
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2394. Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 2c50f0a with merge base 8b12ddf. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
Force-pushed 744d1f2 to e104d85
Thanks, the same image should probably be added to the other tutorials as well.
docs/source/finetuning.rst (Outdated)

3. Use our integration with `Axolotl <https://github.com/axolotl-ai-cloud/axolotl>`__

Option 1: TorchAO QAT API
When do we want people to use this?
I think if people have their own fine-tuning frameworks they want to use, e.g. the Llama-3.2 1B/3B quantized release did not use torchtune or axolotl
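For readers following along: the core idea behind a QAT API like this is to insert a "fake quantize" step into the forward pass during fine-tuning, so the model learns weights that survive later quantization. Below is a minimal pure-Python sketch of that quantize-dequantize step for a single int8 value; the `scale` and `zero_point` names are illustrative assumptions for this sketch, not the torchao API itself.

```python
def fake_quantize(x, scale, zero_point, qmin=-128, qmax=127):
    # Round to the int8 grid, clamp to the representable range,
    # then dequantize back to float. During QAT this quantize-
    # dequantize pair runs in the forward pass so training sees
    # the rounding error the deployed quantized model will have.
    q = round(x / scale) + zero_point
    q = max(qmin, min(qmax, q))
    return (q - zero_point) * scale

# Values near the grid snap to it; out-of-range values are clamped.
print(fake_quantize(0.123, scale=0.1, zero_point=0))
print(fake_quantize(100.0, scale=0.1, zero_point=0))
```

In torchao the analogous machinery is applied module-wide by the QAT flow rather than value-by-value as sketched here.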
nit: I think we can order these from most popular to least popular
Ok, I will put the integrations first since we want OSS users to use those first
Yeah, I added the corresponding images in part 1 and part 3.
Force-pushed e104d85 to 7da9916
Force-pushed 7bcf5a6 to 65bb85c
This commit adds the QAT tutorial and a general structure for the fine-tuning tutorial, which also covers QLoRA and float8 quantized fine-tuning. It also connects the three tutorial parts (pre-training, fine-tuning, and serving) into one cohesive end-to-end flow with visuals and text.
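As a conceptual aside on the QLoRA portion of the tutorial: QLoRA keeps the quantized base weights frozen and trains only small low-rank adapter matrices. A minimal pure-Python sketch of the low-rank weight update follows; the function and parameter names (`lora_delta`, `alpha`, `r`) are illustrative assumptions, not the torchao or torchtune API.

```python
def matmul(X, Y):
    # Naive matrix multiply for small illustrative matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_delta(A, B, alpha, r):
    # LoRA weight update: delta_W = (alpha / r) * B @ A, where
    # B is (out, r) and A is (r, in). Only A and B are trained;
    # the (quantized) base weight W stays frozen, and the
    # effective weight at inference is W + delta_W.
    scale = alpha / r
    BA = matmul(B, A)
    return [[scale * v for v in row] for row in BA]

# Rank-1 example: B is 2x1, A is 1x2, alpha=2, r=1 (scale = 2).
delta = lora_delta(A=[[1.0, 0.0]], B=[[1.0], [2.0]], alpha=2, r=1)
# delta = [[2.0, 0.0], [4.0, 0.0]]
```

Because `r` is much smaller than the weight dimensions in practice, the trainable parameter count stays tiny even for large base models.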
Force-pushed 65bb85c to 2c50f0a
Preview it yourself here: https://docs-preview.pytorch.org/pytorch/ao/2394/finetuning.html