Add LoRA fine-tuning demo notebook #104

Open · wants to merge 1 commit into main
Conversation

@andrefoo andrefoo commented Feb 4, 2025

Description

This PR adds a basic Jupyter notebook demonstrating how to perform LoRA (Low-Rank Adaptation) fine-tuning with Fireworks AI. The notebook provides a step-by-step guide to customizing language models efficiently, using customer support data as a worked example.

Features

  • Complete setup instructions including prerequisites
  • Dataset preparation and formatting
  • LoRA fine-tuning with both default and custom parameters
  • Model deployment and testing
  • Resource cleanup guidance
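The dataset-preparation step above can be sketched roughly as follows. This is an illustrative snippet, not the notebook's actual code: the record fields and the chat-style `messages` layout are assumptions based on the common JSONL fine-tuning format, and `support_dataset.jsonl` is a hypothetical filename.

```python
import json

# Hypothetical raw customer-support records (illustrative only).
raw_examples = [
    {"question": "How do I reset my password?",
     "answer": "Open Settings > Account and click 'Reset password'."},
]

def to_chat_record(example):
    """Convert one Q/A pair into the chat-style record commonly used
    for fine-tuning: a list of role/content messages."""
    return {"messages": [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]}

# Write one JSON object per line (JSONL), the usual upload format.
with open("support_dataset.jsonl", "w") as f:
    for ex in raw_examples:
        f.write(json.dumps(to_chat_record(ex)) + "\n")
```

Each line of the resulting file is an independent JSON object, which is what most fine-tuning upload endpoints expect.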

Key Components

  • Installation and credential setup
  • Dataset loading and formatting for Fireworks' API
  • Fine-tuning job creation and monitoring
  • Model deployment and inference testing
  • Resource management (deployment/undeployment)
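The job-monitoring component could be sketched as a simple polling loop. The function and state names below are illustrative, not the official Fireworks client API; in the notebook, `get_status` would wrap a call that fetches the fine-tuning job's current state.

```python
import time

def wait_for_job(get_status, poll_seconds=30, timeout_seconds=3600):
    """Poll a fine-tuning job until it reaches a terminal state.

    `get_status` is any callable returning the job's current state
    string; the terminal state names here are assumptions.
    """
    terminal = {"COMPLETED", "FAILED", "CANCELLED"}
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        state = get_status()
        if state in terminal:
            return state
        time.sleep(poll_seconds)  # avoid hammering the API
    raise TimeoutError("fine-tuning job did not finish in time")
```

Separating the polling logic from the API call keeps the loop testable and lets the same helper monitor deployment status as well.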

Technical Details

  • Uses llama-v3p1-8b-instruct as the base model
  • Demonstrates LoRA parameter configuration (rank, learning rate, epochs)
  • Includes error handling and troubleshooting guidance
  • Compatible with Google Colab environment
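A minimal sketch of the LoRA parameter configuration the notebook demonstrates might look like this. The exact field names expected by the Fireworks fine-tuning API are assumptions here; only the base model identifier comes from the PR description.

```python
# Illustrative hyperparameter block for a LoRA fine-tuning job.
lora_config = {
    # Fireworks-style model path for the base model named in this PR.
    "base_model": "accounts/fireworks/models/llama-v3p1-8b-instruct",
    "lora_rank": 8,          # low-rank dimension; higher = more capacity
    "learning_rate": 1e-4,   # a common starting point for LoRA adapters
    "epochs": 3,             # passes over the training dataset
}
```

Because LoRA trains only small low-rank adapter matrices, the rank is the main lever trading adapter capacity against training cost.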

Testing Done

  • ✅ Notebook runs end-to-end in Google Colab
  • ✅ All code cells execute successfully
  • ✅ Resource cleanup steps verified

Documentation

  • Added detailed comments and explanations throughout the notebook
  • Included links to relevant documentation
  • Added parameter explanations and best practices
