Finetuning GPT2-large with an 8-bit optimizer

A demo of finetuning GPT2-large using the 8-bit optimizer from the bitsandbytes library.
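
The core change is a one-line optimizer swap. Below is a minimal sketch of the idea, assuming the standard transformers API; the model name, prompt, and learning rate are illustrative, not taken from the notebook:

```python
import bitsandbytes as bnb
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2-large").cuda()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-large")

# Drop-in replacement for torch.optim.Adam that keeps optimizer state
# in 8 bits instead of 32, roughly quartering the optimizer's memory.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-5)

# One illustrative training step (labels == input_ids for a causal LM).
batch = tokenizer("Example training text.", return_tensors="pt").to("cuda")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```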

How to run

  1. Download the notebook
  2. Upload it to your Kaggle account
  3. Pick the GPU accelerator (it should give you a Tesla P100 with 16 GB of VRAM)
  4. Run the notebook

Takeaway

TL;DR: the 8-bit optimizer reduces the memory footprint from 14 GB to 9.7 GB, allowing, for example, training the model in Google Colaboratory on a Tesla K80.
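
To check the footprint on your own hardware, you can read PyTorch's peak-memory counter after a training step. A sketch (the training step itself is elided):

```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run one forward/backward/optimizer step here ...
peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
print(f"Peak GPU memory: {peak_gb:.1f} GB")
```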
