
Some questions on scripts and runtime #68

Open
kevin3567 opened this issue Aug 29, 2024 · 1 comment

@kevin3567

Hi,

I have a few questions about the scripts and runtime:

  1. What is the execution time of your experiment when using 112 A100 GPUs?
  2. I saw the script scripts/cpt/fpt.sh, which uses 1 node with 8 GPUs. Is this also for pretraining a LLaMA-MoE model? If so, what is the execution time of that experiment?
  3. Is there any way to run the pretraining on 2 (or even 1) GPUs for proof-of-concept purposes? Reducing the architecture size is probably the first thing to try, but I am wondering whether you have any experience with model pretraining in low-resource settings.

Thanks in advance.

@Spico197
Collaborator

Spico197 commented Oct 8, 2024

Hi there, sorry for the late response. Thank you very much for your interest in our project ❤️

  1. For LLaMA-MoE-3.5B (2/8), it takes about one week to reproduce the experiment with 112 A100 GPUs.
  2. That script is for testing purposes and does not reproduce the results reported in the technical report.
  3. In this case, I would recommend testing on smaller LLMs; the SmolLM series may be a good choice. That way, you don't have to train on 200B tokens, and the training time is greatly reduced (see the sketch below this list).
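
For point 3, here is a minimal sketch of such a low-resource smoke test: a short continual-pretraining pass on a single GPU with the Hugging Face transformers Trainer. The SmolLM checkpoint, the WikiText slice, and all hyperparameters below are illustrative assumptions, not settings taken from this repo's scripts.

```python
# Hypothetical single-GPU smoke test: continue pretraining a small causal LM
# on a tiny text slice. All choices here are illustrative, not from LLaMA-MoE.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "HuggingFaceTB/SmolLM-135M"  # assumed checkpoint; any small LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padding in the collator
model = AutoModelForCausalLM.from_pretrained(model_name)

# A 1% slice of a public corpus keeps the run to minutes on one GPU.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
raw = raw.filter(lambda ex: len(ex["text"].strip()) > 0)  # drop empty lines

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="smoke-test",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,  # stands in for the missing GPUs
    max_steps=100,                  # proof of concept, not convergence
    learning_rate=2e-5,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal LM) labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Gradient accumulation here is the usual stand-in for the missing GPUs: the effective batch size stays moderate while per-step memory still fits on a single card.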
