
When saving a checkpoint in multi-node training with ZeRO-3 optimization, is the model.safetensors file the full model? #3381

Closed
KeshavSingh29 opened this issue Feb 6, 2025 · 1 comment


@KeshavSingh29

System Info

- `Accelerate` version: 1.0.0
- Platform: Linux-5.15.0-1061-nvidia-x86_64-with-glibc2.35
- `accelerate` bash location: /home/usr/miniconda3/envs/fa_clone/bin/accelerate
- Python version: 3.10.16
- Numpy version: 2.2.1
- PyTorch version (GPU?): 2.4.1+cu124 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 2015.56 GB
- GPU type: NVIDIA H100 80GB HBM3
- `Accelerate` default config:
        - compute_environment: LOCAL_MACHINE
        - distributed_type: DEEPSPEED
        - use_cpu: False
        - debug: False
        - num_processes: 32
        - machine_rank: 0
        - num_machines: 4
        - main_process_ip: 10.3.0.43
        - main_process_port: 56789
        - rdzv_backend: static
        - same_network: True
        - main_training_function: main
        - enable_cpu_affinity: False
        - deepspeed_config: {'deepspeed_config_file': 'config/ds_config.json', 'deepspeed_hostfile': 'config/hostfile.txt', 'deepspeed_multinode_launcher': 'pdsh', 'zero3_init_flag': True}
        - downcast_bf16: no
        - tpu_use_cluster: False
        - tpu_use_sudo: False
        - tpu_env: []
        - dynamo_config: {'dynamo_backend': 'AOT_EAGER'}

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • One of the scripts in the examples/ folder of Accelerate or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)
  • My own task or dataset (give details below)

Reproduction

Simple enough: I am training an LLM in a multi-node setup using DeepSpeed and Accelerate, saving the model checkpoints to central storage (NAS); FSDP is not involved.

My question is:
When saving the model checkpoint, a model.safetensors file is written by default. Is this the final model after the weights have been gathered from each node?

Usually I would just run zero_to_fp32.py to create the final model, but I want to know what exactly the safetensors file created by default at save time contains.

This is my DeepSpeed config file (using ZeRO-3 optimization):

```json
{
    "zero_optimization": {
        "stage": 3,
        "round_robin_gradients": true,
        "stage3_gather_16bit_weights_on_model_save": true,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "reduce_scatter": true,
        "contiguous_gradients": true
    },
    "steps_per_print": 100,
    "wall_clock_breakdown": false,
    "activation_checkpointing": {
        "partition_activations": true,
        "cpu_checkpointing": false,
        "contiguous_memory_optimization": true,
        "number_checkpoints": null
    }
}
```
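
For reference, `stage3_gather_16bit_weights_on_model_save: true` is the setting that lets the full 16-bit weights be gathered onto the main process at save time. Below is a minimal sketch of the usual Accelerate + DeepSpeed saving pattern under this config, not my exact script; the model name, training loop, and output path are placeholders:

```python
# Minimal sketch, assuming a transformers model prepared by Accelerate with
# the ZeRO-3 config above; model name and output path are placeholders.
import torch
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

accelerator = Accelerator()  # picks up the DeepSpeed config passed to `accelerate launch`
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model, optimizer = accelerator.prepare(model, optimizer)

# ... training loop ...

# Under ZeRO-3 each rank holds only a shard of the parameters.
# get_state_dict() gathers the full 16-bit weights onto the main process,
# which is what stage3_gather_16bit_weights_on_model_save enables.
state_dict = accelerator.get_state_dict(model)
accelerator.unwrap_model(model).save_pretrained(
    "checkpoints/final",  # placeholder output path
    state_dict=state_dict,
    save_function=accelerator.save,
    is_main_process=accelerator.is_main_process,
)
```

With recent transformers versions, save_pretrained writes model.safetensors by default, which is likely where the file in question comes from.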

Expected behavior

No model.safetensors file.

@KeshavSingh29
Author

Found out it was the final model. Accelerate apparently creates the final consolidated model when saving the checkpoint as well.
What I did (maybe helpful for someone else; a sketch follows the list):

  • Created a pytorch_model.bin using zero_to_fp32.py (I did not use the safetensors flag, because the tied weights of my embedding and lm_head cause an error when running with that flag)
  • Loaded the model from pytorch_model.bin and saved it as model.safetensors
  • Compared the weights and keys against the auto-saved file using torch.allclose()

The only problem is that the weights differ slightly at the floating-point level (diffs of 0.0001-0.0003). I still don't know how much this will affect model performance, but I can just use the auto-saved model.safetensors directly now.
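
A minimal sketch of that conversion and comparison, assuming a ZeRO-3 checkpoint directory; all paths here are hypothetical placeholders:

```python
# Minimal sketch of the workflow above; paths are hypothetical placeholders.
import torch
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
from safetensors.torch import load_file, save_file

ckpt_dir = "checkpoints/step_1000"  # hypothetical checkpoint directory

# 1) Consolidate the sharded ZeRO-3 checkpoint into a full fp32 state dict
#    (the in-process equivalent of running zero_to_fp32.py).
state_dict = get_fp32_state_dict_from_zero_checkpoint(ckpt_dir)

# 2) safetensors refuses tensors that share storage (e.g. tied embedding /
#    lm_head weights), so break the sharing by cloning before saving.
state_dict = {k: v.clone().contiguous() for k, v in state_dict.items()}
save_file(state_dict, "model_fp32.safetensors")

# 3) Compare against the model.safetensors written automatically at save time.
auto_saved = load_file(f"{ckpt_dir}/model.safetensors")
for key, tensor in state_dict.items():
    if key not in auto_saved:
        continue  # tied weights may be stored only once in the auto-saved file
    if not torch.allclose(tensor.float(), auto_saved[key].float(), atol=1e-3):
        print(f"mismatch beyond tolerance: {key}")
```

The small differences are most likely expected: the auto-saved model.safetensors holds the gathered 16-bit training weights (per stage3_gather_16bit_weights_on_model_save), while zero_to_fp32.py reconstructs full fp32 weights from the optimizer partitions, so the two copies differ by roundoff on the order of the 16-bit precision.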
