
Remove FP16_Optimizer patch for DeepSpeed #2213

Open

Rohan138 wants to merge 1 commit into main
Conversation

@Rohan138 (Contributor) commented Mar 11, 2025

DeepSpeed has shipped the FusedAdam `FP16_Optimizer` originally from NVIDIA/apex since 2022, and its copy is better maintained with respect to DeepSpeed's `grad_norm` handling, MoE training, etc.: https://github.com/deepspeedai/DeepSpeed/blob/master/deepspeed/runtime/fp16/fused_optimizer.py
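As a quick sanity check, you can inspect which class actually ends up wrapping the optimizer. This is a hedged sketch: `model`, `optimizer`, and `dataloader` are placeholders for an existing Accelerate + DeepSpeed fp16 training setup, and the attribute chain may differ across versions.

```python
from accelerate import Accelerator

# Assumes an fp16 DeepSpeed config is active for this Accelerate run and that
# `model`, `optimizer`, and `dataloader` come from an existing training script.
accelerator = Accelerator()
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# The outer object is accelerate's DeepSpeedOptimizerWrapper; the inner one is
# DeepSpeed's own FP16_Optimizer around FusedAdam, i.e. the class that the
# onnxruntime modifier registry does not know about.
print(type(optimizer))            # accelerate.utils.deepspeed.DeepSpeedOptimizerWrapper
print(type(optimizer.optimizer))  # deepspeed.runtime.fp16.fused_optimizer.FP16_Optimizer
```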

Currently this line gives the following warning:

```
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/onnxruntime/training/optim/_modifier_registry.py:56: UserWarning: Skip modifying optimizer because of optimizer name not found in the registry: accelerate.utils.deepspeed.DeepSpeedOptimizerWrapper
```

The warning is actually caused by `get_full_qualified_type_name(optimizer.optimizer)` returning `'deepspeed.runtime.fp16.fused_optimizer.FP16_Optimizer'`, which is not in the onnxruntime registry: https://github.com/microsoft/onnxruntime/blob/main/orttraining/orttraining/python/training/optim/_modifier_registry.py

In other words, the FP16_Optimizer patch from onnxruntime (https://github.com/microsoft/onnxruntime/blob/main/orttraining/orttraining/python/training/optim/fp16_optimizer.py) does not actually do anything here; it just falls back to DeepSpeed's fused Adam optimizer, so both the patched line and the warning it causes are redundant.
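For illustration, here is a minimal sketch of the lookup that falls through. It is an approximation of the logic in onnxruntime's `_modifier_registry.py` / `fp16_optimizer.py`, not the actual code; the registry contents and function names are simplified stand-ins.

```python
import warnings

# Illustrative stand-in for onnxruntime's modifier registry; the real registry
# maps apex / DeepSpeed-ZeRO optimizer names to modifier classes, and
# deepspeed.runtime.fp16.fused_optimizer.FP16_Optimizer is not among them.
REGISTERED_MODIFIERS = {
    # "apex....": ApexModifier,
    # "deepspeed.runtime.zero....": ZeROModifier,
}


def get_full_qualified_type_name(obj):
    # Approximation of the helper mentioned above: "<module>.<class>".
    cls = type(obj)
    return f"{cls.__module__}.{cls.__qualname__}"


def maybe_patch_fp16_optimizer(optimizer):
    # DeepSpeed's FP16_Optimizer name is not a registry key, so this branch
    # only emits the UserWarning and returns the optimizer untouched.
    name = get_full_qualified_type_name(optimizer)
    if name not in REGISTERED_MODIFIERS:
        warnings.warn(
            "Skip modifying optimizer because of optimizer name not found "
            f"in the registry: {name}"
        )
        return optimizer
    # (In the real code, a matching modifier would patch the optimizer's step.)
    return optimizer
```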

What does this PR do?

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Who can review?

@JingyaHuang
