Pull request #20936: Fix wrong behavior of DDPStrategy option with simple GAN training using DDP
Open
samsara-ku wants to merge 19 commits into Lightning-AI:master from samsara-ku:bugfix/gan-ddp-training
Changes from all commits (19 commits):
00727e8 samsara-ku: add: `MultiModelDDPStrategy` and its execution codes
e6b061a pre-commit-ci[bot]: [pre-commit.ci] auto fixes from pre-commit.com hooks
aa9b027 samsara-ku: refactor: extract block helper in GAN example
5503d3a samsara-ku: Merge pull request #1 from samsara-ku/codex/add-tests-for-multimodeld…
dc128b4 Borda: Merge branch 'master' into bugfix/gan-ddp-training
1fb4027 Borda: with
ec62397 Borda: Apply suggestions from code review
ece7d38 Borda: formating
8b1fe23 samsara-ku: misc: resolve some review comments for product consistency
4b22284 samsara-ku: misc: merge gan training example, add docstring of MultiModelDDPStrategy
97dabf8 samsara-ku: misc: add docstring of MultiModelDDPStrategy
033e8e8 pre-commit-ci[bot]: [pre-commit.ci] auto fixes from pre-commit.com hooks
57e864a Borda: update
f157f59 SkafteNicki: Merge branch 'master' into bugfix/gan-ddp-training
24872e7 Borda: long line
3891102 SkafteNicki: Merge branch 'master' into bugfix/gan-ddp-training
8121337 samsara-ku: add: set base test case and __init__py for MultiModelDDPStrategy
c442fc3 pre-commit-ci[bot]: [pre-commit.ci] auto fixes from pre-commit.com hooks
ab9b2dd Borda: Merge branch 'master' into bugfix/gan-ddp-training
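For context on the PR title: a plain `DDPStrategy` wraps the whole `LightningModule` in a single `DistributedDataParallel` module, so in a GAN with manual optimization each `training_step` only touches one submodel's parameters and the single wrapper can report the other submodel's parameters as unused. A strategy named `MultiModelDDPStrategy` plausibly wraps each top-level submodule separately instead. The following is a minimal sketch of that wrapping idea under that assumption; `Wrapper`, `GAN`, and `wrap_submodules` are illustrative stand-ins (not the real Lightning API), and `Wrapper` replaces `DistributedDataParallel` so the example runs without a process group:

```python
class Wrapper:
    """Stand-in for DistributedDataParallel: just records what it wraps."""

    def __init__(self, module):
        self.module = module


class GAN:
    """Minimal stand-in for a LightningModule holding two submodels."""

    def __init__(self):
        self.generator = object()
        self.discriminator = object()

    def children(self):
        # names of the top-level submodules, mirroring nn.Module.named_children()
        return {"generator": self.generator, "discriminator": self.discriminator}


def wrap_whole_model(model):
    # What DDPStrategy effectively does: one wrapper around the whole module.
    return Wrapper(model)


def wrap_submodules(model):
    # What a multi-model strategy could do instead: one wrapper per submodel,
    # so each optimizer step only synchronizes the gradients it produced.
    for name, child in model.children().items():
        setattr(model, name, Wrapper(child))
    return model


gan = wrap_submodules(GAN())
print(type(gan.generator).__name__)      # Wrapper
print(type(gan.discriminator).__name__)  # Wrapper
```

With per-submodule wrapping, a discriminator-only step never registers generator parameters with that step's reducer, which is the failure mode the whole-model wrapping runs into.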
New test file (+174 lines):

# Copyright The Lightning AI team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os

import pytest
import torch
from torch.multiprocessing import ProcessRaisedException

from lightning.pytorch import Trainer
from lightning.pytorch.strategies import MultiModelDDPStrategy
from lightning.pytorch.trainer import seed_everything
from tests_pytorch.helpers.advanced_models import BasicGAN
from tests_pytorch.helpers.runif import RunIf

@RunIf(min_cuda_gpus=2, standalone=True, sklearn=True)
def test_multi_gpu_with_multi_model_ddp_fit_only(tmp_path):
    model = BasicGAN()
    # train_dataloader() is an instance method returning a DataLoader, so
    # create the model first and pass the loader via `train_dataloaders`
    # (`datamodule=` expects a LightningDataModule, not a raw DataLoader)
    dataloader = model.train_dataloader()
    trainer = Trainer(
        default_root_dir=tmp_path, max_epochs=1, accelerator="gpu", devices=-1, strategy=MultiModelDDPStrategy()
    )
    trainer.fit(model, train_dataloaders=dataloader)


@RunIf(min_cuda_gpus=2, standalone=True, sklearn=True)
def test_multi_gpu_with_multi_model_ddp_predict_only(tmp_path):
    model = BasicGAN()
    dataloader = model.train_dataloader()
    trainer = Trainer(
        default_root_dir=tmp_path, max_epochs=1, accelerator="gpu", devices=-1, strategy=MultiModelDDPStrategy()
    )
    trainer.predict(model, dataloaders=dataloader)


@RunIf(min_cuda_gpus=2, standalone=True, sklearn=True)
def test_multi_gpu_multi_model_ddp_fit_predict(tmp_path):
    seed_everything(4321)
    model = BasicGAN()
    dataloader = model.train_dataloader()
    trainer = Trainer(
        default_root_dir=tmp_path, max_epochs=1, accelerator="gpu", devices=-1, strategy=MultiModelDDPStrategy()
    )
    trainer.fit(model, train_dataloaders=dataloader)
    trainer.predict(model, dataloaders=dataloader)

class UnusedParametersBasicGAN(BasicGAN):
    def __init__(self):
        super().__init__()
        mnist_shape = (1, 28, 28)
        self.intermediate_layer = torch.nn.Linear(mnist_shape[-1], mnist_shape[-1])

    def training_step(self, batch, batch_idx):
        # run the extra layer under no_grad so its parameters never receive
        # gradients, which triggers DDP's unused-parameters error
        with torch.no_grad():
            img = self.intermediate_layer(batch[0])
        batch[0] = img  # modify the batch to use the intermediate layer result
        return super().training_step(batch, batch_idx)


@RunIf(standalone=True)
def test_find_unused_parameters_ddp_spawn_raises():
    """Test that the DDP strategy can change PyTorch's error message so that it's more useful for Lightning users."""
    trainer = Trainer(
        accelerator="cpu",
        devices=1,
        strategy=MultiModelDDPStrategy(),
        max_steps=2,
        logger=False,
    )
    with pytest.raises(
        ProcessRaisedException, match="It looks like your LightningModule has parameters that were not used in"
    ):
        trainer.fit(UnusedParametersBasicGAN())


@RunIf(standalone=True)
def test_find_unused_parameters_ddp_exception():
    """Test that the DDP strategy can change PyTorch's error message so that it's more useful for Lightning users."""
    trainer = Trainer(
        accelerator="cpu",
        devices=1,
        strategy=MultiModelDDPStrategy(),
        max_steps=2,
        logger=False,
    )
    with pytest.raises(RuntimeError, match="It looks like your LightningModule has parameters that were not used in"):
        trainer.fit(UnusedParametersBasicGAN())

class CheckOptimizerDeviceModel(BasicGAN):
    def configure_optimizers(self):
        assert all(param.device.type == "cuda" for param in self.parameters())
        # the optimizers must be returned, otherwise training has none to use
        return super().configure_optimizers()


@RunIf(min_cuda_gpus=1)
def test_model_parameters_on_device_for_optimizer():
    """Test that the strategy has moved the parameters to the device by the time the optimizer gets created."""
    model = CheckOptimizerDeviceModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        fast_dev_run=1,
        accelerator="gpu",
        devices=1,
        strategy=MultiModelDDPStrategy(),
    )
    trainer.fit(model)

class BasicGANCPU(BasicGAN):
    def on_train_start(self) -> None:
        # make sure that the model is on CPU when training
        assert self.device == torch.device("cpu")


@RunIf(skip_windows=True)
def test_multi_model_ddp_with_cpu():
    """Tests if device is set correctly when training for MultiModelDDPStrategy."""
    trainer = Trainer(
        accelerator="cpu",
        devices=-1,
        strategy=MultiModelDDPStrategy(),
        fast_dev_run=True,
    )
    # assert strategy attributes for device setting
    assert isinstance(trainer.strategy, MultiModelDDPStrategy)
    assert trainer.strategy.root_device == torch.device("cpu")
    model = BasicGANCPU()
    trainer.fit(model)


class BasicGANGPU(BasicGAN):
    def on_train_start(self) -> None:
        # make sure that the model is on GPU when training
        assert self.device == torch.device(f"cuda:{self.trainer.strategy.local_rank}")
        self.start_cuda_memory = torch.cuda.memory_allocated()


@RunIf(min_cuda_gpus=2, skip_windows=True, standalone=True)
def test_multi_model_ddp_with_gpus():
    """Tests if device is set correctly when training and after teardown for MultiModelDDPStrategy."""
    trainer = Trainer(
        accelerator="gpu",
        devices=-1,
        strategy=MultiModelDDPStrategy(),
        fast_dev_run=True,
        enable_progress_bar=False,
        enable_model_summary=False,
    )
    # assert strategy attributes for device setting
    assert isinstance(trainer.strategy, MultiModelDDPStrategy)
    local_rank = trainer.strategy.local_rank
    assert trainer.strategy.root_device == torch.device(f"cuda:{local_rank}")

    model = BasicGANGPU()

    trainer.fit(model)

    # assert after training, model is moved to CPU and memory is deallocated
    assert model.device == torch.device("cpu")
    cuda_memory = torch.cuda.memory_allocated()
    assert cuda_memory < model.start_cuda_memory
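The two `find_unused_parameters` tests above match on the error DDP raises when some registered parameters never receive gradients. The detection idea can be sketched without torch or a process group; `Param` and `find_unused` below are illustrative stand-ins for what DDP's reducer tracks internally, not real library APIs:

```python
# Hypothetical sketch (stand-in names, no torch) of the check DDP performs:
# after a backward pass, any registered parameter that never received a
# gradient is reported as unused, producing the error the tests match on.

class Param:
    def __init__(self, name):
        self.name = name
        self.grad = None  # set by a (simulated) backward pass


def find_unused(params):
    """Return the names of parameters that received no gradient."""
    return [p.name for p in params if p.grad is None]


params = [Param("generator.weight"), Param("intermediate_layer.weight")]
params[0].grad = 1.0  # only the generator parameter takes part in the loss

print(find_unused(params))  # ['intermediate_layer.weight']
```

This mirrors why `UnusedParametersBasicGAN` fails under DDP: its `intermediate_layer` runs under `torch.no_grad()`, so that layer's parameters never get gradients and show up in exactly this kind of unused list.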