Commit ecdbea7

rohan-varma authored and facebook-github-bot committed
Fix DDP documentation (pytorch#46861)
Summary:
Pull Request resolved: pytorch#46861

Noticed that in the DDP documentation (https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html?highlight=distributeddataparallel) some examples used `torch.nn.DistributedDataParallel`; fix them to read `torch.nn.parallel.DistributedDataParallel`.

ghstack-source-id: 115453703

Test Plan: ci

Reviewed By: pritamdamania87, SciPioneer

Differential Revision: D24534486

fbshipit-source-id: 64b92dc8a55136c23313f7926251fe825a2cb7d5
1 parent 262bd64 commit ecdbea7

1 file changed: +2 −2 lines changed

torch/nn/parallel/distributed.py

+2 −2
@@ -329,7 +329,7 @@ class DistributedDataParallel(Module):
     Example::
 
         >>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')
-        >>> net = torch.nn.DistributedDataParallel(model, pg)
+        >>> net = torch.nn.parallel.DistributedDataParallel(model, pg)
     """
     def __init__(self, module, device_ids=None,
                  output_device=None, dim=0, broadcast_buffers=True,
@@ -626,7 +626,7 @@ def no_sync(self):
 
         Example::
 
-            >>> ddp = torch.nn.DistributedDataParallel(model, pg)
+            >>> ddp = torch.nn.parallel.DistributedDataParallel(model, pg)
             >>> with ddp.no_sync():
             >>>     for input in inputs:
             >>>         ddp(input).backward()  # no synchronization, accumulate grads
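
For context, the change is only to the documented import path: `DistributedDataParallel` lives under `torch.nn.parallel`, not directly under `torch.nn`. The sketch below is not part of the commit; it is a minimal, single-process illustration of the corrected spelling, assuming a `gloo` process group on localhost (real multi-GPU training would typically launch multiple processes and use the `nccl` backend, as in the docstring example above).

# Minimal sketch (not from this commit): single-process DDP on CPU over a
# "gloo" group, shown only to illustrate the torch.nn.parallel.* path.
import os
import torch
import torch.distributed as dist
import torch.nn as nn

# Hypothetical local rendezvous settings for a one-process group.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = nn.Linear(10, 10)
# Fully qualified name, as the fixed documentation now spells it.
ddp = nn.parallel.DistributedDataParallel(model)

with ddp.no_sync():
    # Gradients accumulate locally; no all-reduce is triggered here.
    ddp(torch.randn(20, 10)).sum().backward()
# Outside no_sync(), gradients are all-reduced as usual (a no-op with world_size=1).
ddp(torch.randn(20, 10)).sum().backward()

dist.destroy_process_group()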

0 commit comments
