Commit 00100f2

Merge: [DLRM/PyT] 21.10 container update with added graph support and BYOD capability
2 parents: 41cc33f + 4154bda
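
The "graph support" in the commit title refers to CUDA Graphs, which the 21.10 NGC PyTorch container exposes through the native torch.cuda.graph API. As a rough sketch only (the model, shapes, and training loop here are hypothetical stand-ins for illustration, not code from this commit), whole-iteration capture and replay follows the pattern documented for torch.cuda.graph:

import torch

# Hypothetical toy model and optimizer; stand-ins, not DLRM code.
model = torch.nn.Linear(64, 64).cuda()
loss_fn = torch.nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Static buffers: graph replay reuses fixed memory addresses.
static_x = torch.randn(32, 64, device='cuda')
static_y = torch.randn(32, 64, device='cuda')

# Warm up on a side stream before capture, per the PyTorch docs.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        opt.zero_grad(set_to_none=True)
        loss_fn(model(static_x), static_y).backward()
        opt.step()
torch.cuda.current_stream().wait_stream(s)

# Capture one full training iteration into a graph.
g = torch.cuda.CUDAGraph()
opt.zero_grad(set_to_none=True)
with torch.cuda.graph(g):
    static_loss = loss_fn(model(static_x), static_y)
    static_loss.backward()
    opt.step()

# Replay: copy fresh data into the static buffers, then launch the graph.
static_x.copy_(torch.randn(32, 64, device='cuda'))
static_y.copy_(torch.randn(32, 64, device='cuda'))
g.replay()

Replaying a captured graph launches the entire iteration as a single unit, which removes per-op CPU launch overhead; the trade-off is that tensor shapes and buffer addresses must stay fixed between replays.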

File tree: 74 files changed, +14200 -14498 lines


PyTorch/Recommendation/DLRM/Dockerfile (+1 -1)

@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:21.04-py3
+ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:21.10-py3
 FROM ${FROM_IMAGE_NAME}
 
 ADD requirements.txt .

PyTorch/Recommendation/DLRM/README.md (+555 -343)

Large diffs are not rendered by default.

PyTorch/Recommendation/DLRM/bind.sh (-1)

@@ -196,7 +196,6 @@ esac
 ################################################################################
 
 if [ "${#numactl_args[@]}" -gt 0 ] ; then
-    set -x
     exec numactl "${numactl_args[@]}" -- "${@}"
 else
     exec "${@}"

PyTorch/Recommendation/DLRM/dlrm/cuda_ext/dot_based_interact.py (+3 -5)

@@ -14,28 +14,26 @@
 
 import torch
 from torch.autograd import Function
-from apex import amp
+
 
 if torch.cuda.get_device_capability()[0] >= 8:
-    print('Using the Ampere-optimized dot interaction kernels')
     from dlrm.cuda_ext import interaction_ampere as interaction
 else:
-    print('Using the Volta-optimized dot interaction kernels')
     from dlrm.cuda_ext import interaction_volta as interaction
 
 
 class DotBasedInteract(Function):
     """ Forward and Backward paths of cuda extension for dot-based feature interact."""
 
     @staticmethod
-    @amp.half_function
+    @torch.cuda.amp.custom_fwd(cast_inputs=torch.half)
     def forward(ctx, input, bottom_mlp_output):
         output = interaction.dotBasedInteractFwd(input, bottom_mlp_output)
         ctx.save_for_backward(input)
         return output
 
     @staticmethod
-    @amp.half_function
+    @torch.cuda.amp.custom_bwd
     def backward(ctx, grad_output):
         input, = ctx.saved_tensors
         grad, mlp_grad = interaction.dotBasedInteractBwd(input, grad_output)
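
This file swaps apex's amp.half_function decorators for PyTorch's native autocast hooks for custom autograd Functions. A minimal self-contained sketch of the same pattern, with a toy element-wise op standing in for the compiled interaction kernels (the op and names below are hypothetical, for illustration only):

import torch
from torch.autograd import Function

class ScaledDot(Function):
    """Toy custom Function showing the native-AMP decorator pattern."""

    @staticmethod
    @torch.cuda.amp.custom_fwd(cast_inputs=torch.half)  # cast fp32 CUDA inputs to fp16 under autocast
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        return (a * b).sum(dim=-1)

    @staticmethod
    @torch.cuda.amp.custom_bwd  # run backward in the same autocast state as forward
    def backward(ctx, grad_output):
        a, b = ctx.saved_tensors
        g = grad_output.unsqueeze(-1)
        return g * b, g * a  # d/da and d/db of sum(a*b, dim=-1)

a = torch.randn(8, 16, device='cuda', requires_grad=True)
b = torch.randn(8, 16, device='cuda', requires_grad=True)
with torch.cuda.amp.autocast():
    out = ScaledDot.apply(a, b)  # forward sees half-precision inputs
out.float().sum().backward()

With cast_inputs=torch.half, autograd records the casts, so half-precision gradients flow back to the fp32 leaves automatically; this reproduces the old apex amp.half_function behavior without the extra dependency.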
