Decompose aten.channel_shuffle op (#4243) #4259
Conversation
…issue to handle this. For now filtering out failing tests.
…ffle operator was already included.
//
// gets replaced with
// X = input.split_dim(...) # shape (N, g, C, *)
// X = X.permute(0, N+1, N, N+2, N+3)
I think here by N you actually mean dimN? Can you update accordingly?
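The split_dim/permute/collapse sequence sketched in the comment above can be illustrated concretely. Below is a minimal NumPy sketch (not the torch-mlir implementation) of channel shuffle on an NCHW tensor, written as the same three steps; the function name and shapes are illustrative only:

```python
import numpy as np

def channel_shuffle(x: np.ndarray, g: int) -> np.ndarray:
    """Channel shuffle as split -> permute -> collapse; g must divide C."""
    n, c, *rest = x.shape
    # split_dim: (N, C, *) -> (N, g, C//g, *)
    x = x.reshape(n, g, c // g, *rest)
    # permute: swap the two channel sub-dimensions -> (N, C//g, g, *)
    x = x.transpose(0, 2, 1, *range(3, x.ndim))
    # collapse: (N, C//g, g, *) -> (N, C, *)
    return x.reshape(n, c, *rest)

x = np.arange(1 * 8 * 4 * 4).reshape(1, 8, 4, 4)
y = channel_shuffle(x, 2)
# for C=8, g=2 the channels are reordered to 0, 4, 1, 5, 2, 6, 3, 7
```

Here the permutation indices are written for a concrete 4-D input; for a rank-R input the permuted axes are (0, 2, 1, 3, ..., R), which is what the `dimN`-style offsets in the comment above express generically.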
@annotate_args(
    [
        None,
        ([1, 8, 4, 4], torch.float32, True),
Are dynamic input dims supported?
@register_test_case(module_factory=lambda: ChannelShuffle1D())
def ChannelShuffle1D_basic(module, tu: TestUtils):
This is not really 1D data, since it's being reshaped to 3D before being passed to the channel_shuffle op?
// CHECK-DAG: %[[PERMUTE:.*]] = torch.aten.permute %[[EXPAND]], %[[PERMLIST]] : !torch.vtensor<[1,4,2,4,4],f32>, !torch.list<int> -> !torch.vtensor<[1,2,4,4,4],f32>
// CHECK-DAG: %[[COLLAPSE:.*]] = torch.prims.collapse %[[PERMUTE]], %[[C1]], %[[C2]] : !torch.vtensor<[1,2,4,4,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[1,8,4,4],f32>
Shouldn't all of these be CHECK only, since the order has to be maintained?
// CHECK-DAG: %[[C1:.*]] = torch.constant.int 1
// CHECK-DAG: %[[C3:.*]] = torch.constant.int 3
// CHECK-DAG: %[[C4:.*]] = torch.constant.int 4
// CHECK-DAG: %[[PERMLIST:.*]] = torch.prim.ListConstruct %[[C0]], %[[C2]], %[[C1]], %[[C3]], %[[C4]] : (!torch.int, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.list<int>
You should be able to replace 'CHECK-DAG' by 'CHECK' starting from line 858 to the end.
// CHECK-DAG: %[[EXPAND:.*]] = torch.prims.split_dim %[[ARG0]], %[[C1]], %[[C2]] : !torch.vtensor<[1,8,4,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[1,4,2,4,4],f32>
// CHECK-DAG: %[[PERMUTE:.*]] = torch.aten.permute %[[EXPAND]], %[[PERMLIST]] : !torch.vtensor<[1,4,2,4,4],f32>, !torch.list<int> -> !torch.vtensor<[1,2,4,4,4],f32>
// CHECK-DAG: %[[COLLAPSE:.*]] = torch.prims.collapse %[[PERMUTE]], %[[C1]], %[[C2]] : !torch.vtensor<[1,2,4,4,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[1,8,4,4],f32>
// return %[[COLLAPSE]] : !torch.vtensor<[1,8,4,4],f32>
missing // CHECK in line# 862?
Support for the channel shuffle operator is added by torch dialect level decomposition (similar to the pixel_shuffle operation).
The decomposition is based on this specification:
https://docs.pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html
and implementation:
aten/src/ATen/native/ChanelShuffle.cpp
https://github.com/pytorch/pytorch/blob/23491519d288dedb2a54cfad5fef7fcb2ad8eade/aten/src/ATen/native/ChanelShuffle.cpp#L4
Note that the operator consists of an expansion of the channel dimension, a permutation of the expanded channel dimensions, and a contraction of the channel dimensions back to the original size. For example, an input of shape 1x8x4x4 with a group size of 2 generates the torch-dialect MLIR below.
module {
  func.func @channel_shuffle(%arg0: !torch.vtensor<[1,8,4,4],f32>) -> !torch.vtensor<[1,8,4,4],f32> {
    %c0 = torch.constant.int 0
    %c1 = torch.constant.int 1
    %c2 = torch.constant.int 2
    %c3 = torch.constant.int 3
    %c4 = torch.constant.int 4
    %dims = torch.prim.ListConstruct %c0, %c2, %c1, %c3, %c4 : (!torch.int, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.list<int>
    %expand = torch.prims.split_dim %arg0, %c1, %c2 : !torch.vtensor<[1,8,4,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[1,4,2,4,4],f32>
    %permute = torch.aten.permute %expand, %dims : !torch.vtensor<[1,4,2,4,4],f32>, !torch.list<int> -> !torch.vtensor<[1,2,4,4,4],f32>
    %collapse = torch.prims.collapse %permute, %c1, %c2 : !torch.vtensor<[1,2,4,4,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[1,8,4,4],f32>
    return %collapse : !torch.vtensor<[1,8,4,4],f32>
  }
}
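The net effect of the expand/permute/collapse sequence is a fixed permutation of the channel indices. As a hedged illustration (plain Python, not part of this PR): for C channels and g groups, output channel j reads input channel (j % g) * (C // g) + j // g, which is exactly the reshape-transpose-flatten ordering described in the PyTorch ChannelShuffle documentation:

```python
def shuffle_order(c: int, g: int) -> list[int]:
    """Input channel read by each output channel under channel shuffle."""
    assert c % g == 0, "group count must divide the channel count"
    return [(j % g) * (c // g) + j // g for j in range(c)]

print(shuffle_order(8, 2))  # [0, 4, 1, 5, 2, 6, 3, 7]
```

For C=8 and g=2 the two groups {0,1,2,3} and {4,5,6,7} are interleaved, which is the cross-group channel mixing that ShuffleNet relies on.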
References:
PyTorch ChannelShuffle definition:
https://docs.pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices (2017):
https://arxiv.org/pdf/1707.01083
A Lightweight Dendritic ShuffleNet for Medical Image Classification (2025):
https://www.jstage.jst.go.jp/article/transinf/advpub/0/advpub_2024EDP7059/_pdf
PyTorch implementation:
aten/src/ATen/native/ChanelShuffle.cpp
https://github.com/pytorch/pytorch/blob/23491519d288dedb2a54cfad5fef7fcb2ad8eade/aten/src/ATen/native/ChanelShuffle.cpp#L4
Resolves #4243
@newling @silvasean @rsuderman @zjgarvey @penguin-wwy @rafaelubalmw @sahas3 @vinitdeodhar @alaa-ali @dixinzhou @ramiro050 @qedawkins