
Decompose aten.channel_shuffle op (#4243) #4259


Open: wants to merge 19 commits into main

Conversation

ivangarcia44 (Contributor)

Support for the channel shuffle operator is added via a Torch-dialect-level decomposition (similar to the pixel_shuffle operation).

The decomposition is based on this specification:
https://docs.pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html
and implementation:
aten/src/ATen/native/ChanelShuffle.cpp
https://github.com/pytorch/pytorch/blob/23491519d288dedb2a54cfad5fef7fcb2ad8eade/aten/src/ATen/native/ChanelShuffle.cpp#L4

Note that the operator decomposes into an expansion of the channel dimension, a permute of the expanded channel dimensions, and a contraction of the channel dimensions back to the original size. For example, an input tensor of shape 1x8x4x4 with a group size of 2 generates the Torch-dialect MLIR below.

module {
  func.func @channel_shuffle(%arg0: !torch.vtensor<[1, 8, 4, 4], f32>) -> !torch.vtensor<[1, 8, 4, 4], f32> {
    %c0 = torch.constant.int 0
    %c1 = torch.constant.int 1
    %c2 = torch.constant.int 2
    %c3 = torch.constant.int 3
    %c4 = torch.constant.int 4
    %dims = torch.prim.ListConstruct %c0, %c2, %c1, %c3, %c4 : (!torch.int, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.list<int>

    %reshaped = torch.prims.split_dim %arg0, %c1, %c2 : !torch.vtensor<[1, 8, 4, 4], f32>, !torch.int, !torch.int -> !torch.vtensor<[1, 4, 2, 4, 4], f32>

    %permuted = torch.aten.permute %reshaped, %dims : !torch.vtensor<[1, 4, 2, 4, 4], f32>, !torch.list<int> -> !torch.vtensor<[1, 2, 4, 4, 4], f32>

    %collapsed = torch.prims.collapse %permuted, %c1, %c2 : !torch.vtensor<[1, 2, 4, 4, 4], f32>, !torch.int, !torch.int -> !torch.vtensor<[1, 8, 4, 4], f32>

    return %collapsed : !torch.vtensor<[1, 8, 4, 4], f32>
  }
}
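The split/permute/collapse sequence in the IR above amounts to a reindexing of the channel axis. A minimal pure-Python sketch of just that channel reindexing (`channel_shuffle_channels` is a hypothetical helper, not part of the PR; spatial dimensions are untouched):

```python
def channel_shuffle_channels(channels, groups):
    # Hypothetical helper mirroring the IR above on the channel axis only.
    c = len(channels)
    # expansion (torch.prims.split_dim): C -> (C // groups, groups)
    grid = [channels[i * groups:(i + 1) * groups] for i in range(c // groups)]
    # permute (torch.aten.permute): swap the two new channel axes
    transposed = list(zip(*grid))
    # contraction (torch.prims.collapse): flatten back to C channels
    return [ch for row in transposed for ch in row]

print(channel_shuffle_channels(list(range(8)), 2))
# -> [0, 2, 4, 6, 1, 3, 5, 7]
```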
References:
- PyTorch ChannelShuffle definition: https://docs.pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html
- ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices (2017): https://arxiv.org/pdf/1707.01083
- A Lightweight Dendritic ShuffleNet for Medical Image Classification (2025): https://www.jstage.jst.go.jp/article/transinf/advpub/0/advpub_2024EDP7059/_pdf
- PyTorch implementation (aten/src/ATen/native/ChanelShuffle.cpp): https://github.com/pytorch/pytorch/blob/23491519d288dedb2a54cfad5fef7fcb2ad8eade/aten/src/ATen/native/ChanelShuffle.cpp#L4

Resolves #4243

@newling @silvasean @rsuderman @zjgarvey @penguin-wwy @rafaelubalmw @sahas3 @vinitdeodhar @alaa-ali @dixinzhou @ramiro050 @qedawkins

@ivangarcia44 ivangarcia44 marked this pull request as draft July 8, 2025 21:03
@ivangarcia44 ivangarcia44 marked this pull request as ready for review July 8, 2025 21:58
//
// gets replaced with
// X = input.split_dim(...) # shape (N, g, C, *)
// X = X.permute(0, N+1, N, N+2, N+3)
Member:

I think here by N you actually mean dimN ? Can you update accordingly?
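For reference, the permutation pattern the comment is pointing at (with dimN the index of the original channel dimension) can be sketched with a small hypothetical helper, not part of the PR:

```python
def split_dim_permutation(rank, dim_n):
    # Hypothetical helper: builds the permutation
    # (0, ..., dimN+1, dimN, dimN+2, ...) that swaps the two axes
    # produced by split_dim at channel dimension index dim_n.
    perm = list(range(rank))
    perm[dim_n], perm[dim_n + 1] = perm[dim_n + 1], perm[dim_n]
    return perm

print(split_dim_permutation(5, 1))
# -> [0, 2, 1, 3, 4]
```

For the 1x8x4x4 example this reproduces the `%dims` list (%c0, %c2, %c1, %c3, %c4) built in the decomposition.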

@annotate_args(
[
None,
([1, 8, 4, 4], torch.float32, True),
Member:

Are dynamic input dims supported?



@register_test_case(module_factory=lambda: ChannelShuffle1D())
def ChannelShuffle1D_basic(module, tu: TestUtils):
Member:

This is not really 1D data, since it's being reshaped to 3D before being passed to the channel shuffle op?

Comment on lines +864 to +865
// CHECK-DAG: %[[PERMUTE:.*]] = torch.aten.permute %[[EXPAND]], %[[PERMLIST]] : !torch.vtensor<[1,4,2,4,4],f32>, !torch.list<int> -> !torch.vtensor<[1,2,4,4,4],f32>
// CHECK-DAG: %[[COLLAPSE:.*]] = torch.prims.collapse %[[PERMUTE]], %[[C1]], %[[C2]] : !torch.vtensor<[1,2,4,4,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[1,8,4,4],f32>
Member:

Shouldn't all these be CHECK only since the order has to be maintained?

// CHECK-DAG: %[[C1:.*]] = torch.constant.int 1
// CHECK-DAG: %[[C3:.*]] = torch.constant.int 3
// CHECK-DAG: %[[C4:.*]] = torch.constant.int 4
// CHECK-DAG: %[[PERMLIST:.*]] = torch.prim.ListConstruct %[[C0]], %[[C2]], %[[C1]], %[[C3]], %[[C4]] : (!torch.int, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.list<int>
Contributor:

You should be able to replace 'CHECK-DAG' by 'CHECK' starting from line 858 to the end.

// CHECK-DAG: %[[EXPAND:.*]] = torch.prims.split_dim %[[ARG0]], %[[C1]], %[[C2]] : !torch.vtensor<[1,8,4,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[1,4,2,4,4],f32>
// CHECK-DAG: %[[PERMUTE:.*]] = torch.aten.permute %[[EXPAND]], %[[PERMLIST]] : !torch.vtensor<[1,4,2,4,4],f32>, !torch.list<int> -> !torch.vtensor<[1,2,4,4,4],f32>
// CHECK-DAG: %[[COLLAPSE:.*]] = torch.prims.collapse %[[PERMUTE]], %[[C1]], %[[C2]] : !torch.vtensor<[1,2,4,4,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[1,8,4,4],f32>
// return %[[COLLAPSE]] : !torch.vtensor<[1,8,4,4],f32>
Contributor:

Missing // CHECK on line 862?


Successfully merging this pull request may close these issues.

Provide torch to linalg lowering for the torch.aten.channel_shuffle operation