
[ARM] ConvTranspose2d fails to fallback to CPU (Non-passthrough operation could not run on NPU) #17668

@jonasdaugalas

Description


🐛 Describe the bug

Compiling a simple transposed-convolution layer via aot_arm_compiler.py fails instead of falling back to CPU.

Here is the model under test:

import torch

ModelUnderTest = torch.nn.ConvTranspose2d(in_channels=3, out_channels=1, kernel_size=(5, 5), stride=(3, 3), padding=(2, 2))
ModelInputs = (torch.randn(1, 3, 4, 4),)
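For reference (not part of the original report), the layer runs fine in eager mode on CPU, which is the basis for expecting a CPU fallback. The output size follows the standard transposed-convolution formula:

```python
import torch

# The model from the report: stride (3, 3), which is what trips the
# NPU delegation (see the Vela constraint discussion below).
model = torch.nn.ConvTranspose2d(
    in_channels=3, out_channels=1, kernel_size=(5, 5),
    stride=(3, 3), padding=(2, 2),
)
x = torch.randn(1, 3, 4, 4)

# Eager CPU execution works, so a CPU fallback should be feasible.
# H_out = (H_in - 1) * stride - 2 * padding + kernel
#       = (4 - 1) * 3 - 2 * 2 + 5 = 10
y = model(x)
print(y.shape)  # torch.Size([1, 1, 10, 10])
```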

I am not sure how relevant the TFLite->Vela path is here, but according to the supported-ops documentation at https://gitlab.arm.com/artificial-intelligence/ethos-u/ethos-u-vela/-/blob/main/SUPPORTED_OPS.md?ref_type=heads, TRANSPOSE_CONV with stride > 2 is not supported, which is the case in the model defined above. If the same or similar constraints apply in the ExecuTorch flow, that would explain why the op cannot be delegated.
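The constraint can be sketched as a simple stride check. This is only an illustration of the documented Vela TRANSPOSE_CONV limit; `MAX_VELA_TCONV_STRIDE` and `within_stride_limit` are names I made up, and whether ExecuTorch applies the identical limit is an assumption, not verified here:

```python
import torch

# Hypothetical constant for the documented Vela limit (stride <= 2
# for TRANSPOSE_CONV); not an actual Vela/ExecuTorch API.
MAX_VELA_TCONV_STRIDE = 2

reported = torch.nn.ConvTranspose2d(3, 1, kernel_size=(5, 5),
                                    stride=(3, 3), padding=(2, 2))
variant = torch.nn.ConvTranspose2d(3, 1, kernel_size=(5, 5),
                                   stride=(2, 2), padding=(2, 2))

def within_stride_limit(m: torch.nn.ConvTranspose2d) -> bool:
    # nn.ConvTranspose2d exposes its stride as a tuple attribute.
    return all(s <= MAX_VELA_TCONV_STRIDE for s in m.stride)

print(within_stride_limit(reported))  # False: stride (3, 3) exceeds the limit
print(within_stride_limit(variant))   # True: stride (2, 2) is within it
```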

Expected behavior
Regardless of the reason for non-delegation, I would expect this operator to fall back to CPU (with an explanation of why it was not delegated).

Actual behavior
Instead, I get the following error, and no .pte file is generated:

::: Running command: /workspaces/nn-deploy-kit/.venv/bin/python -m examples.arm.aot_arm_compiler --model_name /workspaces/nn-deploy-kit/repro/convtranspose2d/mymodel.py --target ethos-u55-128 --system_config Ethos_U55_High_End_Embedded --memory_mode Shared_Sram --output /workspaces/nn-deploy-kit/repro/convtranspose2d/out_YDelegate_YQuantize_nFuseQDQ/model.pte --intermediates /workspaces/nn-deploy-kit/repro/convtranspose2d/out_YDelegate_YQuantize_nFuseQDQ/intermediates --quantize --delegate
::: CWD: /opt/executorch
W0224 09:51:48.048000 108724 torch/utils/flop_counter.py:45] triton not found; flop counting will not work for triton kernels
(the warning above repeats 12 more times)
/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/backends/arm/quantizer/quantization_config.py:171: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor).
  return torch.tensor(act_scale * weight_scale).to(
/usr/lib/python3.12/copyreg.py:99: FutureWarning: `isinstance(treespec, LeafSpec)` is deprecated, use `isinstance(treespec, TreeSpec) and treespec.is_leaf()` instead.
  return cls.__new__(cls, *args)
WARNING:root:Op aten.silu_.default was requested for preservation by partitioner.  This request is ignored because it is mutable.
(the two warnings above repeat)
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/opt/executorch/examples/arm/aot_arm_compiler.py", line 885, in <module>
    model_quant, edge = to_edge_TOSA_delegate(
                        ^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/executorch/examples/arm/aot_arm_compiler.py", line 787, in to_edge_TOSA_delegate
    edge = to_edge_transform_and_lower(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/exir/program/_program.py", line 116, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/exir/program/_program.py", line 1397, in to_edge_transform_and_lower
    edge_manager = edge_manager.to_backend(method_to_partitioner)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/exir/program/_program.py", line 116, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/exir/program/_program.py", line 1699, in to_backend
    new_edge_programs = to_backend(method_to_programs_and_partitioners)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/functools.py", line 909, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/exir/backend/backend_api.py", line 762, in _
    lower_all_submodules_to_backend(
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/exir/backend/backend_api.py", line 591, in lower_all_submodules_to_backend
    backend_name_to_subclass[backend_id].preprocess_multimethod(
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/exir/backend/backend_details.py", line 145, in preprocess_multimethod
    preprocess_result = cls.preprocess(program, compile_spec_for_program)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/backends/arm/ethosu/backend.py", line 111, in preprocess
    binary = EthosUBackend._compile_tosa_flatbuffer(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/backends/arm/ethosu/backend.py", line 73, in _compile_tosa_flatbuffer
    binary = vela_compile(
             ^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/backends/arm/arm_vela.py", line 134, in vela_compile
    return run(intermediate_path)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/backends/arm/arm_vela.py", line 77, in run
    vela.main(" ".join(args).split(" "))
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/ethosu/vela/vela.py", line 933, in main
    process_regor(
  File "/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/ethosu/vela/vela.py", line 142, in process_regor
    compiled_model = regor.compile(accelerator, network, fmt, system_config, options=options, verbose=True)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Non-passthrough operation (TransposeConv2D) could not run on NPU

Versions

Collecting environment information...
PyTorch version: 2.11.0.dev20251222+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.31.10
Libc version: glibc-2.39

Python version: 3.12.3 (main, Jan 8 2026, 11:30:50) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
CPU family: 6
Model: 140
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
BogoMIPS: 5990.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves vnmi avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] executorch==1.2.0a0+8981fe4
[pip3] numpy==2.4.2
[pip3] optree==0.18.0
[pip3] pytorch_tokenizers==1.0.1
[pip3] torch==2.11.0.dev20251222+cpu
[pip3] torchao==0.16.0+git026b76d12
[pip3] torchaudio==2.10.0.dev20251222+cpu
[pip3] torchdata==0.11.0
[pip3] torchsr==1.0.4
[pip3] torchtune==0.0.0
[pip3] torchvision==0.25.0.dev20251222+cpu
[conda] Could not collect

cc @digantdesai @SS-JIA @freddan80 @per @zingo @oscarandersson8218 @mansnils @Sebastian-Larsson @robell

Labels: partner: arm (for backend delegation, kernels, demos, etc. from the third-party partner, Arm)
Status: To triage