🐛 Describe the bug
Exporting a simple Linear (dense) layer to .pte via the aot_arm_compiler.py script without --delegate fails in transform_for_cortex_m_backend.
Here is the model under test:
import torch
ModelUnderTest = torch.nn.Linear(in_features=10, out_features=5)
ModelInputs = (torch.randn(1, 6, 10),)
Here is the failure when running the AoT compiler without delegating to the NPU, with quantization and with the --enable_qdq_fusion_pass option:
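For context, the model itself is unremarkable and runs fine in eager mode: nn.Linear acts on the last input dimension, so the (1, 6, 10) input maps to (1, 6, 5). A quick eager-mode sanity check with the same model and inputs as above:

```python
import torch

# Same model and inputs as the repro above.
model = torch.nn.Linear(in_features=10, out_features=5)
inputs = (torch.randn(1, 6, 10),)

# nn.Linear applies to the last dimension: (1, 6, 10) -> (1, 6, 5).
out = model(*inputs)
print(out.shape)  # torch.Size([1, 6, 5])
```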
::: Running command: /workspaces/nn-deploy-kit/.venv/bin/python -m examples.arm.aot_arm_compiler --model_name /workspaces/nn-deploy-kit/repro/aot_compiler_cortex_m/mymodel.py --target ethos-u55-128 --system_config Ethos_U55_High_End_Embedded --memory_mode Shared_Sram --output /workspaces/nn-deploy-kit/repro/aot_compiler_cortex_m/nDelegate_YQuantize_YFuseQDQ/model.pte --intermediates /workspaces/nn-deploy-kit/repro/aot_compiler_cortex_m/nDelegate_YQuantize_YFuseQDQ/intermediates --quantize --enable_qdq_fusion_pass
::: CWD: /opt/executorch
W0224 09:19:53.445000 100035 torch/utils/flop_counter.py:45] triton not found; flop counting will not work for triton kernels
[... the same warning repeated 12 more times ...]
/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/backends/arm/quantizer/quantization_config.py:171: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(act_scale * weight_scale).to(
/usr/lib/python3.12/copyreg.py:99: FutureWarning: `isinstance(treespec, LeafSpec)` is deprecated, use `isinstance(treespec, TreeSpec) and treespec.is_leaf()` instead.
return cls.__new__(cls, *args)
[... the same warning repeated once more ...]
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/opt/executorch/examples/arm/aot_arm_compiler.py", line 896, in <module>
edge = transform_for_cortex_m_backend(edge, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/executorch/examples/arm/aot_arm_compiler.py", line 847, in transform_for_cortex_m_backend
else pass_cls()
^^^^^^^^^^
TypeError: XNNPACKPass.__init__() missing 1 required positional argument: 'exported_program'
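From the traceback, the `else pass_cls()` branch in transform_for_cortex_m_backend instantiates each pass with no arguments, but XNNPACKPass evidently requires the exported program in its constructor. A minimal, self-contained illustration of this kind of constructor mismatch (the class names here are hypothetical stand-ins, not the real ExecuTorch passes):

```python
# Hypothetical sketch of the constructor mismatch behind the TypeError above.
# These classes only stand in for the real ExecuTorch passes.
class NoArgPass:
    """A pass that can be built with no arguments."""
    def __init__(self):
        pass

class ProgramAwarePass:
    """A pass that, like XNNPACKPass, needs the exported program."""
    def __init__(self, exported_program):
        self.exported_program = exported_program

results = {}
for pass_cls in (NoArgPass, ProgramAwarePass):
    try:
        pass_cls()  # mirrors the `else pass_cls()` call in aot_arm_compiler.py
        results[pass_cls.__name__] = "ok"
    except TypeError as exc:
        results[pass_cls.__name__] = f"TypeError: {exc}"

print(results["ProgramAwarePass"])
```

So any pass registered for the Cortex-M transform pipeline that requires a constructor argument will fail in the same way unless the caller special-cases it.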
If, on the other hand, I omit the --enable_qdq_fusion_pass flag, the compiler produces the .pte file, but the operations inside are not quantized: the Q/DQ pairs are still present in the graph (as visualized with model-explorer --extensions=pte_adapter_model_explorer):
::: Running command: /workspaces/nn-deploy-kit/.venv/bin/python -m examples.arm.aot_arm_compiler --model_name /workspaces/nn-deploy-kit/repro/aot_compiler_cortex_m/mymodel.py --target ethos-u55-128 --system_config Ethos_U55_High_End_Embedded --memory_mode Shared_Sram --output /workspaces/nn-deploy-kit/repro/aot_compiler_cortex_m/nDelegate_YQuantize_nFuseQDQ/model.pte --intermediates /workspaces/nn-deploy-kit/repro/aot_compiler_cortex_m/nDelegate_YQuantize_nFuseQDQ/intermediates --quantize
::: CWD: /opt/executorch
W0224 09:18:47.523000 99701 torch/utils/flop_counter.py:45] triton not found; flop counting will not work for triton kernels
[... the same warning repeated 12 more times ...]
/workspaces/nn-deploy-kit/.venv/lib/python3.12/site-packages/executorch/backends/arm/quantizer/quantization_config.py:171: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(act_scale * weight_scale).to(
/usr/lib/python3.12/copyreg.py:99: FutureWarning: `isinstance(treespec, LeafSpec)` is deprecated, use `isinstance(treespec, TreeSpec) and treespec.is_leaf()` instead.
return cls.__new__(cls, *args)
[... the same warning repeated 2 more times ...]
PTE file saved as /workspaces/nn-deploy-kit/repro/aot_compiler_cortex_m/nDelegate_YQuantize_nFuseQDQ/model.pte
Versions
Collecting environment information...
PyTorch version: 2.11.0.dev20251222+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.31.10
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 8 2026, 11:30:50) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
CPU family: 6
Model: 140
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
BogoMIPS: 5990.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves vnmi avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] executorch==1.2.0a0+8981fe4
[pip3] numpy==2.4.2
[pip3] optree==0.18.0
[pip3] pytorch_tokenizers==1.0.1
[pip3] torch==2.11.0.dev20251222+cpu
[pip3] torchao==0.16.0+git026b76d12
[pip3] torchaudio==2.10.0.dev20251222+cpu
[pip3] torchdata==0.11.0
[pip3] torchsr==1.0.4
[pip3] torchtune==0.0.0
[pip3] torchvision==0.25.0.dev20251222+cpu
[conda] Could not collect
cc @digantdesai @SS-JIA @freddan80 @per @zingo @oscarandersson8218 @mansnils @Sebastian-Larsson @robell