
The contiguity of input tensors is lost when copied for Method::execute in the ExecuTorch runtime #9930

Open
@JoshuaGhost

Description

πŸ› Describe the bug

Description:

I'm encountering a runtime error when executing exported .pte models with the ExecuTorch runtime. The issue shows up when running models with multiple intermediate layers (requiring ≥50 input tensors).

Error Details

The runtime fails with the following consistency-check errors (error code 0x12 is Error::InvalidArgument):

[tensor_util_portable.cpp:128] Check failed (all_contiguous || all_channels_last): 2 input tensors have different dim orders
[op_permute_copy.cpp:50] Check failed (tensors_have_same_dim_order(in, out)):
[method.cpp:1313] KernelCall failed at instruction 0:3 in operator aten::permute_copy.out: 0x12
[method.cpp:1323] arg 0 with type id 1
[method.cpp:1323] arg 1 with type id 8
[method.cpp:1323] arg 2 with type id 1
[method.cpp:1323] arg 3 with type id 1
Traceback (most recent call last):
  File "my_runtime_caller.py", line 79, in execute
    return self._module.run_method(self._method_name, inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: method->execute() failed with error 0x12

Debugging Steps Taken

  1. Input Verification:

    • Explicitly enforced contiguous memory format before execution:
      _inputs = [_input.contiguous() for _input in inputs]
      for x in _inputs:
          print(x.shape, x.dim_order())  # Confirmed all show (0, 1, 2, ...) (contiguous) order
    • All inputs verify as contiguous before method.execute() (a reusable pre-flight check is sketched just after this list)
  2. Runtime Inspection:
    Modified tensor_util_portable.cpp to log dimension orders after memcpy:

    // Debug output added after memcpy; `src`/`dst` name the source and
    // destination tensors being compared, and dim_order() is the tensor's dim-order array
    std::cout << "Src dim_order: ";
    for (auto d : src.dim_order()) std::cout << int(d) << ' ';  // shows 0 1 2
    std::cout << "\nDst dim_order: ";
    for (auto d : dst.dim_order()) std::cout << int(d) << ' ';  // third tensor shows 1 0 2

    Observed that the third tensor's dimension order changes during copying despite identical source/destination shapes.

  3. Reproducibility Notes:

    • The issue only manifests in larger models (≥50 input tensors, some of which feed intermediate layers)
    • Cannot reproduce with minimal test cases (one such attempt is sketched at the end of this report)
    • Intermediate-layer tensors appear to be affected; the error only occurs when the module contains ≥2 layers that accept intermediate inputs
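
For reference, here is a minimal sketch of the pre-flight check mentioned in step 1. The helper name check_inputs is mine, and it only assumes the public torch.Tensor.contiguous() and torch.Tensor.dim_order() APIs:

    import torch

    def check_inputs(inputs):
        # Hypothetical helper: force contiguity and assert the default
        # (0, 1, ..., n-1) dim order on every input before method.execute().
        checked = []
        for i, x in enumerate(inputs):
            x = x.contiguous()
            expected = tuple(range(x.dim()))
            actual = tuple(x.dim_order())
            assert actual == expected, f"input {i}: dim_order {actual} != {expected}"
            checked.append(x)
        return checked

Running the inputs through this helper immediately before run_method should rule out any Python-side inconsistency, leaving the runtime-side copy as the remaining suspect.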

Key Questions

  1. Memory Format Guarantee:
    How can we ensure that the memory format specified for inputs in Python is preserved by the runtime's tensor handling?

  2. Memcpy Impact:
    Is the observed dimension-order alteration during memcpy expected behavior? Could it be the root cause of the "different dim orders" failures?

  3. Debugging Guidance:
    What additional verification steps would you recommend to isolate this issue?

Additional Context

  • Model architecture details cannot be shared, but I can provide:
    • Specific tensor shape patterns
    • Memory format transition traces
    • Custom instrumentation outputs

Would appreciate any insights into:

  • Relevant code paths to inspect
  • Known limitations around tensor format preservation
  • Alternative approaches to validate tensor consistency
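
For context, this is the kind of minimal repro I have been attempting, so far without reproducing the failure. The module and tensor shapes are illustrative rather than the real model; torch.export.export, executorch.exir.to_edge, and the portable_lib pybindings are the standard export/run entry points as I understand them:

    import torch
    from executorch.exir import to_edge
    from executorch.extension.pybindings.portable_lib import (
        _load_for_executorch_from_buffer,
    )

    class MultiIntermediateInput(torch.nn.Module):
        # Toy module whose forward takes extra tensors that feed "intermediate"
        # layers after a permute, mimicking the pattern that seems to trigger
        # the dim-order check (aten::permute_copy) in the real model.
        def forward(self, x, a, b):
            y = x.permute(1, 0, 2) + a.permute(1, 0, 2)
            return y + b

    example = (torch.randn(2, 3, 4), torch.randn(2, 3, 4), torch.randn(3, 2, 4))
    program = to_edge(torch.export.export(MultiIntermediateInput(), example)).to_executorch()
    module = _load_for_executorch_from_buffer(program.buffer)
    print(module.run_method("forward", example))  # succeeds here; the failure appears only at full scale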

Thank you for your time and expertise. This issue was revised using generative AI for better readability and to avoid any confusion caused by my non-native English. Thank you for your understanding.

Versions

PyTorch version: 2.7.0.dev20250131+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Oracle Linux Server 9.4 (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3.0.1)
Clang version: 14.0.6
CMake version: version 3.31.4
Libc version: glibc-2.34

Python version: 3.11.7 (main, Oct 9 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3.0.1)] (64-bit runtime)
Python platform: Linux-5.15.0-207.156.6.el9uek.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 40 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9J14 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
BogoMIPS: 5192.18
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves nt_good avx512_bf16 clzero xsaveerptr wbnoinvd arat npt nrip_save vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] executorch==0.6.0a0+1eb2f94
[pip3] numpy==2.2.3
[pip3] torch==2.7.0.dev20250131+cpu
[pip3] torchao==0.8.0+git11333ba2
[pip3] torchaudio==2.6.0.dev20250131+cpu
[pip3] torchdata==0.11.0
[pip3] torchsr==1.0.4
[pip3] torchtune==0.6.0
[pip3] torchvision==0.22.0.dev20250131+cpu
[pip3] triton==3.2.0
[conda] Could not collect

cc @larryliu0820 @JacobSzwejbka

Labels

module: runtime (issues related to the core runtime and code under runtime/)
triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
