MPS: mobilebert inference error #7904

Open

@guangy10

Description

🐛 Describe the bug

mobilebert on the MPS backend has been failing consistently: https://github.com/pytorch/executorch/actions/workflows/apple-perf.yml?query=event%3Aschedule

We will disable this model in continuous benchmarking and track the bug/fix here.

The stacktrace:

  Test Case '-[GenericTests test_load_mobilebert_mps_float16_pte_iOS_17_2_1_iPhone15_4]' started.
  loc("addmm/beta*bias*alpha*matmul_6"("(mpsFileLoc): /Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":233:0)): error: input types 'tensor<8x128xf32>' and 'tensor<128xf16>' are not broadcast compatible
  LLVM ERROR: Failed to infer result type(s).

Job link: https://github.com/pytorch/executorch/actions/runs/12920386457/job/36038669108#step:14:2107
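For context, the error is not a shape problem: `(8, 128)` and `(128,)` broadcast fine under the usual trailing-dimension rules. The graph is rejected because the `matmul` output is `f32` while the bias was lowered to `f16`, and MPSGraph requires matching element types for the add inside `addmm`. Below is a minimal sketch of that distinction; the helper names are hypothetical and this is not ExecuTorch or MPSGraph code.

```python
def broadcast_compatible(shape_a, shape_b):
    """NumPy-style broadcast check: trailing dims must be equal or 1."""
    for da, db in zip(reversed(shape_a), reversed(shape_b)):
        if da != db and da != 1 and db != 1:
            return False
    return True

def mps_binary_op_ok(a_shape, a_dtype, b_shape, b_dtype):
    """Hypothetical mirror of the check that fails above: shapes may
    broadcast, but the element types must also agree."""
    return broadcast_compatible(a_shape, b_shape) and a_dtype == b_dtype

# The failing case from the stacktrace: tensor<8x128xf32> vs tensor<128xf16>.
print(mps_binary_op_ok((8, 128), "f32", (128,), "f16"))  # False: dtype mismatch
print(mps_binary_op_ok((8, 128), "f16", (128,), "f16"))  # True: consistent fp16
```

This suggests the float16 export is leaving part of the `addmm` subgraph in float32, so a consistent cast on either the bias or the matmul output would likely unblock the model.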

Versions

trunk

Metadata


    Labels

    - module: examples — Issues related to demos under examples/
    - module: mps — Issues related to Apple's MPS delegation and code under backends/apple/mps/
    - partner: apple — For backend delegation, kernels, demos, etc. from the 3rd-party partner, Apple
    - triaged — This issue has been looked at by a team member, triaged, and prioritized into an appropriate module
