Error running .pte model with executor_runner #8923

Open
@corehalt

Description

🐛 Describe the bug

I have exported this model:

https://github.com/corehalt/share/raw/refs/heads/main/yolov8n_runtime_issue.pte

with the following code:

import torch

from executorch import exir
from executorch.exir.passes.constant_prop_pass import constant_prop_pass
from executorch.exir.passes.const_prop_pass import ConstPropPass

# Snippet from an export method: self.model is the nn.Module,
# self.im the example input, and self.file the output path.
self.model.eval()
aten_dialect_program = torch.export.export(self.model, (self.im,), strict=True)
torch.export.save(aten_dialect_program, "ep.pt2")
aten_dialect_program = constant_prop_pass(aten_dialect_program)
edge_dialect_program = exir.to_edge_transform_and_lower(
    aten_dialect_program, transform_passes=[ConstPropPass()]
)
executorch_program = edge_dialect_program.to_executorch(
    exir.ExecutorchBackendConfig(
        passes=[],
        remove_view_copy=False,
    )
)
fpte = self.file.with_suffix(".pte")
with open(str(fpte), "wb") as fil:
    fil.write(executorch_program.buffer)
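As a sanity check, the .pte can also be exercised from Python through the prebuilt pybindings before involving the C++ runner. This is a minimal, untested sketch: it assumes the portable_lib pybindings are installed and that the input shape below matches self.im (for YOLOv8n I assume 1x3x640x640):

import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

# Load the flatbuffer written by the export step (path is an assumption).
et_module = _load_for_executorch("/tmp/yolov8/yolov8n.pte")

# Run "forward" with an input shaped like the export-time example input.
# If this call also fails, the problem is in the program itself rather
# than in how the C++ runner prepares its inputs.
example_input = torch.randn(1, 3, 640, 640)  # assumed shape of self.im
outputs = et_module.forward((example_input,))
print(type(outputs), len(outputs))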

Then I tried to run the model with the official C++ executor_runner, but I get the following error:

gdb --args ../build/third_party/executorch/executor_runner --model_path /tmp/yolov8/yolov8n.pte
(gdb) where
#0  main (argc=1, argv=0x7fffffffde28) at /test/third_party/executorch/examples/portable/executor_runner/executor_runner.cpp:244
(gdb) list
239       ET_LOG(Info, "Inputs prepared.");
240
241       // Run the model.
242       for (uint32_t i = 0; i < FLAGS_num_executions; i++) {
243         Error status = method->execute();
244         ET_CHECK_MSG(
245             status == Error::Ok,
246             "Execution of method %s failed with status 0x%" PRIx32,
247             method_name,
248             (uint32_t)status);
(gdb) print status
$1 = executorch::runtime::Error::InvalidArgument
(gdb) print method_name
$4 = 0x5555561e4718 "forward"

With other models exported by the same code, inference runs without problems.
I also wrote my own runner based on the official one, and it fails with the same error there.
I also tried strict=False in torch.export.export(), but that still gives the same error; see the sketch below for one further check.
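A further check (untested sketch): since Error::InvalidArgument is raised inside method->execute(), running the edge-dialect program eagerly right after to_edge_transform_and_lower() should tell whether the graph handed to to_executorch() is itself valid:

# Hypothetical sanity check, placed right after to_edge_transform_and_lower()
# in the export code above. If this eager run succeeds, the lowered graph is
# sound and the failure is on the ExecuTorch runtime side.
edge_ep = edge_dialect_program.exported_program()
eager_outputs = edge_ep.module()(self.im)
print("edge-dialect eager run succeeded")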

For reference, this is the corresponding output of torch.export.save():

https://github.com/corehalt/share/raw/refs/heads/main/yolov8n_runtime_issue.pt2
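The saved ExportedProgram can be reloaded to inspect the expected input signature, which is useful for comparing against whatever inputs executor_runner prepares. A small sketch (the file name matches the linked artifact):

import torch

# Reload the saved program and print each placeholder's shape and dtype.
ep = torch.export.load("yolov8n_runtime_issue.pt2")
for node in ep.graph.nodes:
    if node.op == "placeholder":
        fake = node.meta.get("val")
        if fake is not None:
            print(node.name, tuple(fake.shape), fake.dtype)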

Versions

Versions of relevant libraries:
[pip3] executorch==0.6.0a0+7103bb3
[pip3] numpy==2.1.1
[pip3] torch==2.7.0.dev20250131+cpu
[pip3] torchao==0.8.0+git11333ba2
[pip3] torchaudio==2.6.0.dev20250131+cpu
[pip3] torchsr==1.0.4
[pip3] torchvision==0.22.0.dev20250131+cpu

cc @JacobSzwejbka

Labels

module: runtime (issues related to the core runtime and code under runtime/)
triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
