XNN Int8 ReLU gives incorrect outputs #10961

@GregoryComer

🐛 Describe the bug

When running a standalone ReLU op in int8 on XNNPACK, there appears to be a correctness or memory-safety issue causing incorrect outputs. Experimenting with various inputs reveals no clear pattern, which suggests a problem in how the tensor memory or dtype is handled somewhere.

Repro:

import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower
from executorch.runtime import Runtime

class Model(torch.nn.Module):
    def forward(self, x):
        return torch.nn.ReLU()(x)

inputs = (
    torch.tensor([-128, -127, -126, -10, 5], dtype=torch.int8),
)
et_program = to_edge_transform_and_lower(
    torch.export.export(Model(), inputs),
    partitioner=[XnnpackPartitioner()]
).to_executorch()

print(et_program.exported_program())

runtime = Runtime.get()
program = runtime.load_program(et_program.buffer)
method = program.load_method("forward")
et_out = method.execute(inputs)
ref_out = Model()(*inputs)

print(f"ET:  {et_out}")
print(f"Ref: {ref_out}")

Output:

ET:  [tensor([0, 0, 0, 0, 0], dtype=torch.int8)]
Ref: tensor([0, 0, 0, 0, 5], dtype=torch.int8)
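
For reference, int8 ReLU is simply element-wise max(0, x); the result always fits in int8 because max(0, x) never exceeds 127 for an int8 input. A minimal pure-Python sketch of the expected semantics (the helper name is hypothetical, not part of ExecuTorch):

```python
# Hypothetical reference implementation of element-wise int8 ReLU semantics.
def relu_int8(values):
    # ReLU clamps negative values to zero; positives pass through unchanged.
    # The output always stays within the int8 range [-128, 127].
    return [max(0, v) for v in values]

print(relu_int8([-128, -127, -126, -10, 5]))  # [0, 0, 0, 0, 5]
```

Per these semantics the last element (5) should survive, matching the eager-mode reference output above, while the delegated XNNPACK output zeroes it.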

Versions

N/A

cc @digantdesai @mcr229 @cbilgin

    Labels

    backend tester (This bug was found by the backend test suite.)
    module: xnnpack (Issues related to xnnpack delegation and the code under backends/xnnpack/)
