Labels
backend tester: This bug was found by the backend test suite.
module: vulkan: Issues related to the Vulkan delegate and code under backends/vulkan/
Description
🐛 Describe the bug
The regnet_y_1_6gf model from torchvision fails during lowering to the Vulkan backend (in vulkan_preprocess) with "ValueError: 'trunk_output_block1_block1-0_proj_0_weight_fused_bn' is not in list".
Error Excerpt:
File "/Users/gjcomer/src/executorch/src/executorch/backends/vulkan/vulkan_preprocess.py", line 154, in preprocess
program = apply_passes(
File "/Users/gjcomer/src/executorch/src/executorch/backends/vulkan/vulkan_preprocess.py", line 79, in apply_passes
new_gm_res = p(new_gm)
File "/Users/gjcomer/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/fx/passes/infra/pass_base.py", line 46, in __call__
res = self.call(graph_module)
File "/Users/gjcomer/src/executorch/src/executorch/backends/xnnpack/_passes/fuse_batch_norm.py", line 62, in call
self._fuse_ops(
File "/Users/gjcomer/src/executorch/src/executorch/backends/xnnpack/_passes/fuse_batch_norm.py", line 209, in _fuse_ops
fused_op_weight_node = create_constant_placeholder(
File "/Users/gjcomer/src/executorch/src/executorch/backends/transforms/utils.py", line 119, in create_constant_placeholder
node_index = node_names.index(name)
ValueError: 'trunk_output_block1_block1-0_proj_0_weight_fused_bn' is not in list
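For context on the stack trace: create_constant_placeholder resolves the fused weight's name via list.index, which raises ValueError whenever that name is absent from the graph's placeholder names. A minimal, self-contained sketch of that failure mode (the placeholder names below are hypothetical stand-ins, not the real graph contents):

```python
# Sketch of the failure mode hit in create_constant_placeholder:
# list.index() raises ValueError when the requested name is not present.
node_names = ["conv_weight", "conv_bias"]  # hypothetical placeholder names
name = "trunk_output_block1_block1-0_proj_0_weight_fused_bn"

try:
    node_index = node_names.index(name)  # same lookup as utils.py line 119
except ValueError as e:
    error_msg = str(e)
    print(f"lookup failed: {error_msg}")
```

This suggests the fused batch-norm weight name generated by the pass does not appear among the existing placeholder names for this model, though the underlying cause may lie elsewhere in the pass.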
This can be reproduced with the following test case command or standalone script.
python -m executorch.backends.test.suite.runner models --flow vulkan --filter "test_regnet_y_1_6gf_vulkan_float32$"
Standalone repro:
import torch
import torchvision
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner
inputs = (torch.randn(1, 3, 224, 224),)
model = torchvision.models.regnet_y_1_6gf().eval()
ep = torch.export.export(model, inputs)
model = to_edge_transform_and_lower(
    ep,
    partitioner=[VulkanPartitioner()],
).to_executorch()
print("Running model...")
from executorch.extension.pybindings.portable_lib import _load_for_executorch_from_buffer
loaded_model = _load_for_executorch_from_buffer(model.buffer)
loaded_model([*inputs])
Note that running the backend test case requires ExecuTorch's Python bindings to be built with the Vulkan backend. An example build command is below.
CMAKE_ARGS="-DEXECUTORCH_BUILD_VULKAN=ON" ./install_executorch.sh --editable
Versions
Commit fbda3a9, M1 Mac, using MoltenVK