Commit 03333c5
NXP backend: Fix shared quantization bugs. (#13844)
### Summary
Fix 2 bugs related to quantization parameters that are shared between multiple tensors/nodes:
- Turn off bias tensor reuse in Convolution converter
- Fix _has_shared_q_params_if_quantized in Node converter

### Test plan
No direct unit tests are provided. Correct functionality is tested by all tests with quantized nodes.

---

Co-authored-by: Roman Janik <[email protected]>
1 parent: 8c51641

2 files changed: 3 additions, 8 deletions


backends/nxp/backend/ir/converter/node_converter.py

Lines changed: 2 additions & 7 deletions
```diff
@@ -132,13 +132,8 @@ def _has_shared_q_params_if_quantized(node: Node) -> bool:
             # Some exotic operator (only consumer or only produces)
             return True
 
-        pre_node = node.prev
-        post_node = node.next
-
-        if pre_node.name == node.all_input_nodes[0] and post_node.name == node.users[0]:
-            raise RuntimeError(
-                "Prev & next nodes are not the same as inputs and outputs."
-            )
+        pre_node = node.all_input_nodes[0]
+        post_node = list(node.users)[0]
 
         if _is_dequant_node(pre_node) and _is_quant_node(post_node):
             # Node is quantized
```
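For context on the fix: in `torch.fx`, `node.prev` and `node.next` walk the graph's node list in insertion order, while `node.all_input_nodes` and `node.users` follow actual data flow, which is what quantize/dequantize neighbours are. A minimal sketch (not part of this commit; the module `M` and its operations are illustrative) showing how the two orderings can disagree:

```python
# Minimal sketch: list-order neighbours (prev/next) vs. data-flow neighbours
# (all_input_nodes/users) in a torch.fx graph.
import torch
from torch import fx


class M(torch.nn.Module):
    def forward(self, x, y):
        a = x + 1   # producer of `a`
        b = y * 2   # inserted between `a` and the node that consumes `a`
        return a - b


gm = fx.symbolic_trace(M())
for node in gm.graph.nodes:
    if node.op == "call_function":
        # List-order neighbours (what the removed code looked at):
        print(node.name, "prev:", node.prev.name, "next:", node.next.name)
        # Data-flow neighbours (what the fix uses):
        print(node.name, "inputs:", [n.name for n in node.all_input_nodes],
              "users:", [n.name for n in node.users])
```

Here the `add` node's `next` is `mul` (insertion order), but its only user is `sub` (data flow), so checks based on `prev`/`next` can pick the wrong neighbours.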

backends/nxp/backend/ir/converter/node_converters/ops_converters/convolution_converter.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -263,7 +263,7 @@ def _convert_unpadded_2D(
         )
 
         b = self.builder.create_zeros_tensor(
-            [output_channels], "zero_bias", bias_type, True
+            [output_channels], "zero_bias", bias_type, False
         )
 
         # Compute scale and zero point for bias tensor
```
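Per the summary, the last argument toggles tensor reuse: with reuse enabled, every convolution would share one `zero_bias` tensor, yet the scale and zero point computed right after this call are specific to each convolution. A hedged arithmetic sketch (values are invented; assumes the common int8 rule `bias_scale = input_scale * weight_scale`) of why the shared tensor breaks:

```python
# Sketch: two convolutions with different input/weight scales need different
# bias quantization parameters, so they cannot share one zero-bias tensor.
conv_a = {"input_scale": 0.02, "weight_scale": 0.005}
conv_b = {"input_scale": 0.07, "weight_scale": 0.001}

bias_scale_a = conv_a["input_scale"] * conv_a["weight_scale"]  # ~1.0e-04
bias_scale_b = conv_b["input_scale"] * conv_b["weight_scale"]  # ~7.0e-05

# With reuse enabled, both convolutions point at the same tensor object, so only
# one of these scales could be stored on it; the other node would silently carry
# wrong quantization parameters. A fresh zero tensor per convolution avoids this.
assert bias_scale_a != bias_scale_b
print(bias_scale_a, bias_scale_b)
```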
