This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit 3ad3d0b

g_idx in _process_quantization

1 parent bea971c

File tree

1 file changed: +2 −1 lines

src/sparseml/modifiers/quantization/gptq/utils/gptq_wrapper.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -164,9 +164,10 @@ def fasterprune(

         elif hasattr(self.layer, "quantization_scheme"):
             quant_scheme = self.layer.quantization_scheme
+            breakpoint()
             actorder = quant_scheme.weights.actorder
-            if quant_scheme.weights is not None:

+            if quant_scheme.weights is not None:
                 if actorder:
                     perm = torch.argsort(torch.diag(self.H), descending=True)
                     W = W[:, perm]
```
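The permutation in the last two unchanged lines is the activation-ordering ("actorder") step: columns of the weight matrix are sorted by descending Hessian diagonal so the most sensitive columns are quantized first. A minimal standalone sketch of that reordering, using small stand-in tensors rather than sparseml's actual `self.H` and `W`:

```python
import torch

# Stand-in Hessian approximation H and weight matrix W (not sparseml's
# actual objects); only the diagonal of H drives the ordering.
H = torch.diag(torch.tensor([0.5, 3.0, 1.0, 2.0]))
W = torch.arange(8, dtype=torch.float32).reshape(2, 4)

# Sort columns by descending Hessian diagonal, as in the diff above.
perm = torch.argsort(torch.diag(H), descending=True)
W_reordered = W[:, perm]

# Inverting the permutation restores the original column order, which the
# wrapper needs to do after quantizing in the reordered layout.
inv_perm = torch.argsort(perm)
assert torch.equal(W_reordered[:, inv_perm], W)
```

Here `inv_perm` is the inverse permutation: applying it after `perm` is a no-op, which is what lets the quantized weights be written back in the layer's original column order.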

0 commit comments