Update whisper transformer module to 4.48.0 #24382
base: main
Conversation
```diff
@@ -7,6 +7,7 @@
 import numpy as np
 import torch
 import transformers
```
Check notice
Code scanning / CodeQL
Module is imported with 'import' and 'import from' Note
Module 'onnxruntime.test.python.transformers' is imported with both 'import' and 'import from'.
Copilot Autofix
AI 4 days ago
The best way to fix the problem is to remove the `from transformers import AutoConfig, AutoTokenizer` statement and use `transformers.AutoConfig` and `transformers.AutoTokenizer` directly in the code. This approach maintains the existing functionality while eliminating the confusion caused by the dual import.
- Remove the `from transformers import AutoConfig, AutoTokenizer` statement.
- Replace all instances of `AutoConfig` and `AutoTokenizer` with `transformers.AutoConfig` and `transformers.AutoTokenizer`, respectively.
```diff
@@ -10,3 +10,2 @@
 import transformers
-from transformers import AutoConfig, AutoTokenizer

@@ -32,3 +31,3 @@
 def get_sample_inputs(
-    config: AutoConfig,
+    config: transformers.AutoConfig,
     device: torch.device,
@@ -67,3 +66,3 @@
 def get_sample_with_past_kv_inputs(
-    config: AutoConfig,
+    config: transformers.AutoConfig,
     device: torch.device,
```
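For illustration, a minimal sketch of the single-namespace style the autofix converges on, assuming nothing beyond the public `transformers` API (the model id and helper names below are placeholders, not part of this PR):

```python
import transformers


def load_whisper_config(model_id: str = "openai/whisper-tiny"):
    # Only the module itself is imported, so every symbol is referenced
    # through the `transformers` namespace; no `from transformers import ...`
    # is needed, which is what silences the CodeQL notice.
    return transformers.AutoConfig.from_pretrained(model_id)


def load_whisper_tokenizer(model_id: str = "openai/whisper-tiny"):
    return transformers.AutoTokenizer.from_pretrained(model_id)
```

The trade-off is slightly longer names in exchange for a single, unambiguous import of the library.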
```python
import torch
import transformers
```
Check notice
Code scanning / CodeQL
Module is imported with 'import' and 'import from' Note
Module 'onnxruntime.test.python.transformers' is imported with both 'import' and 'import from'.
Copilot Autofix
AI 4 days ago
To fix the problem, we should remove the `from transformers import AutoConfig` statement and use `transformers.AutoConfig` instead. This will ensure that the `transformers` module is only imported once, reducing confusion and potential namespace conflicts.
- Remove the `from transformers import AutoConfig` statement.
- Replace all instances of `AutoConfig` with `transformers.AutoConfig`.
```diff
@@ -28,3 +28,3 @@
 from models.torch_export_patches.cache_helper import make_dynamic_cache
-from transformers import AutoConfig

@@ -35,3 +35,3 @@

-def get_sequence_lengths(args: argparse.Namespace, config: AutoConfig):
+def get_sequence_lengths(args: argparse.Namespace, config: transformers.AutoConfig):
     past_sequence_length, curr_sequence_length = (8, 1) if args.use_past_kv else (0, 8)
@@ -41,3 +41,3 @@

-def get_inputs(args: argparse.Namespace, config: AutoConfig):
+def get_inputs(args: argparse.Namespace, config: transformers.AutoConfig):
     # Dummy values for parity
@@ -104,3 +104,3 @@
     pytorch_model: None | torch.nn.Module = None,
-    config: None | AutoConfig = None,
+    config: None | transformers.AutoConfig = None,
 ):
```
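To show how the patched annotation reads at a call site, here is a self-contained sketch; the function body is trimmed to the one line visible in the hunk above, and the model id is only an example (the real script builds its config from command-line arguments):

```python
import argparse

import transformers


def get_sequence_lengths(args: argparse.Namespace, config: transformers.AutoConfig):
    # Same logic as the hunk above: the annotation now uses the qualified
    # name, but callers pass exactly the same config object as before.
    past_sequence_length, curr_sequence_length = (8, 1) if args.use_past_kv else (0, 8)
    return past_sequence_length, curr_sequence_length


if __name__ == "__main__":
    config = transformers.AutoConfig.from_pretrained("openai/whisper-tiny")  # example id
    args = argparse.Namespace(use_past_kv=True)
    print(get_sequence_lengths(args, config))  # -> (8, 1)
```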
```python
def _catch_produce_guards_and_solve_constraints(
    previous_function: Callable,
    fake_mode: "FakeTensorMode",
    gm: "torch.fx.GraphModule",
    dynamic_shapes: dict[str, Any] | tuple[Any] | list[Any] | None,
    equalities_inputs: "EqualityConstraint",  # noqa: F821
    original_signature: inspect.Signature,
    _is_torch_jit_trace: bool = False,
    verbose: int = 0,
):
```
Check notice
Code scanning / CodeQL
Explicit returns mixed with implicit (fall through) returns Note
Copilot Autofix
AI 4 days ago
To fix the problem, we need to add an explicit return statement at the end of the `_catch_produce_guards_and_solve_constraints` function. This will ensure that the function always returns a value, even when an exception is caught and the `if` conditions are not met. The explicit return statement should return `None` to maintain the existing functionality.
```diff
@@ -43,3 +43,3 @@
     )

+    return None
```
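For reference, a minimal standalone sketch of the pattern this CodeQL rule flags and the shape of the fix; the function and variable names here are made up for illustration:

```python
def lookup_mixed(values: dict, key: str):
    # One branch returns a value explicitly, the other falls off the end
    # and returns None implicitly -- this mixture is what the rule reports.
    if key in values:
        return values[key]


def lookup_explicit(values: dict, key: str):
    # Behaviourally identical, but every path now returns explicitly,
    # which is what the autofix's trailing `return None` accomplishes.
    if key in values:
        return values[key]
    return None
```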
```python
def patch__check_input_constraints_for_graph(
    previous_function: Callable,
    input_placeholders: list[torch.fx.Node],
    flat_args_with_path,
    range_constraints,
    verbose: int = 0,
) -> None:
```
Check notice
Code scanning / CodeQL
Explicit returns mixed with implicit (fall through) returns Note
Copilot Autofix
AI 4 days ago
To fix the problem, we need to add an explicit return statement at the end of the function `patch__check_input_constraints_for_graph`. This ensures that the function consistently returns a value, making the code easier to read and understand. The explicit return value should be `None` to maintain the existing functionality.
```diff
@@ -66,3 +66,3 @@
     )

+    return None
```
```python
# if config.print_specializations:
#     self.log.warning(
#         "Specializing %s to %s", self.var_to_sources[a][0].name(), tgt
```
Check notice
Code scanning / CodeQL
Commented-out code Note
Copilot Autofix
AI 4 days ago
To fix the problem, we should remove the commented-out code. This will make the code cleaner and reduce potential confusion for future developers. If the logging statement is needed in the future, it can be reintroduced with proper documentation.
- Remove the commented-out logging statement on lines 304-308.
- Ensure that the removal does not affect the existing functionality of the code.
```diff
@@ -303,7 +303,7 @@

-# if config.print_specializations:
-#     self.log.warning(
-#         "Specializing %s to %s", self.var_to_sources[a][0].name(), tgt
-#     )
-#     self.log.debug("SPECIALIZATION", stack_info=True)

 assert msg != "range_refined_to_singleton", (
```
```python
# if input_ids.shape[1] == 0:
#     inputs_embeds = inputs_embeds[:, -cache_position.shape[0] :]
# else:
#     if cache_position[-1] >= input_ids.shape[1]:
#         input_ids = input_ids[:, -cache_position.shape[0] :]
#     else:
#         if input_ids.shape[1] != cache_position.shape[0]:
#             input_ids = input_ids[:, cache_position]
```
Check notice
Code scanning / CodeQL
Commented-out code Note
Copilot Autofix
AI 4 days ago
To fix the problem, we should remove the commented-out code. This will make the code cleaner and less confusing for future developers. The removal should be done in the `_cache_dependant_input_preparation_exporting` method, specifically lines 280 to 288.
```diff
@@ -279,11 +279,3 @@
 else:
     # This is the code we need to implemented with torch.cond.
-    # if input_ids.shape[1] == 0:
-    #     inputs_embeds = inputs_embeds[:, -cache_position.shape[0] :]
-    # else:
-    #     if cache_position[-1] >= input_ids.shape[1]:
-    #         input_ids = input_ids[:, -cache_position.shape[0] :]
-    #     else:
-    #         if input_ids.shape[1] != cache_position.shape[0]:
-    #             input_ids = input_ids[:, cache_position]

     def branch_1(inputs_embeds, cache_position):
```
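The comment being deleted described data-dependent branching that the surrounding method re-expresses with `torch.cond`; below is a standalone sketch of that operator, unrelated to the exact transformers branches, assuming a recent PyTorch release where `torch.cond` is publicly available:

```python
import torch


def double(x):
    return x * 2


def increment(x):
    return x + 1


def select(x: torch.Tensor) -> torch.Tensor:
    # torch.cond keeps both branches in the exported graph and chooses one at
    # runtime from a scalar boolean predicate, which is why data-dependent
    # Python `if` statements (like the commented-out block) get rewritten
    # this way before torch.export / ONNX export.
    pred = x.sum() > 0
    return torch.cond(pred, double, increment, (x,))


if __name__ == "__main__":
    print(select(torch.ones(3)))   # predicate true  -> tensor([2., 2., 2.])
    print(select(-torch.ones(3)))  # predicate false -> tensor([0., 0., 0.])
```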
Description
Motivation and Context
Branched off from #24291