
Commit 4d11385
fix: Linter + config fix (#2636)
1 parent e7fe504

2 files changed (+5 -7 lines)

.pre-commit-config.yaml (+1 -1)

@@ -47,7 +47,7 @@ repos:
     hooks:
       - id: ruff
   - repo: https://github.com/psf/black
-    rev: 23.7.0
+    rev: 24.1.1
     hooks:
       - id: black
         exclude: ^examples/custom_converters/elu_converter/setup.py|^docs
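
Note (inferred from the diff, not stated in the commit message): bumping black from 23.7.0 to 24.1.1 picks up black's 2024 stable style, which splits an over-long return annotation by breaking the annotation's subscript brackets directly instead of wrapping the whole annotation in parentheses. That is what drives the Python change below. A minimal sketch of the two shapes, with hypothetical names and a shortened annotation for readability:

from typing import Tuple

# Shape produced by black 23.x when a return annotation exceeds the
# line length: the whole annotation is wrapped in parentheses.
def make_pair_old_style() -> (
    Tuple[
        int,
        str,
    ]
):
    ...

# Shape produced by black 24.x: the subscript brackets are split directly.
def make_pair_new_style() -> Tuple[
    int,
    str,
]:
    ...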

py/torch_tensorrt/dynamo/lowering/passes/lower_scaled_dot_product_attention.py (+4 -6)

@@ -60,12 +60,10 @@ def lower_scaled_dot_product_attention(
     return gm


-def scaled_dot_product_attention_replacement() -> (
-    Tuple[
-        Sequence[Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor]],
-        Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor],
-    ]
-):
+def scaled_dot_product_attention_replacement() -> Tuple[
+    Sequence[Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor]],
+    Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor],
+]:
     """Constructs the original and replacement functions for efficient attention"""

     # Efficient Attention original graph
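
For context, a minimal usage sketch (an assumption about how the helper above is consumed, not code from this commit): torch.fx lowering passes typically hand the returned (patterns, replacement) pair to torch.fx's subgraph rewriter. The driver function apply_sdpa_replacement is hypothetical, and it assumes scaled_dot_product_attention_replacement from the file above is in scope.

import torch
from torch.fx import subgraph_rewriter

def apply_sdpa_replacement(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    # Hypothetical driver: rewrite every occurrence of each original
    # attention pattern with the single replacement implementation.
    patterns, replacement = scaled_dot_product_attention_replacement()
    for pattern in patterns:
        subgraph_rewriter.replace_pattern(gm, pattern, replacement)
    gm.recompile()
    return gm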
