* Upgrade mypy
This commit removes the flag (and cd step) from f53aa37 which we added to get mypy to treat namespaces correctly. This was apparently a bug in mypy, or behavior they decided to change. To get the new behavior, we must upgrade mypy. (This also allows us to remove a couple `# type: ignore` comments that are no longer needed.)
This commit changes the version of mypy and runs `poetry lock`. It also conforms the whitespace of files in this project to the expectations of various tools and standards (namely: removing trailing whitespace, as expected by git, and enforcing one and only one newline at the end of each file, as expected by unix and github). It also uses https://github.com/hauntsaninja/no_implicit_optional to automatically upgrade the codebase due to a change in mypy behavior. For a similar reason, it also fixes a few new type (and other) errors:
* "Return type 'Retry' of 'new' incompatible with return type 'DatabricksRetryPolicy' in supertype 'Retry'"
* databricks/sql/auth/retry.py:225: error: object has no attribute update [attr-defined]
* /test_param_escaper.py:31: DeprecationWarning: invalid escape sequence \) [as it happens, I think it was also wrong for the string not to be raw, because I'm pretty sure it wants all of its backslashed single-quotes to appear literally with the backslashes, which wasn't happening until now]
* ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject [this is a numpy versioning issue, which I fixed by being stricter about the numpy version]
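For context, the no_implicit_optional change mentioned above boils down to spelling out `Optional` on parameters that default to `None`. A minimal sketch (the `greet` function is a made-up illustration, not code from this project):

```python
from typing import Optional

def greet(name: Optional[str] = None) -> str:
    # Under older mypy, "name: str = None" was implicitly Optional[str];
    # newer mypy rejects that, and the no_implicit_optional tool rewrites
    # it to this explicit form automatically.
    return f"Hello, {name}" if name is not None else "Hello, world"

print(greet())
```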
---------
Signed-off-by: wyattscarpenter <[email protected]>
* Incorporate suggestion.
I decided the most expedient way of dealing with this type error was just adding the type ignore comment back in, but with a `[attr-defined]` specifier this time. I mean, otherwise I would have to restructure the code or figure out the proper types for a TypedDict for the dict and I don't think that's worth it at the moment.
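The difference between a bare ignore and a scoped one can be sketched with a hypothetical stand-in class (not the actual retry.py code):

```python
class Settings:
    # Hypothetical stand-in for the object in retry.py; mypy only
    # knows about this one attribute.
    known: int = 1

s = Settings()
# mypy flags the next line with error code [attr-defined]. A bare
# "# type: ignore" would hide every error on the line; scoping it to
# [attr-defined] suppresses only that specific check.
s.unknown = 2  # type: ignore[attr-defined]
print(s.unknown)
```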
Signed-off-by: wyattscarpenter <[email protected]>
---------
Signed-off-by: wyattscarpenter <[email protected]>
**`.github/workflows/code-quality-checks.yml`** (+1 −2)

```diff
@@ -161,6 +161,5 @@ jobs:
       #----------------------------------------------
       - name: Mypy
         run: |
-          cd src # Need to be in the actual databricks/ folder or mypy does the wrong thing.
           mkdir .mypy_cache # Workaround for bad error message "error: --install-types failed (no mypy cache directory)"; see https://github.com/python/mypy/issues/10768#issuecomment-2178450153
-          poetry run mypy --config-file ../pyproject.toml --install-types --non-interactive --namespace-packages databricks
+          poetry run mypy --install-types --non-interactive src
```
**`CONTRIBUTING.md`** (+2 −3) — the changed lines differ only in trailing whitespace and blank lines.

```diff
@@ -74,7 +74,7 @@ If you set your `user.name` and `user.email` git configs, you can sign your comm
 This project uses [Poetry](https://python-poetry.org/) for dependency management, tests, and linting.
 
 1. Clone this respository
-2. Run `poetry install` 
+2. Run `poetry install`
 
 ### Run tests
 
@@ -167,5 +167,4 @@ Modify the dependency specification (syntax can be found [here](https://python-p
 `poetry update`
 `rm poetry.lock && poetry install`
 
-
-Sometimes `poetry update` can freeze or run forever. Deleting the `poetry.lock` file and calling `poetry install` is guaranteed to update everything but is usually _slower_ than `poetry update` **if `poetry update` works at all**.
+Sometimes `poetry update` can freeze or run forever. Deleting the `poetry.lock` file and calling `poetry install` is guaranteed to update everything but is usually _slower_ than `poetry update` **if `poetry update` works at all**.
```
**`docs/parameters.md`** (+6 −6) — each changed line removes trailing whitespace only; the text is otherwise identical.

```diff
@@ -43,7 +43,7 @@ SELECT * FROM table WHERE field = %(value)s
 
 ## Python Syntax
 
-This connector follows the [PEP-249 interface](https://peps.python.org/pep-0249/#id20). The expected structure of the parameter collection follows the paramstyle of the variables in your query. 
+This connector follows the [PEP-249 interface](https://peps.python.org/pep-0249/#id20). The expected structure of the parameter collection follows the paramstyle of the variables in your query.
 
 ### `named` paramstyle Usage Example
 
@@ -85,7 +85,7 @@ The result of the above two examples is identical.
 
 Databricks Runtime expects variable markers to use either `named` or `qmark` paramstyles. Historically, this connector used `pyformat` which Databricks Runtime does not support. So to assist assist customers transitioning their codebases from `pyformat` → `named`, we can dynamically rewrite the variable markers before sending the query to Databricks. This happens only when `use_inline_params=False`.
 
-This dynamic rewrite will be deprecated in a future release. New queries should be written using the `named` paramstyle instead. And users should update their client code to replace `pyformat` markers with `named` markers. 
+This dynamic rewrite will be deprecated in a future release. New queries should be written using the `named` paramstyle instead. And users should update their client code to replace `pyformat` markers with `named` markers.
 
 For example:
 
@@ -106,7 +106,7 @@ SELECT field1, field2, :param1 FROM table WHERE field4 = :param2
 
 Under the covers, parameter values are annotated with a valid Databricks SQL type. As shown in the examples above, this connector accepts primitive Python types like `int`, `str`, and `Decimal`. When this happens, the connector infers the corresponding Databricks SQL type (e.g. `INT`, `STRING`, `DECIMAL`) automatically. This means that the parameters passed to `cursor.execute()` are always wrapped in a `TDbsqlParameter` subtype prior to execution.
 
-Automatic inferrence is sufficient for most usages. But you can bypass the inference by explicitly setting the Databricks SQL type in your client code. All supported Databricks SQL types have `TDbsqlParameter` implementations which you can import from `databricks.sql.parameters`. 
+Automatic inferrence is sufficient for most usages. But you can bypass the inference by explicitly setting the Databricks SQL type in your client code. All supported Databricks SQL types have `TDbsqlParameter` implementations which you can import from `databricks.sql.parameters`.
 
 `TDbsqlParameter` objects must always be passed within a list. Either paramstyle (`:named` or `?`) may be used. However, if your query uses the `named` paramstyle, all `TDbsqlParameter` objects must be provided a `name` when they are constructed.
 
@@ -158,7 +158,7 @@ Rendering parameters inline is supported on all versions of DBR since these quer
 
 ## SQL Syntax
 
-Variables in your SQL query can look like `%(param)s` or like `%s`. 
+Variables in your SQL query can look like `%(param)s` or like `%s`.
 
 #### Example
 
@@ -172,7 +172,7 @@ SELECT * FROM table WHERE field = %s
 
 ## Python Syntax
 
-This connector follows the [PEP-249 interface](https://peps.python.org/pep-0249/#id20). The expected structure of the parameter collection follows the paramstyle of the variables in your query. 
+This connector follows the [PEP-249 interface](https://peps.python.org/pep-0249/#id20). The expected structure of the parameter collection follows the paramstyle of the variables in your query.
 
 ### `pyformat` paramstyle Usage Example
 
@@ -210,7 +210,7 @@ with sql.connect(..., use_inline_params=True) as conn:
 
 The result of the above two examples is identical.
 
-**Note**: `%s` is not compliant with PEP-249 and only works due to the specific implementation of our inline renderer. 
+**Note**: `%s` is not compliant with PEP-249 and only works due to the specific implementation of our inline renderer.
 
 **Note:** This `%s` syntax overlaps with valid SQL syntax around the usage of `LIKE` DML. For example if your query includes a clause like `WHERE field LIKE '%sequence'`, the parameter inlining function will raise an exception because this string appears to include an inline marker but none is provided. This means that connector versions below 3.0.0 it has been impossible to execute a query that included both parameters and LIKE wildcards. When `use_inline_params=False`, we will pass `%s` occurrences along to the database, allowing it to be used as expected in `LIKE` statements.
```
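The `pyformat` → `named` dynamic rewrite that docs/parameters.md describes can be pictured as a simple marker substitution. A rough sketch under that assumption, not the connector's actual implementation (which also handles edge cases like `LIKE '%sequence'` that this ignores):

```python
import re

def pyformat_to_named(query: str) -> str:
    # Turn each %(name)s pyformat marker into a :name named marker.
    # Illustrative only; not the connector's real transform.
    return re.sub(r"%\((\w+)\)s", r":\1", query)

print(pyformat_to_named("SELECT * FROM table WHERE field = %(value)s"))
```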
**`examples/README.md`** (+3 −3) — the changed lines remove trailing whitespace only.

```diff
@@ -7,7 +7,7 @@ We provide example scripts so you can see the connector in action for basic usag
 - DATABRICKS_TOKEN
 
 Follow the quick start in our [README](../README.md) to install `databricks-sql-connector` and see
-how to find the hostname, http path, and access token. Note that for the OAuth examples below a 
+how to find the hostname, http path, and access token. Note that for the OAuth examples below a
 personal access token is not needed.
 
@@ -38,7 +38,7 @@ To run all of these examples you can clone the entire repository to your disk. O
 - **`set_user_agent.py`** shows how to customize the user agent header used for Thrift commands. In
 this example the string `ExamplePartnerTag` will be added to the the user agent on every request.
 - **`staging_ingestion.py`** shows how the connector handles Databricks' experimental staging ingestion commands `GET`, `PUT`, and `REMOVE`.
-- **`sqlalchemy.py`** shows a basic example of connecting to Databricks with [SQLAlchemy 2.0](https://www.sqlalchemy.org/). 
+- **`sqlalchemy.py`** shows a basic example of connecting to Databricks with [SQLAlchemy 2.0](https://www.sqlalchemy.org/).
 - **`custom_cred_provider.py`** shows how to pass a custom credential provider to bypass connector authentication. Please install databricks-sdk prior to running this example.
 - **`v3_retries_query_execute.py`** shows how to enable v3 retries in connector version 2.9.x including how to enable retries for non-default retry cases.
-- **`parameters.py`** shows how to use parameters in native and inline modes.
+- **`parameters.py`** shows how to use parameters in native and inline modes.
```