Support manually skipping some dtypes #265

asmeurer opened this issue on May 17, 2024 · 0 comments · Closed, fixed by #266

PyTorch now "supports" higher-precision unsigned integer dtypes like uint16, uint32, and uint64, but the support is so poor that every test of a two-argument function will fail:

>>> torch.equal(torch.tensor(0, dtype=torch.int32), torch.tensor(0, dtype=torch.uint16))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Promotion for uint16, uint32, uint64 types is not supported, attempted to promote Int and UInt16

See data-apis/array-api-compat#138

I propose adding an environment variable ARRAY_API_TESTS_SKIP_DTYPES, which could be set to a comma-separated list like uint16,uint32,uint64 to manually skip those dtypes (the same behavior as if they were not present in the namespace). Skipping required dtypes would still not be supported.
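
A minimal sketch of how a test suite might consume such a variable follows; the helper name testable_dtype_names and the REQUIRED_DTYPES set are illustrative assumptions here, not the suite's actual implementation:

import os

import numpy as np  # any array-API namespace works; numpy is only for the demo at the bottom

# Parse the proposed variable into a set of dtype names to skip.
SKIP_DTYPES = {
    name.strip()
    for name in os.environ.get("ARRAY_API_TESTS_SKIP_DTYPES", "").split(",")
    if name.strip()
}

# Placeholder set: which dtypes count as "required" is defined by the spec and
# the suite, not by this sketch.
REQUIRED_DTYPES = {"bool", "float32", "float64"}

def testable_dtype_names(xp):
    """Return dtype names to test: present in the namespace and not skipped."""
    bad = SKIP_DTYPES & REQUIRED_DTYPES
    if bad:
        raise ValueError(f"cannot skip required dtypes: {sorted(bad)}")
    all_names = [
        "bool", "int8", "int16", "int32", "int64",
        "uint8", "uint16", "uint32", "uint64",
        "float32", "float64", "complex64", "complex128",
    ]
    return [n for n in all_names if hasattr(xp, n) and n not in SKIP_DTYPES]

# Example: ARRAY_API_TESTS_SKIP_DTYPES=uint16,uint32,uint64 python sketch.py
print(testable_dtype_names(np))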

asmeurer added a commit to asmeurer/array-api-tests that referenced this issue on May 21, 2024:
This requires using dtype strategies from hh (the suite's hypothesis helpers) instead of xps (Hypothesis's array-API strategy namespace).

This also fixes some functions in linalg and fft that were incorrectly tested
against only real floating-point dtypes, a consequence of Hypothesis's confusing
nomenclature in which "floating_dtypes" means only real floating-point dtypes
(this change also eliminates the use of Hypothesis's confusing "scalar_dtypes").

Fixes data-apis#265
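
For background on that nomenclature, here is a rough sketch of drawing both real and complex floating-point dtypes with Hypothesis's array-API strategies; it assumes numpy 2.x as the array module and a Hypothesis version whose strategy namespace provides complex_dtypes():

import numpy as np
from hypothesis import given, strategies as st
from hypothesis.extra.array_api import make_strategies_namespace

xps = make_strategies_namespace(np, api_version="2022.12")

# floating_dtypes() draws only real floats (e.g. float32/float64), so a strategy
# covering every floating-point dtype has to include the complex ones as well.
all_floating_dtypes = st.one_of(xps.floating_dtypes(), xps.complex_dtypes())

@given(dtype=all_floating_dtypes)
def test_draws_real_and_complex(dtype):
    # kind "f" is real floating, kind "c" is complex floating
    assert np.dtype(dtype).kind in ("f", "c")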