
Conversation

Contributor

@neutrinoceros neutrinoceros commented Nov 1, 2025

This is at a proof-of-concept stage: I haven't bothered writing tests for it yet, but I wanted to bring it up because I always feel like this feature is missing. While it is complicated to implement correctly in the general case, it so happens that I wrote a small library that already does the heavy lifting needed to determine, in a portable fashion, whether it is safe to build abi3 binaries. So, before I commit more time to it: does it sound worth an extra dependency to others?


codecov bot commented Nov 1, 2025

Codecov Report

❌ Patch coverage is 37.50000% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 74.61%. Comparing base (df08a28) to head (7dd3d6d).
⚠️ Report is 13 commits behind head on main.

Files with missing lines | Patch % | Lines
extension_helpers/_utils.py | 37.50% | 5 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main     #132       +/-   ##
===========================================
+ Coverage   58.31%   74.61%   +16.30%     
===========================================
  Files           4        4               
  Lines         379      386        +7     
===========================================
+ Hits          221      288       +67     
+ Misses        158       98       -60     


@neutrinoceros force-pushed the feat/support-auto-abi3-target branch from bd0bcf1 to 34cd182 on November 1, 2025 at 13:53
@astrofrog
Member

I'm not super keen on having an auto mode, because it's usually good to be specific about whether one expects abi3 wheels or not; otherwise a package could silently start switching to non-abi3 wheels due to some issue with the interpreter or runtime_introspect.

However, if others are keen on this, I think we should make runtime_introspect an optional dependency.

@neutrinoceros
Contributor Author

> otherwise a package could silently start switching to non-abi3 wheels due to some issue with the interpreter or runtime_introspect.

Fair point. It should actually be easy enough to distinguish two auto modes (see the sketch after this list):

  • "always": automatically set the version, and fail explicitly if that is not possible
  • "preferred": this corresponds to what I implemented first; automatically set the version, but silently fall back to non-abi3 if that is not possible

Would that work better for you?
My main motivation is to make this env var usable unconditionally, even when incompatible free-threaded builds are included in a build matrix.
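
For concreteness, here is a rough sketch of how the two modes could branch. Only runtime_feature_set() and its supports("py-limited-api") query (with its True/False/None results) come from this PR and runtime-introspect; the import path, the helper name, the error message, and the way the target tag is derived are illustrative assumptions, not part of the proposal.

# Hedged sketch of the proposed "always" / "preferred" modes.
import sys

from runtime_introspect import runtime_feature_set  # assumed import path


def resolve_abi3_target(mode: str) -> str | None:
    """Return an abi3 target tag such as 'cp311', or None for a regular wheel."""
    supported = runtime_feature_set().supports("py-limited-api")
    if supported:  # True: Limited API support established with certainty
        # Illustrative policy: target the running interpreter's minor version.
        return f"cp3{sys.version_info.minor}"
    if mode == "always":
        raise RuntimeError(
            "Limited API build requested ('always' mode) but not available "
            "on this interpreter (e.g. a free-threaded build)."
        )
    # mode == "preferred": silently fall back to a non-abi3 wheel.
    return None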

@neutrinoceros
Contributor Author

Also, if PEP 809, or something similar, gets accepted, we could be facing the need to build more than one limited-API wheel per package as soon as next year. In that case, hard-coding the target version, which is currently the only supported option, would force users to build (temporary) additional logic to work around it.


@neutrinoceros force-pushed the feat/support-auto-abi3-target branch from 34cd182 to 7dd3d6d on November 4, 2025 at 18:50
if py_limited_api is not None:
    if py_limited_api == "auto":
        fs = runtime_feature_set()
        if fs.supports("py-limited-api"):
Contributor Author

@neutrinoceros neutrinoceros Nov 4, 2025


btw, in case it helps, there is now a semantic distinction between the two possible falsy outputs:

  • False means that we know for sure the Limited API is not supported (you'd get this with CPython 3.13t or 3.14t)
  • None means "not sure", and basically indicates that it's time to patch and upgrade runtime-introspect

Meanwhile, True is only ever returned when support is established with certainty (that's the intended behavior, anyway). I'm being extremely thorough in my testing approach for this library, so I feel I can confidently say that, while false negatives are possible (but should be reported and fixed), false positives should never happen.
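
For illustration, a caller distinguishing the three outcomes might look like this (a hedged sketch: only runtime_feature_set() and supports() come from the diff above; the variable names and branch bodies are placeholders):

# Assumes runtime_feature_set is imported from runtime-introspect, as above.
fs = runtime_feature_set()
supported = fs.supports("py-limited-api")

if supported is True:
    # Certain: safe to build an abi3 / Limited API wheel.
    build_limited_api = True
elif supported is False:
    # Certain: the Limited API is unavailable (e.g. CPython 3.13t or 3.14t).
    build_limited_api = False
else:  # supported is None
    # Unknown runtime: treat it conservatively, and consider reporting it
    # upstream so runtime-introspect can be patched and upgraded.
    build_limited_api = False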
