Modifies QAny to not have an implicit _Unit encoding#1813

Open
petim0 wants to merge 14 commits into quantumlib:main from petim0:qany_modification
Conversation

petim0 (Contributor) commented Feb 16, 2026

This is an extension of the PR #1812.

The bug I encountered in #1812 mainly comes from the fact that QAny assumed a _Unit encoding. So I decided to change that: QAny should not represent any particular encoding. The only assumption QAny makes is that 0 is the bit-string 00...00.
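To make the distinction concrete, here is a minimal sketch (plain Python, not Qualtran's actual API; the helper names are hypothetical) of the kind of implicit integer encoding being discussed: treating a bag of n bits as a big-endian unsigned integer.

```python
def uint_to_bits(x: int, n: int) -> list[int]:
    """Encode x as n big-endian bits, as a _UInt-style encoding would."""
    assert 0 <= x < 2**n
    return [(x >> (n - 1 - i)) & 1 for i in range(n)]

def bits_to_uint(bits: list[int]) -> int:
    """Decode big-endian bits back to an unsigned integer."""
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out

# The one assumption that is always safe: 0 is the all-zeros bit-string.
assert uint_to_bits(0, 4) == [0, 0, 0, 0]
```

Any other integer interpretation (the value 5 being [0, 1, 0, 1], say) is an encoding choice that QAny, on this view, should not make for you.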

I saw that before #1717 you had the following comment:
# TODO: Raise an error once usage of QAny is minimized across the library

In this PR I now throw an error so that nobody can use QAny "wrongly" anymore, and I change the code where QAny was misused so that QAny becomes QUInt. This enforces better use of types in the code and prevents the backlog from growing too much. The warnings would not be permanent (I hope): they would be turned into errors once the usage of QAny is minimized across the library.
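The warn-now, error-later pattern described above could look roughly like this. This is a hypothetical sketch: LegacyQAnyWarning and qany_from_bits are illustrative names, not Qualtran's actual API.

```python
import warnings

class LegacyQAnyWarning(UserWarning):
    """Emitted when QAny bits are given an implicit integer interpretation."""

def qany_from_bits(bits: list[int], *, strict: bool = False) -> int:
    """Decode bits as a big-endian unsigned int, warning (or erroring) on the way."""
    if strict:
        # The eventual end state: refuse the implicit encoding outright.
        raise TypeError("QAny has no integer encoding; use QUInt instead.")
    warnings.warn(
        "Interpreting QAny bits as an unsigned integer is deprecated; "
        "use QUInt instead.",
        LegacyQAnyWarning,
    )
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out
```

Flipping `strict` library-wide once QAny usage is minimized would turn every remaining warning site into a hard error.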

I don't know if you dropped the comment in #1717 because you felt it was impossible to change the current code, since the implicit encoding is used everywhere. I would understand if this is just a lost cause; assuming a _UInt encoding is not that bad, it just causes problems if you aren't careful when doing classical simulations.

Note that I used the LegacyPartitionWarning; maybe I should create a LegacyQAnyWarning instead.

Tell me what you think.


mpharrigan (Collaborator) commented

Thanks for investigating this! I'll have to look at the PR more closely, but wanted to offer some quick initial thoughts.

I don't think we'll ever get rid of QAny completely, because sometimes you really do just need a bag of bits. It's similar to (void*), which you probably shouldn't use day-to-day in your business logic, but when you're mucking around with low-level stuff it comes in handy.

I think the semantics of some operations are pretty clearly defined on QAny, like partitioning a QAny(n) into QAny(x), QAny(n-x). You can do this with a bag of bits (I guess this assumes the semantics are such that the bits are ordered).
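The partition operation is well-defined precisely because it only needs ordered bits, not an encoding. A plain-Python sketch (not Qualtran's actual Partition bloq):

```python
def partition_bits(bits: list[int], x: int) -> tuple[list[int], list[int]]:
    """Split a QAny(n)-style bag of ordered bits into QAny(x) and QAny(n - x)."""
    assert 0 <= x <= len(bits)
    return bits[:x], bits[x:]

left, right = partition_bits([1, 0, 1, 1, 0], 2)
assert left == [1, 0] and right == [1, 1, 0]
```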

Other operations -- not so much. I don't think you should be able to simulate the addition of QAnys. The semantics of "addition" require that your bits do indeed encode something crazy like a "number".

The classical simulation protocol uses Python values to simulate what would be quantum values flowing through your bloq composition. You could imagine having a Python type for each quantum type that better models the behavior of the quantum values. We could have QAny.from_bits return a ClassicalAnyVal which lets you split it but not add it.
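A sketch of that hypothetical ClassicalAnyVal: a classical stand-in for QAny values that supports splitting (well-defined on ordered bits) but refuses arithmetic (which would require an encoding). This is an illustration of the idea, not an existing Qualtran class.

```python
class ClassicalAnyVal:
    """A bag of ordered bits with no numeric interpretation."""

    def __init__(self, bits: tuple[int, ...]):
        self.bits = tuple(bits)

    def split(self, x: int) -> tuple["ClassicalAnyVal", "ClassicalAnyVal"]:
        # Splitting only needs bit order, so it is allowed.
        return ClassicalAnyVal(self.bits[:x]), ClassicalAnyVal(self.bits[x:])

    def __add__(self, other):
        # Addition would require the bits to encode a number, so it is not.
        raise TypeError("QAny values have no encoding; addition is undefined.")
```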

We typically don't do this for conciseness and performance. When we first implemented fixed-point arithmetic operations we used a Fxp rich object in the classical simulator and it was disastrously slow; so it was changed to use our friends the Python integers to model classical instantiations of a QFxp. Very egregious, and I think there's an open issue somewhere to wrap this in maybe our own fixed point value class with better performance. Instead of putting the classical modeling logic in the classical model values, we just slap it into the on_classical_vals methods on the actual operations.
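The int-instead-of-rich-object trade-off can be sketched like this: a QFxp-style value with num_frac fractional bits is modeled as a plain Python int holding its raw bits, and arithmetic happens on the raw ints. The helper names are illustrative, not Qualtran's API.

```python
def fxp_to_raw_int(value: float, num_frac: int) -> int:
    """Quantize a float to its raw fixed-point integer representation."""
    return round(value * 2**num_frac)

def raw_int_to_fxp(raw: int, num_frac: int) -> float:
    """Recover the float modeled by the raw integer."""
    return raw / 2**num_frac

# Fixed-point addition is just int addition on the raw values:
a = fxp_to_raw_int(1.5, num_frac=3)   # raw value 12
b = fxp_to_raw_int(0.25, num_frac=3)  # raw value 2
assert raw_int_to_fxp(a + b, num_frac=3) == 1.75
```

This keeps the simulator fast, at the cost of pushing the fixed-point semantics out of the value class and into the operations' on_classical_vals methods, as described above.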

In summary, the way I think about it now is that there's an underlying abstract thing like a "quantum unsigned integer" whose type we can annotate but which we can model in different ways: a Python int, a Python value class, a set of tensor indices, ...; and likewise there's an underlying abstract thing, a "lumped register of qubits", that we could think about how to best model.
