Closed mpharrigan closed 6 months ago
Rough PR plan:
This is in reference to how on_classical_vals() treats the classical version of the data type, correct? I'm finding that as I build new arithmetic bloqs out of previous ones I've built (ModAdd() requires a GreaterThan() bloq), each sub-bloq is taking issue with the type. For example, a simple bb.join() operation on the sign bit in the following commit required a change to the classical sim of XGate(). Not sure of the best way to proceed with this.
https://github.com/quantumlib/Qualtran/commit/78938148e8fdc6dc0f7b24291f60e842cb1df9c8
Classical simulation is part of it, for sure. I'm not sure what the best way forward is. If an arithmetic operation has a different decomposition depending on the type of its inputs, then it probably makes sense to dispatch based on the register type (or just be very explicit with AddInt, AddUnsignedInt, AddFixedPoint... bloqs), rather than trying to reinterpret the bitsize.
Stripping out some discussion from the PRs:
What are the desired semantics of 0-bitsize registers? Presently, an edge in the CompositeBloq graph represents a data dependency; but a 0-bitsize variable can't contain any data.
Something that other programming languages have that we do not is the concept of default values for functions (bloqs). E.g. you could have a register named "ctrl" with a default value of "active", and then callers wouldn't have to provide a ctrl soquet.
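A rough sketch of what default register values might look like. To be clear, this is entirely hypothetical: Qualtran's `Register` has no `default` attribute, and `RegisterWithDefault` is an invented name for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: NOT a real Qualtran API.
@dataclass(frozen=True)
class RegisterWithDefault:
    name: str
    bitsize: int
    default: Optional[int] = None  # e.g. 1 for an always-active control

# A caller could then omit the "ctrl" soquet and the builder would
# substitute the default classical value:
ctrl = RegisterWithDefault("ctrl", bitsize=1, default=1)
print(ctrl.default)  # -> 1
```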
I'm warming to the idea of 1-bit versions of other types. E.g. QInt(1) would store the values -1 and 0. I'm wondering if we should nix QBit() in favor of QUInt(1) or QAny(1), depending on the context. Maybe keep QBit() as an alias for QAny(1).
Should we be worried that the datatypes are pretty biased towards the Z/computational basis? Is the |+> state a superposition of the unsigned integers 0 and 1 or is it the unsigned integer 0 in the X basis?
Another bit of food for thought: should we replace QUInt and QInt with QFxp? The latter is a strict superset, and we could generalize the code in many places to support the general version of QFxp, or add a validator that raises an error if the fractional bitsize is non-zero.
QAny is also a superset
I think integer types are more natural to most and one of the benefits of typing is making the code more explicit right? I think it could be a bit confusing if we need to write everything as a QFxp(n, 0).
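To illustrate the "strict superset" claim in plain Python (no Qualtran or fxpmath dependency; the helper is illustrative only): a fixed-point interpretation with zero fractional bits reduces to an ordinary unsigned integer, which is why QFxp(n, 0) can stand in for QUInt(n).

```python
def fxp_value(bits: int, n_word: int, n_frac: int, signed: bool = False) -> float:
    """Interpret an n_word-bit pattern `bits` as a fixed-point number
    with n_frac fractional bits (two's complement if signed)."""
    assert 0 <= bits < 2 ** n_word
    if signed and bits >= 2 ** (n_word - 1):
        bits -= 2 ** n_word  # two's-complement wrap-around
    return bits * 2.0 ** (-n_frac)

# With zero fractional bits, QFxp(n, 0) behaves like QUInt(n):
print(fxp_value(0b101, n_word=3, n_frac=0))  # -> 5.0
# ...while a nonzero n_frac gives genuine fractions:
print(fxp_value(0b101, n_word=3, n_frac=2))  # -> 1.25
```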
Or do you mean inheriting from QFxp and just constructing it appropriately? Alternatively, we could update the types to return an Fxp string for each type, if that would be helpful?
> I think it could be a bit confusing if we need to write everything as a QFxp(n, 0)
I agree. My motivation for proposing the idea was mainly classical simulation. I think it'd make our lives easier if we change the definition of ClassicalValT from Union[int, NDArray[int]] to Union[Fxp, NDArray[Fxp]], since the Fxp data type is a good classical representation for the different types we care about.
It's not a strong suggestion though, until we can make classical simulation work well for the different supported types.
The classical values are just supposed to be the classical (Python) analog of the qdtype. So if your bloq uses QInt, the classical value will be a Python int. If your bloq uses QFxp, then yes, the classical analog would likely be fxpmath.Fxp.
> The classical values are just supposed to be the classical (python) analog to the qdtype. So if your bloq uses QInt, the classical value will be Python int.

Python int doesn't have a restriction on bitsize; QInt(bitsize) does. A true representation in classical simulation (which is important for error analysis) would be a classical data type corresponding to QInt(bitsize), which would likely be constructed using fxpmath.Fxp. Therefore, I think we'd have to accept fxpmath.Fxp objects as the default type in classical simulation, irrespective of whether the qdtype is QFxp / QUInt / QInt.
The underlying reason is that, at least so far, we only ever do fixed-point quantum arithmetic, and we often care about the exact bitsize, unlike classical programming languages, where we most often deal with either non-parameterized fixed-size integers (32/64 bits) or floating-point arithmetic.
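The bitsize point above can be made concrete with a small sketch: a Python int never overflows, while arithmetic on a fixed-width register wraps modulo 2^bitsize. The helper below is illustrative, not how Qualtran implements addition.

```python
def add_mod_bitsize(a: int, b: int, bitsize: int) -> int:
    """n-bit unsigned addition: the result wraps modulo 2**bitsize,
    unlike Python's arbitrary-precision int."""
    return (a + b) % (2 ** bitsize)

print(7 + 1)                     # -> 8 (Python int never overflows)
print(add_mod_bitsize(7, 1, 3))  # -> 0 (a 3-bit register wraps around)
```

Tracking this wrap-around faithfully is exactly what matters for error analysis of fixed-point arithmetic.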
Currently we use a Python int and use if statements to validate its range.
This is done, woo
This is still not finished since we have open PRs for propagating data types to classical simulation. Is there another issue where using QDtypes in classical simulation is tracked?
At present, we have Register, which contains all the attributes needed to define part of a function signature: the name and side attributes go towards the function signature, while the bitsize (and shape) arguments describe the quantum data type. This singular family of dependent quantum data types is pretty coarse-grained. We can extend our library of available data types. Things that are not in scope:
These elements would comprise a full type system, but would add a lot of complexity and are likely not relevant for the type of resource estimates we aim to do.
Support for qudit registers (i.e. arbitrary-dimension-d units and registers thereof) and/or qupits (prime dimension) is not out of scope, but these shall only be added if there's a clear resource-estimation use case for them that cannot be accomplished with Register+bitsize or bounded-domain registers.