microsoft / qsharp-language

Official repository for design of the quantum programming language Q# and its core libraries
MIT License

Add concept of a quantum-or-classical value #58

Open Strilanc opened 3 years ago

Strilanc commented 3 years ago

Suggestion

When writing arithmetic code, there is a tendency to write the same method several times over, varying only in whether certain inputs are quantum or classical. For example, you would have one method for xoring a LittleEndian into another LittleEndian, and a separate method for xoring a BigInt into a LittleEndian. It would be convenient if there were instead a single xor method that accepted a BigIntOrLittleEndian and handled both cases.
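A rough sketch of what that duplication looks like in today's Q#, using made-up operation names (neither XorBigIntIntoLE nor XorLEIntoLE is an existing library operation; the snippet assumes open Microsoft.Quantum.Intrinsic, Microsoft.Quantum.Arithmetic, and Microsoft.Quantum.Convert):

// Two operations with the same intent, differing only in whether the
// source value is classical or quantum.
operation XorBigIntIntoLE(value : BigInt, target : LittleEndian) : Unit is Adj {
    let bits = BigIntAsBoolArray(value);
    for (i in 0 .. Length(target!) - 1) {
        if (i < Length(bits) and bits[i]) {
            X((target!)[i]);
        }
    }
}

operation XorLEIntoLE(source : LittleEndian, target : LittleEndian) : Unit is Adj + Ctl {
    // Assumes both registers have the same length.
    for (i in 0 .. Length(target!) - 1) {
        CNOT((source!)[i], (target!)[i]);
    }
}

// With a quantum-or-classical value, a single signature could cover both cases:
// operation XorIntoLE(value : BigIntOrLittleEndian, target : LittleEndian) : Unit { ... }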

This doesn't just apply when designing a library's set of methods; it also comes up inside method bodies. For example, suppose you are creating a binary tree of qubits representing the ANDs of the leaf input qubits, as follows:

let n = Length(input_qubits);
using (work_qubits = Qubit[n-1]) {
    let qs = input_qubits + work_qubits;
    for (i in 0..n-2) {
        // CCX is meant in the sense of CCNOT (Toffoli): since each work qubit
        // starts in |0>, qs[n+i] becomes the AND of qs[2*i] and qs[2*i+1].
        CCX(qs[2*i], qs[2*i+1], qs[n+i]);
    }
}

One of the problems you will run into is that, if n is not guaranteed to be a power of 2, the qubit at position n+n/2 ends up mixing leaf qubits from the end of the input with intermediate-node qubits that represent leaf qubits from the start of the input. For example, with n = 5 the target qs[7] is the AND of the leaf qs[4] and the intermediate node qs[5]. To fix this, you can pad out to a power of 2:

let n = CeilPow2(Length(input_qubits));
using (work_qubits = Qubit[2*n - 1 - Length(input_qubits)]) {
    // The first n - Length(input_qubits) work qubits act as padding leaves;
    // the remaining n - 1 hold the tree's intermediate nodes.
    let qs = input_qubits + work_qubits;
    for (i in 0..n-2) {
        CCX(qs[2*i], qs[2*i+1], qs[n+i]);
    }
}

The problem now is that you are allocating unnecessary qubits to simplify your code. But if you were able to make an array of qubits-or-bits, and CCX understood that when one of its controls was a bit it should become a CX or no-op as appropriate, then you could do this:

let n = CeilPow2(Length(input_qubits));
// RepeatArray stands in for something like ConstantArray: n - Length(input_qubits)
// classical false bits used as padding leaves.
let padding = RepeatArray(n - Length(input_qubits), false);
using (work_qubits = Qubit[n-1]) {
    let qs = input_qubits + padding + work_qubits;
    for (i in 0..n-2) {
        CCX(qs[2*i], qs[2*i+1], qs[n+i]);
    }
}

And everything would just work. To be fair, this is still not perfect: the unnecessary allocation is half as large, but it is still there.
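For comparison, here is roughly what that dispatch looks like when spelled out by hand in today's Q# (CCXWithClassicalControl is a made-up helper, not a library operation); the proposal would make this case analysis implicit:

// A Toffoli where one control is known classically: a false classical control
// makes the whole gate a no-op, a true one degrades it to a CNOT on the
// remaining quantum control.
operation CCXWithClassicalControl(classicalControl : Bool, quantumControl : Qubit, target : Qubit) : Unit is Adj + Ctl {
    if (classicalControl) {
        CNOT(quantumControl, target);
    }
}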

Considerations

This is a bit tricky to add to the language because at the moment the language is very picky about types. There's no way to say "this array contains a mix of qubits and bits", or "the control of this method can be a qubit or a bit".
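Concretely, something along these lines is rejected by the type checker today, because Qubit[] and Bool[] have no common element type that would let them be concatenated (ConstantArray is from Microsoft.Quantum.Arrays; the rest mirrors the snippets above):

let padding = ConstantArray(n - Length(input_qubits), false);  // Bool[]
let qs = input_qubits + padding + work_qubits;                 // error: cannot concatenate Qubit[] with Bool[]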


bamarsha commented 3 years ago

I could imagine two ways to do this...

A discriminated union that can contain either a qubit or a boolean:

type Bit =
    | ClassicalBit(Bool)
    | QuantumBit(Qubit)
    ;

CCNOT could take in Bit as its control type instead of Qubit. However, every boolean or qubit control would then have to be wrapped, as in ClassicalBit(false) or QuantumBit(q).
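Hypothetical call sites under this option might look as follows (assuming CCNOT's controls were changed to Bit while the target stays a Qubit):

// Every control has to be wrapped explicitly, even when it is an ordinary qubit.
CCNOT(QuantumBit(q1), ClassicalBit(false), target);  // compiles away to a no-op
CCNOT(QuantumBit(q1), ClassicalBit(true), target);   // behaves like CNOT(q1, target)
CCNOT(QuantumBit(q1), QuantumBit(q2), target);       // the ordinary Toffoli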

A type class could also work:

class Bit<'a> {
    ControlOnBit(bit : 'a, op : 'b => Unit) : 'b => Unit;
}

instance Bit<Bool> {
    ControlOnBit(bit, op) {
        return bit ? op : NoOp;
    }
}

instance Bit<Qubit> {
    ControlOnBit(bit, op) {
        return Controlled op([bit], _);
    }
}

This would be a little more ergonomic because you wouldn't need to wrap your qubits or booleans in anything.
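For instance, usage might look something like this (again hypothetical, since Q# has no type classes today; padBit is a Bool and ctrl is a Qubit):

// The same call shape works whether the "bit" is classical or quantum.
ControlOnBit(padBit, CNOT(ctrl, _))(target);  // no-op if padBit is false, otherwise CNOT(ctrl, target)
ControlOnBit(ctrl, X)(target);                // resolves to Controlled X([ctrl], target), i.e. CNOT(ctrl, target)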

It might be strange for operations to take in a different type for the control bit than for the target bit, though.

Also, in general there's a pretty strong separation between classical and quantum types in Q#. I think @bettinaheim would know more about this and whether it would make sense with Q#'s design to try to blur the distinction between the two in certain cases.