QuEraComputing / bloqade-python

QuEra's Neutral Atom SDK for Analog QPUs
https://bloqade.quera.com/

Batch submission implementation for Builder. #104

Closed weinbe58 closed 1 year ago

weinbe58 commented 1 year ago

TODO:

weinbe58 commented 1 year ago

The more I think about it, the less sure I am that we need the Emit.assign method. In principle, it should be sufficient to pass the assignments as kwargs or a dictionary into the functions used to generate the tasks.

Something like:

```python
def quera(self, nshots: int, **assignments):
    ...
```

or

```python
def quera(self, nshots: int, assignments: Dict[str, Union[Number, List[Number]]]):
    ...
```

Also, for parameter sweeps those assignments would be lists; would it make sense to make that a separate method so the assignments are easier to parse?
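For illustration, here is a minimal, hypothetical sketch (not the actual API) of why mixing scalar and list-valued assignments in a single dictionary makes parsing awkward, and what a normalization step would have to do:

```python
from numbers import Number
from typing import Dict, List, Union

# Hypothetical helper, not part of the actual API: normalize mixed
# scalar/list assignments so every parameter maps to a list of values.
# Scalars and sweep lists sharing one dict is what makes parsing awkward.
def normalize(assignments: Dict[str, Union[Number, List[Number]]]) -> Dict[str, List[Number]]:
    return {
        name: list(value) if isinstance(value, (list, tuple)) else [value]
        for name, value in assignments.items()
    }

print(normalize({"initial_detuning": -15, "anneal_time": [0.1, 0.5, 1.0]}))
# -> {'initial_detuning': [-15], 'anneal_time': [0.1, 0.5, 1.0]}
```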

weinbe58 commented 1 year ago

Actually, I think a better design is to have another option, batch_assign:

```python
import numpy as np  # needed for np.linspace below

builder = (
    location.Square(6)
    .rydberg.detuning.uniform.piecewise_linear(
        durations=["up_time", "anneal_time", "up_time"],
        values=[
            "initial_detuning",
            "initial_detuning",
            "final_detuning",
            "final_detuning",
        ],
    )
    .rydberg.rabi.amplitude.uniform.piecewise_linear(
        durations=["up_time", "anneal_time", "up_time"],
        values=[0, "rabi_amplitude_max", "rabi_amplitude_max", 0],
    )
    .assign(
        initial_detuning=-15,
        final_detuning=10,
        rabi_amplitude_max=15,
        up_time=0.1,
    )
    .batch_assign(
        anneal_time=np.linspace(0.1, 3.8, 51),
    )
)
```
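In this design, the scalar parameters from assign would stay fixed while batch_assign sweeps anneal_time over np.linspace(0.1, 3.8, 51), so the one builder above would expand into 51 task configurations, one per value of anneal_time.
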
Roger-luo commented 1 year ago

I agree that batch_assign is better; that's also why there was an explicit assign function in the builder.

Initially, when I thought about this, I was actually referring to the matrix in the GitHub Actions config, since the concept appears in lots of other places for executing different parameter combinations in parallel.
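Whether batch_assign over several parameters should zip the lists elementwise or take their cartesian product (the GitHub Actions matrix semantics) is not settled in this thread; the following is only a plain-Python illustration of what the matrix interpretation would mean:

```python
from itertools import product

# Illustration only: the "matrix" idea, i.e. one parameter set per combination
# of the swept values (cartesian product), as in a GitHub Actions matrix.
matrix = {
    "anneal_time": [0.1, 1.0, 3.8],
    "final_detuning": [10, 20],
}

parameter_sets = [dict(zip(matrix, combo)) for combo in product(*matrix.values())]
print(len(parameter_sets))  # 6 parameter sets: 3 anneal times x 2 detunings
```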

weinbe58 commented 1 year ago

This issue will be blocked until #115 is merged.

Roger-luo commented 1 year ago

From Friday's meeting with @weinbe58: the result types of a batch task and a single task should be the same data structure, which means the result of a single task will have a batch size of 1.
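A minimal sketch of what this could mean; the class and field names here are placeholders, not the actual bloqade result types:

```python
from dataclasses import dataclass
from typing import Dict, List

# Placeholder names, for illustration only: one result container shared by
# batch and single tasks, where a single task is just a batch of size 1.
@dataclass
class TaskResult:
    shot_counts: Dict[str, int]  # e.g. measured bitstring -> count

@dataclass
class BatchResult:
    results: List[TaskResult]  # one entry per parameter set in the batch

single = BatchResult(results=[TaskResult(shot_counts={"0101": 100})])
assert len(single.results) == 1  # a single task has batch size 1
```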

weinbe58 commented 1 year ago

@Roger-luo I think batch_assign, along with assign, should not mutate the builder object. In fact, by this point we should freeze all building operations on the sequence. Hence I think the basic concept for Emit should be the following:

```python
class Emit(Builder):
    # NOTE: once methods of this class are called, the building process
    # terminates; none of the methods in this class return a Builder.

    def __init__(self, builder: Builder, assignments=None, batch=None) -> None:
        super().__init__(builder)
        self.__assignments__ = assignments or {}
        self.__batch__ = batch or {}
        self.__sequence__ = None
        self.__register__ = None

    # These methods terminate the build, so no more build steps can happen.
    def assign(self, **assignments):
        new_assignments = dict(self.__assignments__)
        new_assignments.update(**assignments)
        return Emit(self.__parent__, assignments=new_assignments, batch=self.__batch__)

    def batch_assign(self, **batch):
        new_batch = dict(self.__batch__)
        new_batch.update(**batch)
        return Emit(self.__parent__, assignments=self.__assignments__, batch=new_batch)
```

I will also modify the AST builder to allow for explicit instances of Emit, which will fix any issues related to this change.
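For illustration, the intended non-mutating usage of this sketch would look roughly like the following (assuming the Emit class above and the finished builder from the earlier example; not the final API):

```python
# Assumes the Emit sketch above and a completed `builder`; illustration only.
emit = Emit(builder)
fixed = emit.assign(initial_detuning=-15, final_detuning=10)
swept = fixed.batch_assign(anneal_time=[0.1, 1.0, 3.8])

# Each call returned a new Emit; `emit` and `fixed` are left unchanged,
# and the underlying builder was never mutated.
```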