Closed: HGSilveri closed this pull request 5 months ago.
From my understanding, we want users to test their full workflow on an emulator before sending it to a QPU. Currently, there is a gap where the validation for emulators and for the QPU is not the same, so a workflow that works on an emulator may fail once the user switches to a QPU. This is what we attempt to fix by introducing this `mimic_qpu` feature (`strict_validation` on the cloud side). However, the gap between emulators and the QPU does not stop at validation: for example, the only type of results available for a QPU is a bitstring counter (and bitstring history), while more types of results are available for emulators. There are probably other gaps we could cite here. This means a user could potentially design a workflow on emulators using features that are not available on a QPU, and they wouldn't know it before they switch to the QPU backend.

On the cloud side, I was thinking about introducing a new type of emulator, "FresnelEmulator", with the same feature set as Fresnel (validation, type of results, ...). This could also let us extend the feature set of emulators without worrying about missing features on the QPU side. Users should target this emulator if they want to test a workflow before running it on Fresnel. If they have no plans to run on a QPU and simply want to experiment, they can use EMU_FREE or EMU_TN with extended features.

On pulser-pasqal, this could be achieved by introducing a new backend class: it would have the same `validate_sequence` method as a `QPUBackend` and, in general, the same interface as the `QPUBackend` (I assume more features are available for local backends right now than for `QPUBackend`).

What do you think about this? We can have a call if you want more details.
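To make the proposal concrete, here is a minimal sketch of a QPU-mimicking emulator backend. All names (`QPUBackend`, `FresnelEmulatorBackend`, the `validate_sequence` checks, the `MAX_RUNS` limit) are illustrative stand-ins, not the actual pulser-pasqal API; the point is only that the emulator inherits the QPU's validation and result format.

```python
from collections import Counter

class QPUBackend:
    """Hypothetical stand-in for a QPU backend with strict validation."""

    @staticmethod
    def validate_sequence(sequence: dict) -> None:
        # A QPU only accepts sequences built for a real (non-virtual) device.
        if sequence.get("device") == "virtual":
            raise ValueError("QPU requires a non-virtual device.")

    def run(self, sequence: dict, runs: int) -> Counter:
        self.validate_sequence(sequence)
        # On hardware, only bitstring counts come back.
        return Counter({"00": runs})


class FresnelEmulatorBackend(QPUBackend):
    """Emulator reusing the QPU's validation and result format, so a
    workflow that passes here should also pass on the QPU."""

    def run(self, sequence: dict, runs: int) -> Counter:
        self.validate_sequence(sequence)  # same checks as the QPU
        # ... emulate, then return QPU-shaped results (bitstring counts only)
        return Counter({"00": runs // 2, "11": runs - runs // 2})
```

With this shape, switching `FresnelEmulatorBackend` for `QPUBackend` requires no change to the surrounding workflow, which is the whole point of the proposal.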
I agree that right now we are only mimicking the "QPU experience" up until submission and that we should extend this all the way to the results. That being said, having a specific `QPUEmulator` backend on Pulser does not sound like the best fit for `pulser-pasqal`. Looking at how an emulator backend differs from a QPU backend, there are two fundamental differences:

1. An emulator accepts a `config` while a QPU does not. I don't see this as an issue because most parameters just influence how the results are computed; for the select few that influence how the results are *returned* (in particular, whether the emulator returns the results at every time step or just at the end of the computation), we can forbid them from differing from the "QPU-like option" when `mimic_qpu=True`. Then, when the backend is switched to a QPU, the worst that can happen is a leftover `config` argument that should not be there; the user just omits it and the workflow stays otherwise unchanged.
2. The returned results *may* include extra information, e.g. the statevector at the end of the computation instead of just the bitstrings. However, these richer results can be converted into simpler results where only the bitstrings are returned (i.e. like on a QPU).

Therefore, I would argue that with the `mimic_qpu` option we can already go further and enforce that the returned results match the same format as a QPU's, so we don't need to define a special backend for this.
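Both points above can be sketched in a few lines. This is a hypothetical illustration only: the `return_statevector` option, the result fields, and the `run_emulated` function are made-up names, not the real `EmulatorConfig` interface. It shows a result-shape option being rejected under `mimic_qpu=True`, and rich emulator results downconverted to QPU-shaped bitstring counts.

```python
from collections import Counter

def run_emulated(config: dict, mimic_qpu: bool = False) -> dict:
    """Toy emulator run: rejects non-QPU-like config options under
    mimic_qpu, and downconverts results to the QPU format."""
    if mimic_qpu and config.get("return_statevector"):
        # Options that change the *shape* of the results are forbidden,
        # since a QPU could never honour them.
        raise ValueError("return_statevector is incompatible with mimic_qpu=True")

    # Rich emulator output: per-shot bitstrings plus, optionally, a statevector.
    shots = ["00", "11", "00", "11", "11"]
    result = {"shots": shots}
    if config.get("return_statevector"):
        result["statevector"] = [0.707, 0.0, 0.0, 0.707]

    if mimic_qpu:
        # Downconvert to the only format a QPU returns: bitstring counts.
        return {"counts": Counter(shots)}
    return result
```

The downconversion is lossy by design: that loss is exactly the "QPU experience" the flag is meant to reproduce.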
Ok, so it looks like the gap between emulator backends and a `QPUBackend` is not that big, so indeed a simple flag might do the trick. I would argue that a user may not be aware of the limitations on the type of results of a QPU compared to an emulator, and that this could lead to confusion if it is not made obvious from the Python interface itself - but that might be very naive.

Anyway, we can start with this and reconsider if the gap widens.
Another reason why I wanted to challenge the current design is that right now there are a lot of confusing aspects for a user when selecting a device. But this is a more generic complaint than just the `mimic_qpu` flag.
> Another reason why I wanted to challenge the current design is that right now there are a lot of confusing aspects for a user when selecting a device:
>
> * there is a device embedded in the sequence, which constrains the register and pulses at sequence creation

True, and this is backend independent. The backend just determines which types of devices you can use, with emulator backends being more permissive and allowing `VirtualDevice`, for example.

> * then we use a backend which may or may not override the device embedded in the sequence

We may "override" the device, but the `strict=True` option ensures the sequence stays the same, so this has no effect on the results (plus, the user doesn't even need to know this is happening).

> * optionally, the backend can be passed a flag which will change the constraints from the first bullet point

This is not really true. The constraints on the register and pulses are not influenced by `mimic_qpu`. The only constraints that are actually selectively enforced pertain to the backend execution itself.
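The distinction between sequence-level constraints and execution-only checks can be sketched as follows. This is a hypothetical illustration: the `MAX_RUNS_PER_JOB` limit and the `job_params` shape are assumptions for the example, not the real Pulser constraints.

```python
MAX_RUNS_PER_JOB = 1000  # assumed hardware limit, for illustration only

def validate_job_params(job_params: list, mimic_qpu: bool) -> None:
    """Checks that only matter at execution time; the sequence itself
    (register, pulses) is untouched by mimic_qpu."""
    if not mimic_qpu:
        return  # emulators accept anything here
    for params in job_params:
        runs = params.get("runs", 0)
        if runs > MAX_RUNS_PER_JOB:
            raise ValueError(
                f"A QPU job cannot exceed {MAX_RUNS_PER_JOB} runs (got {runs})."
            )
```

Note that nothing here inspects the register or the pulses: the flag only tightens what is accepted at submission time.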
I think perhaps the issue here is one of nomenclature. `Device` suggests that the sequence will be executed on a particular device, which I agree can be misleading. In hindsight, perhaps a name like `DeviceSpecs` would have been clearer, since in the end that's all they are: they restrict the sequence and its execution, but they don't bind the sequence to that specific device.
@MatthieuMoreau0 I considered adding these changes here but I think they would be better suited in a new PR. I feel like this PR is already big enough as is and encapsulates well the "validation" side of things. We can in the future make a new one for the computation/results side. How do you feel about this?
* With `mimic_qpu=True`, a backend will enforce as many QPU validation checks as possible.
* `QPUBackend.validate_sequence()` and `QPUBackend.validate_job_params()` can now easily be used by external tools.
* The `noise_model` module is no longer inside `backend`.

Closes #661