Closed a-corni closed 1 year ago
DetuningMap.draw displays the detuning map as a trap: when displaying the detuning maps from a sequence, only the atoms of the register are shown, with the weights of the detuning map. I propose to indicate the weights of the detuning map in the box reserved for the channel addressing (instead of GLOBAL). Here is how 3D traps are represented:
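Purely as an illustration of the idea above (this is not the Pulser implementation; coordinates and weights below are made up), one can sketch a register drawing where each atom is annotated with its detuning-map weight instead of a generic GLOBAL label:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical atom positions (in µm) and detuning-map weights -- illustrative only
coords = {"q0": (0.0, 0.0), "q1": (5.0, 0.0), "q2": (2.5, 4.3)}
weights = {"q0": 0.5, "q1": 0.3, "q2": 0.2}

fig, ax = plt.subplots()
xs, ys = zip(*coords.values())
sc = ax.scatter(xs, ys, c=[weights[q] for q in coords], cmap="viridis", s=200)
for q, (x, y) in coords.items():
    # Show the qubit id together with its weight next to each atom
    ax.annotate(f"{q}: {weights[q]:.2f}", (x, y),
                textcoords="offset points", xytext=(8, 8))
fig.colorbar(sc, ax=ax, label="detuning weight")
fig.savefig("detuning_map_sketch.png")
```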
One edge case I stumbled upon: having a detuning map different from the register:
Drawing a sequence with a DMM and a Rydberg Global channel having a modulation bandwidth: I have taken into account the modes "input", "output" and "input+output", and the possibility to draw the amplitude and detuning. For each requested mode (one for the input, one for the modulated output), one curve is shown per requested quantity (one for the amplitude, one for the detuning):
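As a rough stand-in for what "input" vs. modulated "output" means here (the real modulation depends on the channel's modulation bandwidth; this is just an illustrative moving-average filter, not Pulser's implementation):

```python
import numpy as np

def modulate(samples: np.ndarray, rise_ns: int) -> np.ndarray:
    """Crude stand-in for finite modulation bandwidth: smooth the
    programmed input by a moving average over `rise_ns` samples."""
    kernel = np.ones(rise_ns) / rise_ns
    # mode="same" keeps the output aligned with the input timeline
    return np.convolve(samples, kernel, mode="same")

# A square detuning "input" waveform, one sample per ns
inp = np.concatenate([np.zeros(50), np.full(100, -10.0), np.zeros(50)])
# The "output" curve rises and falls gradually instead of jumping
out = modulate(inp, rise_ns=20)
```

Plotting `inp` and `out` on the same axes gives the two curves per quantity mentioned above (e.g. with the output dashed to tell it apart, as suggested below).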
Looking good! I see you went for the amplitude curve in the end... I guess it's not that much extra effort. One question: how does this look with 100 qubits?
Ah, one other thing: how about using a dashed line for the modulated output to make it clearer?
Yes, displaying the amplitude did not add any complexity. I will run a test with many qubits; I am also a bit worried about the scaling with 100 qubits... Possibly, I could label the set with the largest number of atoms as "the rest" or something like that. Here is one example with local addressing:
This reminds me: what happens when you have multiple bases? Do you make a plot for the ground-rydberg and one for the digital? How do you tell them apart?
It does not scale well:
Proposals:
I figured it would improve visibility to have the legend displayed on one graph (the colors and targets are the same for each plot). However, perhaps it is not that clear. You tell me. Alternatively, we could display the legend in a separate graph/box below the two graphs.
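The "legend in a box below the graphs" idea can be sketched with matplotlib's figure-level legend (labels here are hypothetical target groups, not Pulser output):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
x = list(range(100))
ax1.plot(x, [v * 0.1 for v in x], label="q0, q1")       # hypothetical group
ax2.plot(x, [-v * 0.1 for v in x], label="the rest")    # hypothetical group

# Collect handles from both axes and draw ONE shared legend below the plots,
# instead of a per-axes legend competing with the curves for space
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
fig.legend(h1 + h2, l1 + l2, loc="lower center", ncol=2)
fig.subplots_adjust(bottom=0.2)  # leave room for the legend box
fig.savefig("legend_below_sketch.png")
```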
It seems that the RTD build has some issues with the images in the Bayesian Optimization tutorial, it's throwing these errors
/home/docs/checkouts/readthedocs.org/user_builds/pulser/checkouts/569/docs/source/tutorials/optimization.nblink:35: WARNING: image file not readable: tutorials/attachment:AFstate_1D.png
/home/docs/checkouts/readthedocs.org/user_builds/pulser/checkouts/569/docs/source/tutorials/optimization.nblink:230: WARNING: image file not readable: tutorials/attachment:interpolated_pulses.png
/home/docs/checkouts/readthedocs.org/user_builds/pulser/checkouts/569/docs/source/tutorials/optimization.nblink:639: WARNING: image file not readable: tutorials/attachment:diagram_baye_opt.png
/home/docs/checkouts/readthedocs.org/user_builds/pulser/checkouts/569/docs/source/tutorials/optimization.nblink:760: WARNING: image file not readable: tutorials/attachment:Average_convs.png
Do these ring any bells?
I modified this notebook to replace Simulation with QutipEmulator and update the display of registers. I don't understand why RTD is no longer able to read the image files 😅
This is a wild guess, but try to open it and save it with Jupyter Notebook (instead of VSCode) and commit the differences.
HGSilveri commented on 2023-09-15T09:57:22Z ----------------------------------------------------------------
The placement of the legend turned out a bit weird here; I don't know why it doesn't place it in the upper right corner.
a-corni commented on 2023-09-15T14:39:22Z ----------------------------------------------------------------
Because I have forced it to be in the lower part of the drawing. I did this because in the simplest case (three qubits equally spaced by 5 µm) the name of the qubit is on top and the only room left is below (but I agree it's not great). One solution would be to extend the size of the figures.
HGSilveri commented on 2023-09-15T14:42:23Z ----------------------------------------------------------------
Extending is a viable option. Another would be to not always show this legend. For example, if you are calling Register.draw(), it's pretty clear that those are atoms.
a-corni commented on 2023-09-15T15:34:36Z ----------------------------------------------------------------
Personally, I would be in favour of not having it at all and just using it for the drawing of the local quantities. Nobody has questioned the difference between traps and atoms, and except for the detuning map, whose representation varies depending on whether it is in a Sequence or not, it does not really matter.
My second-best option is to extend; that's perhaps a generally good option, as I find the simplest registers a bit small...
I reviewed the notebooks, but I can't look at the outputs without the RTD build.
Everything looks good from an aesthetic point of view; I'll let you have a look. However, there seems to be an error in the simulation. I'm starting to look into it.
There was no issue with the code in the end; the optimization just had to be relaunched. It's now ready for review @HGSilveri
Looks like it's almost there :)
One question: how is print(seq) looking with DMMs?
Aah I was thinking of tackling it in a new issue afterwards, but I guess it's good to see it now ;)
As you wish; as long as we don't forget about it, it's okay.
Here is how it is shown:
Ok, not so bad. What about when the detuning is not constant? I think it won't show as a detuned delay in that case
For instance, from 100->300 on dmm_0, it shows as a Pulse with 0 amplitude and a Ramp from -10 to 0.
Of course, don't know how I missed that, sorry...
Ok, I think we can modify it a bit to always show something like:
t: {start} -> {end} | Detuning: {detuning_waveform} | Targets: ...
What do you think?
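The line format proposed above can be sketched in a few lines of Python (the function and argument names here are placeholders, not the actual Pulser internals):

```python
def dmm_slot_repr(start: int, end: int, detuning: str, targets: list) -> str:
    """Format a DMM time slot as proposed: always show the detuning
    waveform and the targeted qubits, even for constant detunings."""
    return f"t: {start}->{end} | Detuning: {detuning} | Targets: {', '.join(targets)}"

line = dmm_slot_repr(100, 300, "Ramp(-10->0 rad/µs)", ["q0", "q1", "q2", "q3"])
# line == "t: 100->300 | Detuning: Ramp(-10->0 rad/µs) | Targets: q0, q1, q2, q3"
```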
Sounds cool!
Here is where I am at. I am wondering if I should not delete Detuned Delays for DMM channels (and have t: 100->300 | Detuning: -10 rad/µs | Targets: q0, q1, q2, q3 instead).
I was going to suggest that, I think that would be better indeed
I think that's good?
Yes, looks great :)
All comments should be fixed now :)