Open marekgluza opened 3 months ago
@marekgluza Sure, I will paste the models by Monday
Two other models we previously discussed are:
(1) Transverse and Longitudinal Field Ising Model (TLFIM), possibly with periodic and open boundary conditions: $H_\mathrm{TLFIM} = \sum_i Z_i Z_{i+1} + h_x \sum_i X_i + h_z \sum_i Z_i$
See also: https://quantum-journal.org/papers/q-2022-09-29-824/pdf/
(2) Fermi-Hubbard Model: $H_\mathrm{FH} = -t \sum_{\langle i, j \rangle, \sigma \in \{\uparrow, \downarrow\}} \hat c^\dagger_{i,\sigma} \hat c_{j,\sigma} + U \sum_i \hat n_{i,\uparrow} \hat n_{i,\downarrow} - \mu \sum_i \hat n_i$
Jordan-Wigner transformation: $\hat c_i^\dagger = \left[ \prod_{k=1}^{i-1} (-Z_k) \right] S_i^+$, with $S_i^\pm = \frac{1}{2} \left( X_i \pm i Y_i \right)$
See also: https://arxiv.org/pdf/2312.09292 or Ashley Montanaro's works
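To make the string structure of the Jordan-Wigner mapping concrete, here is a small pure-Python sketch (the function name and the `(coefficient, pauli_string)` output format are my own choices) that expands $\hat c_i^\dagger$ into its two Pauli strings, folding each $-Z_k$ factor's sign into the coefficient:

```python
def jw_creation(i, nqubits):
    """Pauli-string expansion of c_i^dagger under Jordan-Wigner:
    c_i^dagger = [prod_{k<i} (-Z_k)] S_i^+ with S_i^+ = (X_i + i Y_i)/2.
    Returns a list of (coefficient, pauli_string) pairs, one per
    Pauli string, with identities padded on the right.
    """
    sign = (-1) ** i  # one factor of -1 per Z in the parity string
    tail = "I" * (nqubits - i - 1)
    return [
        (sign * 0.5, "Z" * i + "X" + tail),
        (sign * 0.5j, "Z" * i + "Y" + tail),
    ]
```

For example, `jw_creation(2, 4)` returns `[(0.5, "ZZXI"), (0.5j, "ZZYI")]`: the parity string of Z's precedes the raising operator on site 2.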
We briefly discussed other models, but to me it seems we don't need yet another one. If anything, we could slightly modify the XXZ case we studied before and pick some other interesting cases for:
(3) XYZ Model: $H_\mathrm{XYZ} = \sum_{i} \left( X_i X_{i+1} + \Delta_y Y_i Y_{i+1} + \Delta_z Z_i Z_{i+1} \right)$
See also: https://arxiv.org/pdf/2206.01982
@khanhuyengiang can you proceed with the following?
- @khanhuyengiang in the branch https://github.com/qiboteam/boostvqe/tree/VQE%2BDBI-more-models please make a notebook for each model using the SymbolicHamiltonian to create the respective Hamiltonian. As outcome, please start a PR from that branch
@Edoardo-Pedicillo @MatteoRobbiati
Johannes' comment following the meeting earlier today:
I’m still wondering what the gallery of numerical results contains, e.g. different models, number of iterations, layers, sizes, etc. It seems to me that a more heuristic work should convince with a solid number of examples. Specifically since the conjecture seems to be that you’ll “end up in yet another (better) local minimum”.
I added an entry in the timeline that we'll have to run more models, at least for the supplements. This can be as simple as model+results, delegating the detailed discussion to the next paper. That way we keep momentum while also strengthening the energy-reduction results.
> I’m still wondering what the gallery of numerical results contains, e.g. different models, number of iterations, layers, sizes, etc. It seems to me that a more heuristic work should convince with a solid number of examples. Specifically since the conjecture seems to be that you’ll “end up in yet another (better) local minimum”.
Thanks @gumbrich for the comment! :)
To improve the work in this direction, please feel free to propose a list of trainings that would be appropriate to perform to achieve more robustness (so that we can check items off the list and keep it updated).
@MatteoRobbiati thanks for the prompt reply, and the nice overview this morning!
Indeed, concerning 1) I meant the target Hamiltonians like TLFIM, Fermi-Hubbard and XYZ.
Concerning 2), that looks very good, thanks for pointing it out. During the meeting I felt that exactly what Stefano proposed would be very helpful, to understand better how the DBQA improvement depends upon the number of previous iterations. That would seem to help a lot in making the statement "we jump to another local minimum" more quantitative, and even check it numerically by continuing with VQE afterwards (and compare how well that works in different settings, i.e. different # of iterations and so on).
There are a few other things I thought of, e.g. what if one could find a good warm start, which I think is a key direction for variational quantum algorithms. How much does the hybrid approach improve then? Of course, one of the motivations here seems to be that it helps speed up training without prior knowledge. I'll think a bit more about what could be useful and ping you if anything reasonable comes to mind. :)
Linking this implementation of fermions https://github.com/qiboteam/vqe-sun/blob/main/quspin_functions.py from this paper https://arxiv.org/abs/2106.15552
Many thanks for the suggestions @gumbrich! I will set up a list of the jobs it would be ideal to run :)
In the supplemental material we will need to show more models.
I don't want to lose momentum, so while @MatteoRobbiati is generating data, let's use his focus to dispatch more VQE jobs. We can then use the stored data to 'boost' with DBI.
[x] @gumbrich Johannes, could you paste here by Tuesday (asap appreciated) the latex definitions of the models that you want to consider?
[x] @Edoardo-Pedicillo thanks for #45
[x] @MatteoRobbiati @Edoardo-Pedicillo, please run more VQE trainings on TII
The goal is to have a PR with the VQE data ready for the SM.
After each task please let me know directly so I can ping others and connect the work tasks, thanks!