qiboteam / boostvqe

Using DBI to boost VQE optimization

More models for supplemental material #36

Open marekgluza opened 3 months ago

marekgluza commented 3 months ago

In the supplemental material we will need to show more models.

I don't want to lose momentum, so while @MatteoRobbiati is generating data let's use his focus to dispatch more VQE jobs. We can then use the stored data to 'boost' with DBI.

The goal is to have a PR with VQE data ready for the supplemental material.

After each task please let me know directly so I can ping others and connect the work tasks, thanks!

gumbrich commented 3 months ago

@marekgluza Sure, I will paste the models by Monday

gumbrich commented 3 months ago

Two other models we previously discussed are:

(1) Transverse and Longitudinal Field Ising Model (TLFIM), possibly with periodic and open boundary conditions: $H_\mathrm{TLFIM} = \sum_i Z_i Z_{i+1} + h_x \sum_i X_i + h_z \sum_i Z_i$

See also: https://quantum-journal.org/papers/q-2022-09-29-824/pdf/
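
If useful for dispatching jobs, here is a minimal qibo sketch for building this Hamiltonian (assuming the `qibo.symbols` / `SymbolicHamiltonian` interface; the parameter values are placeholders):

```python
# Minimal sketch (assumed qibo interface): TLFIM with open or periodic boundaries.
from qibo import hamiltonians
from qibo.symbols import X, Z


def tlfim(nqubits, h_x, h_z, periodic=False):
    """H = sum_i Z_i Z_{i+1} + h_x sum_i X_i + h_z sum_i Z_i."""
    bonds = nqubits if periodic else nqubits - 1
    form = sum(Z(i) * Z((i + 1) % nqubits) for i in range(bonds))
    form += h_x * sum(X(i) for i in range(nqubits))
    form += h_z * sum(Z(i) for i in range(nqubits))
    return hamiltonians.SymbolicHamiltonian(form, nqubits=nqubits)


ham = tlfim(nqubits=6, h_x=1.0, h_z=0.5, periodic=False)  # placeholder values
```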

(2) Fermi-Hubbard Model: $H_\mathrm{FH} = -t \sum_{\langle i, j \rangle, \sigma \in \{\uparrow, \downarrow\}} \hat c^\dagger_{i,\sigma} \hat c_{j,\sigma} + U \sum_i \hat n_{i,\uparrow} \hat n_{i,\downarrow} - \mu \sum_i \hat n_i$

Jordan-Wigner transformation: $\hat c_i^\dagger = \left[ \prod_{k=1}^{i-1} (-Z_k) \right] S_i^+, \quad S_i^\pm = \frac{1}{2} \left( X_i \pm i Y_i \right)$

See also: https://arxiv.org/pdf/2312.09292 or Ashley Montanaro's works
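
A minimal sketch of one way to obtain the JW-mapped qubit Hamiltonian (using openfermion is just my suggestion here, not something already in the repo; lattice size and couplings are placeholders):

```python
# Minimal sketch: JW-mapped Fermi-Hubbard Hamiltonian via openfermion.
from openfermion import fermi_hubbard, jordan_wigner

fermionic_ham = fermi_hubbard(
    x_dimension=2,           # 2x2 lattice
    y_dimension=2,
    tunneling=1.0,           # t
    coulomb=4.0,             # U
    chemical_potential=0.5,  # mu
    periodic=False,
)
qubit_ham = jordan_wigner(fermionic_ham)  # QubitOperator, i.e. a sum of Pauli strings
```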

We briefly discussed other models, but to me it seems we don't need yet another one. If anything, we could slightly modify the XXZ case we studied before and pick some other interesting cases for:

(3) XYZ Model: $H_\mathrm{XYZ} = \sum_{i} \left( X_i X_{i+1} + \Delta_y Y_i Y_{i+1} + \Delta_z Z_i Z_{i+1} \right)$

See also: https://arxiv.org/pdf/2206.01982
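
Again a minimal sketch under the same assumed qibo interface; it reduces to the XXZ case we already studied when $\Delta_y = 1$:

```python
# Minimal sketch (assumed qibo interface): XYZ chain with periodic boundaries.
from qibo import hamiltonians
from qibo.symbols import X, Y, Z


def xyz(nqubits, delta_y, delta_z):
    form = sum(
        X(i) * X((i + 1) % nqubits)
        + delta_y * Y(i) * Y((i + 1) % nqubits)
        + delta_z * Z(i) * Z((i + 1) % nqubits)
        for i in range(nqubits)  # periodic boundaries
    )
    return hamiltonians.SymbolicHamiltonian(form, nqubits=nqubits)
```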

marekgluza commented 3 months ago

@khanhuyengiang can you proceed with the following?

marekgluza commented 3 months ago

@Edoardo-Pedicillo @MatteoRobbiati

Johannes' comment following the meeting earlier today:

> I’m still wondering what the gallery of numerical results contains, e.g. different models, number of iterations, layers, sizes, etc. It seems to me that a more heuristic work should convince with a solid number of examples. Specifically since the conjecture seems to be that you’ll “end up in yet another (better) local minimum”.

I added an entry to the timeline that we'll have to run more models, at least for the supplements. This can be as simple as model + results, deferring the detailed discussion to the next paper. That way we keep momentum but also strengthen the energy-reduction results.

MatteoRobbiati commented 3 months ago

> I’m still wondering what the gallery of numerical results contains, e.g. different models, number of iterations, layers, sizes, etc. It seems to me that a more heuristic work should convince with a solid number of examples. Specifically since the conjecture seems to be that you’ll “end up in yet another (better) local minimum”.

Thanks @gumbrich for the comment! :)

  1. By different models do you mean e.g. the aforementioned target Hamiltonians (TLFIM, XYZ, etc.), or different ansätze for the VQE?
  2. Regarding the number of iterations, layers, etc., what we are doing is collected here: https://github.com/qiboteam/boostvqe/pull/33. Indeed, we are exploring many configurations to give robustness to the numerics.

To improve the work in this direction, please feel free to propose a list of trainings that would be appropriate to perform for more robustness (so that we can work through the list and keep it updated).

gumbrich commented 3 months ago

@MatteoRobbiati thanks for the prompt reply, and the nice overview this morning!

Indeed, concerning 1) I meant the target Hamiltonians like TLFIM, Fermi-Hubbard and XYZ.

Concerning 2), that looks very good, thanks for pointing it out. During the meeting I felt that exactly what Stefano proposed would be very helpful: understanding better how the DBQA improvement depends on the number of previous VQE iterations. That would help a lot in making the statement "we jump to another local minimum" more quantitative, and we could even check it numerically by continuing with VQE afterwards (comparing how well that works in different settings, i.e. different numbers of iterations and so on).
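
To make that concrete, a rough sketch of the scan I have in mind (assuming qibo's `models.VQE` interface; the ansatz, target model and hyperparameters below are illustrative placeholders, not what the repo actually uses):

```python
# Sketch: train VQE with different iteration budgets from the same start,
# storing each checkpoint so the DBI/DBQA boost can be applied to it later.
import numpy as np
from qibo import Circuit, gates, hamiltonians, models


def build_ansatz(nqubits, nlayers):
    """Illustrative hardware-efficient ansatz (placeholder, not the repo's)."""
    circuit = Circuit(nqubits)
    for _ in range(nlayers):
        circuit.add(gates.RY(q, theta=0.0) for q in range(nqubits))
        circuit.add(gates.CZ(q, q + 1) for q in range(nqubits - 1))
    circuit.add(gates.RY(q, theta=0.0) for q in range(nqubits))
    return circuit


nqubits, nlayers = 6, 3
ham = hamiltonians.XXZ(nqubits=nqubits, delta=0.5)
nparams = (nlayers + 1) * nqubits  # one RY angle per qubit per rotation layer
checkpoints = {}

for maxiter in [50, 100, 200, 500]:
    vqe = models.VQE(build_ansatz(nqubits, nlayers), ham)
    np.random.seed(42)  # identical starting point for every budget
    params0 = np.random.uniform(-np.pi, np.pi, nparams)
    energy, params, _ = vqe.minimize(
        params0, method="Powell", options={"maxiter": maxiter}
    )
    checkpoints[maxiter] = (energy, params)  # to be boosted with DBI afterwards
```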

There are a few other things I thought of, e.g. what if one could find a good warm start, which I think is a key direction for variational quantum algorithms. How much does the hybrid approach improve then? Of course, one of the motivations here seems to be that it helps speed up training without prior knowledge. I'll think a bit more about what could be useful and ping you if anything reasonable comes to mind. :)

marekgluza commented 3 months ago

Linking this implementation of fermions https://github.com/qiboteam/vqe-sun/blob/main/quspin_functions.py from this paper https://arxiv.org/abs/2106.15552

MatteoRobbiati commented 3 months ago

Many thanks for the suggestions @gumbrich! I will set up a table of jobs that would be ideal to run :)
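
Something along these lines (all values are placeholders to be agreed on together):

```python
# Sketch of the job table: a cartesian product over the settings discussed above.
from itertools import product

target_models = ["XXZ", "TLFIM", "XYZ", "FermiHubbard"]
nqubits_list = [4, 6, 8, 10]
nlayers_list = [1, 3, 5]
maxiter_list = [100, 500, 1000]
seeds = range(5)

jobs = [
    {"model": m, "nqubits": n, "nlayers": l, "maxiter": it, "seed": s}
    for m, n, l, it, s in product(
        target_models, nqubits_list, nlayers_list, maxiter_list, seeds
    )
]
print(f"{len(jobs)} VQE trainings to dispatch")
```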