qiboteam / boostvqe

Using DBI to boost VQE optimization

Some results with SGD #33

Closed MatteoRobbiati closed 2 months ago

MatteoRobbiati commented 5 months ago

VQE training data

In the following I collect into a table the currently uploaded data obtained by training VQEs. The table shows some hyper-parameters, but much more information can be extracted from the `optimization_results.json` file located in each folder.

Summary table

In the following, when a list is reported, it means that the target training setup has been repeated for all the elements of the list. E.g., BFGS has been used on 8 qubits with 4, 5 and 6 layers. You can find all the results in the dedicated folders.

| Qubits | Layers | Optimizer | $\eta$ | $N_{\rm epochs}$ | Random seed |
|---|---|---|---|---|---|
| 7 | 3 | Adagrad | 0.05 | 2000 | `arange(1, 101, 5)` |
| 8 | [4, 5, 6] | BFGS | - | - | 42 |
| 10 | [15, 20, 25, 30, 35] | Adam | 0.005 | 500 | 42 |
| 10 | [5, 7, 9, 11, 13, 15, 17, 19] | Adam | 0.01 | 2000 | 42 |
| 10 | 20 | Adam | [0.1, 0.05, 0.01, 0.005, 0.001] | 2000 | 42 |
| 10 | 10 | Adagrad | 0.05 | 2000 | `arange(1, 101, 10)` |
| 11 | [5, 8, 10, 20] | Adam | 0.005 | 500 | 42 |
| 11 | [20, 50] | Adam | 0.05 | 500 | 42 |
| 12 | 50 | Adam | 0.05 | 500 | 42 |
| 13 | 20 | Adam | 0.05 | 500 | 42 |

The data are organized into zipped files with names like `{Optimizer}_{nqubits}q_{nlayers}l_{learning_rate}lr_{random_seed}s`.

It should be quite easy to identify which training run each file refers to.
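As a quick illustration, the naming scheme above can be reproduced with a small helper (a sketch, not part of the repository):

```python
def archive_name(optimizer, nqubits, nlayers, learning_rate, seed):
    """Build the results-folder name following the convention
    {Optimizer}_{nqubits}q_{nlayers}l_{learning_rate}lr_{random_seed}s."""
    return f"{optimizer}_{nqubits}q_{nlayers}l_{learning_rate}lr_{seed}s"

print(archive_name("Adam", 10, 20, 0.005, 42))
# → Adam_10q_20l_0.005lr_42s
```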

Instructions to load the architectures and play with DBQA

First, you have to download the results to your computer and unzip the files. You can update the `results` branch by switching to it and pulling:

```sh
git checkout results
git pull origin results
```

Then you have to unzip them (do it manually or with the command `unzip your_file_to_be_unzipped.zip`).
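If you prefer not to unzip each archive by hand, a stdlib-only Python loop (a sketch, not a repository script) can extract every archive in the results directory:

```python
from pathlib import Path
import zipfile

def unzip_all(directory="."):
    """Extract every .zip archive found in `directory`, in place."""
    for archive in Path(directory).glob("*.zip"):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(archive.parent)
        print(f"extracted {archive.name}")

unzip_all()
```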

You can now use the script you find in the `extras` folder. Between lines 19 and 24 of this script you can set the filepath from which you want to load results, the training status you want to load (namely the parameters obtained at a certain point of the training), and the number of DBI steps.
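For reference, reading a training checkpoint out of `optimization_results.json` might look like the sketch below. The key name `parameters_history` and the per-epoch list layout are assumptions for illustration, not the file's actual schema; check the JSON produced by your run before relying on it:

```python
import json

def load_checkpoint(path, epoch=-1):
    """Load the VQE parameters saved at a given training status.

    `epoch=-1` (default) returns the final parameters; any other index
    picks the corresponding intermediate training status.
    NOTE: the "parameters_history" key is a hypothetical schema.
    """
    with open(path) as f:
        results = json.load(f)
    return results["parameters_history"][epoch]
```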

On top of this, please proceed with the task @marekgluza mentioned in the meeting.

MatteoRobbiati commented 5 months ago

I just uploaded some further results: some long trainings of the 10-qubit, 10-layer model to explore the learning rate value. The results are summarized in this plot: lr_hyperopt.pdf.

MatteoRobbiati commented 4 months ago

Important: I am moving all the files here: https://mega.nz/folder/tewlwBzI#0lW4fvTiaFD1KvSXivsn3A.

Edoardo-Pedicillo commented 2 months ago

I am closing this issue, since we are saving all the data on Mega.
