jiaor17 / DiffCSP

[NeurIPS 2023] The implementation for the paper "Crystal Structure Prediction by Joint Equivariant Diffusion"
MIT License

Why do we use only a single test batch and multiple "T_max" ranges in `optimization.py`? #12

Open fedeotto opened 7 months ago

fedeotto commented 7 months ago

Hello everyone, I have a couple of questions about the diffusion code in `optimization.py` (see below):

```python
def diffusion(loader, energy, uncond, step_lr, aug):
    frac_coords = []
    num_atoms = []
    atom_types = []
    lattices = []
    input_data_list = []
    # Only the first batch of the test loader is used.
    batch = next(iter(loader)).to(energy.device)

    all_crystals = []

    # Ten runs with increasing time ranges T = 100, 200, ..., 1000.
    for i in range(1, 11):
        print(f'Optimize from T={i*100}')
        outputs, _ = energy.sample(batch, uncond, step_lr=step_lr, diff_ratio=i/10, aug=aug)
        all_crystals.append(outputs)

    # Concatenate the ten runs, yielding 10x the batch size in structures.
    res = {k: torch.cat([d[k].detach().cpu() for d in all_crystals], dim=0).unsqueeze(0)
           for k in ['frac_coords', 'atom_types', 'num_atoms', 'lattices']}

    lengths, angles = lattices_to_params_shape(res['lattices'])

    return res['frac_coords'], res['atom_types'], lengths, angles, res['num_atoms']
```
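To make the concatenation step concrete, here is a minimal illustration with dummy tensors (only `num_atoms`, and a made-up batch size of 100): the dict comprehension stacks the ten per-time-range outputs, so the result holds 10x the structures of the single input batch.

```python
import torch

# Dummy stand-in for the ten `energy.sample` outputs: each run returns
# tensors for one batch of 100 structures (values here are arbitrary).
batch_size = 100
all_crystals = [{"num_atoms": torch.full((batch_size,), 4)} for _ in range(10)]

# Same concatenation pattern as in the snippet above.
res = {k: torch.cat([d[k].detach().cpu() for d in all_crystals], dim=0).unsqueeze(0)
       for k in ["num_atoms"]}
print(res["num_atoms"].shape)  # torch.Size([1, 1000]) -> 10x the batch size
```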

Q1) In my understanding, only a single batch is drawn from the test loader, so only the structures in that batch are optimized rather than the whole test set. Is this the desired behavior, or am I missing something? I would instead iterate over the batches in order to cover all the structures I want to optimize in my test set.

Q2) Why do we use multiple time ranges (T = 100, 200, ..., 1000) to optimize the structures? This simply yields 10x the number of structures in the original test set. Is iterating over multiple T_max values necessary, or can I set a single fixed value (e.g. 1000) so that I get exactly the number of structures I want to optimize?
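For reference, the alternative I have in mind (iterate over every batch, one fixed time range) would look roughly like the sketch below. The loader and `energy.sample` here are stubs I made up so the loop is runnable on its own; the real signature is assumed from the snippet above.

```python
import torch

# Hypothetical stand-in for the real energy model, mimicking the dict
# that `energy.sample` returns in the snippet above (4 atoms per structure).
class StubEnergy:
    device = "cpu"
    def sample(self, batch, uncond, step_lr, diff_ratio, aug):
        n = batch.shape[0]
        out = {
            "frac_coords": torch.rand(n * 4, 3),
            "atom_types": torch.randint(1, 95, (n * 4,)),
            "num_atoms": torch.full((n,), 4),
            "lattices": torch.eye(3).repeat(n, 1, 1),
        }
        return out, None

def diffusion_all_batches(loader, energy, uncond, step_lr, aug, diff_ratio=1.0):
    # One fixed time range (diff_ratio=1.0, i.e. T=1000) and every batch,
    # so each test structure is optimized exactly once.
    all_crystals = []
    for batch in loader:
        outputs, _ = energy.sample(batch, uncond, step_lr=step_lr,
                                   diff_ratio=diff_ratio, aug=aug)
        all_crystals.append(outputs)
    keys = ["frac_coords", "atom_types", "num_atoms", "lattices"]
    return {k: torch.cat([d[k].detach().cpu() for d in all_crystals], dim=0)
            for k in keys}

loader = [torch.rand(5, 8) for _ in range(3)]  # 3 stub batches of 5 "structures"
res = diffusion_all_batches(loader, StubEnergy(), uncond=None, step_lr=1e-5, aug=1.0)
print(res["num_atoms"].shape[0])  # 15 structures: one candidate per test structure
```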

Many thanks and best regards,

Fed

jiaor17 commented 7 months ago

Hi,

Thanks for your interest! We adopt a similar setting to the official CDVAE code for a fair comparison on the property optimization task. As described in Section 5.3 of the CDVAE paper, the task optimizes 100 materials; for each material, 10 candidates are generated through gradient search in the latent space, and an independently trained property predictor then selects the best of the 10 decoded materials. To achieve this, our code starts from the first batch of the test set, which is configured with a batch size of 100, extracts 10 candidates from the 10 different time ranges, and uses the same property predictor to identify the best one.
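The selection step described above can be sketched as follows. The scores here are random stand-ins for the property predictor's output, and the flat candidate layout (10 blocks of 100 materials, as produced by the concatenation in `optimization.py`) is assumed from the snippet in the question.

```python
import torch

n_materials, n_candidates = 100, 10

# Stand-in predictor scores, one per candidate, in the flat (1000,) layout:
# candidates from the same time range are contiguous blocks of 100.
scores = torch.rand(n_candidates * n_materials)

# Reshape to (10, 100): rows = time ranges, columns = materials.
scores = scores.view(n_candidates, n_materials)

# Best (here: lowest-scoring) time range per material.
best_idx = scores.argmin(dim=0)

# Recover each winner's index in the original flat layout.
flat_idx = best_idx * n_materials + torch.arange(n_materials)
print(flat_idx.shape[0])  # one selected candidate per material -> 100
```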

For more flexible sampling and optimization, we have updated the code in scripts/optimization.py, adding two parameters that can be adjusted by the user.

Hope the above information helps!