molecularsets / moses

Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models
https://arxiv.org/abs/1811.12823
MIT License

VAE enable annealing teacher forcing probability during training #7

Open lilleswing opened 5 years ago

lilleswing commented 5 years ago

The VAE doesn't have teacher forcing. Teacher forcing is really needed for larger molecules.

Original Code https://github.com/aspuru-guzik-group/chemical_vae/blob/master/chemvae/tgru_k2_gpu.py

Moses Code https://github.com/molecularsets/moses/blob/master/moses/vae/model.py#L114-L147

lilleswing commented 5 years ago

IBM has a pretty good example https://github.com/IBM/pytorch-seq2seq/blob/master/seq2seq/models/DecoderRNN.py#L108-L164

danpol commented 5 years ago

@lilleswing, the MOSES code you've linked already implements teacher forcing. Did you mean that we should add free running for training?

lilleswing commented 5 years ago

Yes, I misread the code.
It is missing annealing of the teacher forcing probability during training (though that was not a component of the initial paper). The initial paper always used teacher forcing during training and free running during sampling, so adding annealing would be an improvement over the paper's implementation.
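
For reference, here is a minimal sketch of what annealing the teacher forcing probability (scheduled sampling) could look like. The `ScheduledSamplingDecoder` class, the `teacher_forcing_ratio` argument, and the `anneal_ratio` schedule are illustrative names chosen for this example, not part of the MOSES implementation:

```python
import random

import torch
import torch.nn as nn


class ScheduledSamplingDecoder(nn.Module):
    """Toy GRU decoder: feeds the ground-truth token with probability
    `teacher_forcing_ratio` and its own predicted token otherwise."""

    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, targets, h0, teacher_forcing_ratio=1.0):
        # targets: (batch, seq_len) token ids starting with <bos>
        # h0:      (1, batch, hidden_dim) initial hidden state (e.g. from z)
        inp = targets[:, :1]              # the first input is always <bos>
        h = h0
        logits = []
        for t in range(1, targets.size(1)):
            step_out, h = self.gru(self.embedding(inp), h)
            step_logits = self.out(step_out)              # (batch, 1, vocab)
            logits.append(step_logits)
            if random.random() < teacher_forcing_ratio:
                inp = targets[:, t:t + 1]                 # teacher forcing
            else:
                inp = step_logits.argmax(dim=-1)          # free running
        return torch.cat(logits, dim=1)                   # (batch, seq_len - 1, vocab)


def anneal_ratio(epoch, n_epochs, start=1.0, end=0.0):
    """Linearly anneal the teacher forcing probability from `start` to `end`."""
    progress = min(epoch / max(n_epochs - 1, 1), 1.0)
    return start + (end - start) * progress
```

During training one would call something like `decoder(targets, h0, teacher_forcing_ratio=anneal_ratio(epoch, n_epochs))`, so early epochs behave like the current teacher-forced training and later epochs approach free running.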

danpol commented 5 years ago

Yes, we'll add free running soon. It will probably be listed as a separate model in the metrics table.

liujunhongznn commented 4 years ago

Have you ever tested the reconstruction accuracy of the VAE model? I tested it and the performance is very bad. Here is my testing code; is there any problem? Thanks!

```python
import random

import pandas as pd
import torch
from tqdm import tqdm

# VAE and get_parser are part of the MOSES code base; the exact import
# paths may differ between versions of the repository.
from moses.vae import VAE
from moses.vae.config import get_parser


def read_smiles_csv(path):
    return pd.read_csv(path, usecols=['SMILES'],
                       squeeze=True).astype(str).tolist()


if __name__ == '__main__':
    parser = get_parser()
    config = parser.parse_known_args()[0]
    device = torch.device(config.device)

    if device.type.startswith('cuda'):
        torch.cuda.set_device(device.index or 0)

    model_config = torch.load(config.config_save)
    model_vocab = torch.load(config.vocab_save)
    model_state = torch.load(config.model_save)

    model = VAE(model_vocab, model_config)
    model.load_state_dict(model_state)
    model = model.to(device)
    model.eval()

    test_data_path = 'train.csv'
    test_data = random.sample(read_smiles_csv(test_data_path), 100)
    NUM_DEC = 500
    num = 0

    for ech in tqdm(test_data):
        tensors = [model.string2tensor(ech.strip(), device=device)]
        z_vecs, _ = model.forward_encoder(tensors)
        res_lst = []
        for i in tqdm(range(NUM_DEC)):
            res = model.sample(n_batch=z_vecs.size(0), z=z_vecs)
            res_lst.extend(res)
        # check all decoded samples, not only the last batch
        if ech in res_lst:
            num += 1
        print("recons num: ", num)
    print("reconstruct acc: ", num * 1.0 / len(test_data))
```

danpol commented 4 years ago

Hi, @liujunhongznn!

The low reconstruction quality is due to posterior collapse, which frequently happens in VAEs. Since the goal of MOSES is to model the data distribution as well as possible, posterior collapse is acceptable for this task. If you want to obtain meaningful latent codes, try reducing the KL divergence weight.
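
For illustration, here is a hedged sketch of where the KL weight enters the objective; `vae_loss`, `kl_annealer`, and the padding index are assumptions made for this example, not the exact MOSES code:

```python
import torch
import torch.nn.functional as F


def vae_loss(logits, targets, mu, logvar, kl_weight):
    """Reconstruction loss plus a weighted KL term.

    A smaller `kl_weight` keeps more information in the latent code
    (less posterior collapse) at the cost of a worse fit to the prior.
    """
    recon = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=0,  # assumes 0 is the padding index
    )
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl


def kl_annealer(step, warmup_steps, w_start=0.0, w_end=1.0):
    """Linear KL warm-up: start near zero weight and grow it during training."""
    progress = min(step / max(warmup_steps, 1), 1.0)
    return w_start + (w_end - w_start) * progress
```

Keeping `kl_weight` small (or warming it up slowly) leaves more information in the latent code and tends to improve reconstruction, at the cost of samples drawn from the prior matching the data distribution less well.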

bokertof commented 4 years ago

@danpol Hello! Can you help me with the VAE? I'm mixed up. As you mentioned above, this VAE implementation does use the teacher forcing approach, but I don't see any decoding loop (except the sampling mode for generating SMILES). Am I right that it is literally trained with a teacher forcing ratio of 1, since we never pass previously predicted tokens back in (as in seq2seq models)?

danpol commented 4 years ago

Hi, @bokertof! The VAE in MOSES uses teacher forcing: we pass the correct token, not the sampled one.
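
To make this concrete, here is a toy sketch (not the MOSES code; layer sizes and names are arbitrary) of why no explicit decoding loop is needed when the ground-truth tokens are always the decoder inputs: the whole shifted target sequence can be fed to the RNN in a single call.

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden_dim = 30, 64, 128
embedding = nn.Embedding(vocab_size, emb_dim)
gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
out = nn.Linear(hidden_dim, vocab_size)

targets = torch.randint(0, vocab_size, (8, 20))  # (batch, seq_len) token ids

inputs = targets[:, :-1]      # tokens fed to the decoder (always ground truth)
labels = targets[:, 1:]       # tokens the decoder must predict

hidden_states, _ = gru(embedding(inputs))   # one call over the whole sequence
logits = out(hidden_states)                 # (batch, seq_len - 1, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), labels.reshape(-1))
```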

bokertof commented 4 years ago

@danpol OK, I got it. Can you tell me the reason not to use the sampled tokens as input? I'm trying to implement a similar network and ran into an issue where a model fed its previously predicted tokens doesn't learn at all.

danpol commented 4 years ago

If you feed sampled tokens, you have to propagate the gradient through the sampling step (e.g., with REINFORCE), which has notoriously high variance. You could use variance reduction techniques, but that takes the model far from being a simple "baseline".
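
For context, a minimal sketch of the problem and of the REINFORCE-style workaround mentioned above; the reward and baseline below are placeholders for illustration, not anything from MOSES:

```python
import torch

# Sampling a discrete token blocks ordinary backpropagation:
# torch.multinomial is not differentiable, so no gradient reaches the logits
# through the sampled token itself.
logits = torch.randn(4, 30, requires_grad=True)   # (batch, vocab)
probs = torch.softmax(logits, dim=-1)
tokens = torch.multinomial(probs, num_samples=1)  # no gradient flows back here

# REINFORCE sidesteps this by weighting log-probabilities with a reward:
# grad E[R] = E[(R - baseline) * grad log p(token)], which is unbiased but
# has notoriously high variance unless a good baseline is subtracted.
log_p = torch.log(probs.gather(1, tokens)).squeeze(1)  # log p(sampled token)
reward = torch.randn(4)        # placeholder per-sample reward
baseline = reward.mean()       # simplest variance-reduction baseline
loss = -((reward - baseline).detach() * log_p).mean()
loss.backward()                # now logits.grad is populated
```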

bokertof commented 4 years ago

Thank you so much!