wengong-jin / icml18-jtnn

Junction Tree Variational Autoencoder for Molecular Graph Generation (ICML 2018)
MIT License

RuntimeError in molopt/pretrain.py #24

Closed MinkyuHa closed 5 years ago

MinkyuHa commented 6 years ago

Dear Wengong Jin,

I'd like to ask for your help with molopt while running pretrain.py. I have successfully run all of the examples in molopt with data/train.txt, data/vocab.txt, and data/train.logP-SA.

However, a RuntimeError occurred with my own training dataset, a vocabulary generated with python ../jtnn/mol_tree.py < my_dataset.txt, and my own logP property file.

It seems to be a dimension mismatch during node aggregation. What is your opinion on this issue?

Best Regards, Minkyu Ha

(My environment is the same as yours: Python 2.7, CUDA 8.0, PyTorch 0.3.1.)

```
Model #Params: 4271K
Traceback (most recent call last):
  File "pretrain.py", line 69, in <module>
    loss, kl_div, wacc, tacc, sacc, dacc, pacc = model(batch, beta=0)
  File "/home/minkyuha/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtprop_vae.py", line 76, in forward
    tree_mess, tree_vec, mol_vec = self.encode(mol_batch)
  File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtprop_vae.py", line 57, in encode
    tree_mess, tree_vec = self.jtnn(root_batch)
  File "/home/minkyuha/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtnn_enc.py", line 62, in forward
    cur_h_nei = torch.cat(cur_h_nei, dim=0).view(-1, MAX_NB, self.hidden_size)
RuntimeError: invalid argument 2: size '[-1 x 8 x 420]' is invalid for input with 144900 elements at /opt/conda/conda-bld/pytorch_1523240155148/work/torch/lib/TH/THStorage.c:37
```
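For what it's worth, 144900 elements divided by hidden_size 420 gives 345 neighbor message vectors, and 345 is not a multiple of MAX_NB = 8, which is presumably why the view fails. A minimal sketch that reproduces just the shape mismatch outside the model, using only the sizes from the traceback above:

```python
import torch

# 345 neighbor message vectors of size 420 -> 345 * 420 = 144900 elements
cur_h_nei = torch.zeros(345, 420)

# Same reshape as jtnn_enc.py line 62; fails because 345 is not a multiple of MAX_NB = 8,
# i.e. at least one tree node appears to have more than MAX_NB neighbors.
cur_h_nei.view(-1, 8, 420)  # raises the RuntimeError shown above
```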

XiuHuan-Yap commented 5 years ago

Hi @MinkyuHa, try increasing the MAX_NB global parameter in jtnn_dec.py and jtnn_enc.py.

I increased it from 8 to 32. Note that this will increase GPU memory usage.
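To pick a value that is large enough, here is a minimal sketch (not part of the repo) that scans a SMILES file and reports the largest junction-tree node degree; it assumes the jtnn package from this repo is on the path and that MolTree objects expose a nodes list whose elements have a neighbors list, as in jtnn/mol_tree.py. MAX_NB needs to be at least the reported maximum.

```python
# Hypothetical helper, e.g. check_max_nb.py: report the largest junction-tree node
# degree in a dataset so MAX_NB can be set accordingly.
# Assumes MolTree objects expose .nodes, each with a .neighbors list (jtnn/mol_tree.py).
import sys

from jtnn import MolTree

max_deg = 0
with open(sys.argv[1]) as f:  # one SMILES string per line, e.g. my_dataset.txt
    for line in f:
        smiles = line.strip()
        if not smiles:
            continue
        tree = MolTree(smiles)
        for node in tree.nodes:
            max_deg = max(max_deg, len(node.neighbors))

print("max junction-tree node degree: %d" % max_deg)
```

Run it as python check_max_nb.py my_dataset.txt (the script name is just an example); if the reported maximum exceeds your new MAX_NB, increase it further, at the cost of more GPU memory as noted above.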

minstar commented 4 years ago

I've got the same issue when I tried to start training with my own dataset. However, increasing the MAX_NB global parameter doesn't work in my case. Has anyone solved this issue in another way?