Hi,
I want to run the sample script on the provided model under the fast_molvae directory, but the vocabulary in the moses directory does not seem to match the provided model, and I get the error below. The checkpoint's learned embedding has 531 rows, which suggests the provided model was trained on a vocab of size 531, while the vocab file I'm using produces a model of size 800.
I'm not sure the vocabulary is really the cause, but if it is, could you provide the matching vocab file?
Thanks.
le.py", line 845, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for JTNNVAE:
size mismatch for jtnn.embedding.weight: copying a param with shape torch.Size([531, 450]) from checkpoint, the shape in current model is torch.Size([800, 450]).
size mismatch for decoder.embedding.weight: copying a param with shape torch.Size([531, 450]) from checkpoint, the shape in current model is torch.Size([800, 450]).
size mismatch for decoder.W_o.bias: copying a param with shape torch.Size([531]) from checkpoint, the shape in current model is torch.Size([800]).
size mismatch for decoder.W_o.weight: copying a param with shape torch.Size([531, 450]) from checkpoint, the shape in current model is torch.Size([800, 450]).
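For reference, the mismatch above can be reproduced in isolation with two embeddings of the sizes shown in the traceback (531 vs. 800 vocab entries, hidden size 450). This is just a minimal sketch assuming PyTorch, not the actual JTNNVAE code:

```python
import torch
import torch.nn as nn

# Embedding saved in the provided checkpoint: vocab of 531, hidden size 450.
saved = nn.Embedding(531, 450)

# Embedding built from the current vocab file: 800 entries.
current = nn.Embedding(800, 450)

# Loading the 531-row weights into the 800-row model fails,
# because load_state_dict requires matching tensor shapes.
try:
    current.load_state_dict(saved.state_dict())
    mismatch_raised = False
except RuntimeError as e:
    mismatch_raised = "size mismatch" in str(e)

print(mismatch_raised)
```

So the fix is either to obtain the vocab file the checkpoint was trained on (531 entries) or to retrain the model on the current 800-entry vocab.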