Closed GregorySchwing closed 11 months ago
My guess is that with that number of atoms in a small box there is so much density overlap that the density can be approximated well using coordinates different from the true ones. It might be instructive to look at individual elements of the batch instead of the entire batch.
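A small, library-free illustration of that point (this is not the libmolgrid API; the 1-D Gaussian density here is a stand-in for the 3-D atom-type grids): when atoms are crowded together, two visibly different coordinate sets can render to nearly identical density grids, so grid-space loss alone cannot distinguish them.

```python
import numpy as np

def density(coords, grid_pts, sigma=1.0):
    # Sum of unit Gaussians centered at each coordinate, sampled on grid_pts.
    d = grid_pts[None, :] - np.asarray(coords)[:, None]
    return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=0)

grid = np.linspace(-5, 5, 101)

# Two different coordinate sets, both crowded into a fraction of one sigma...
true_coords = [-0.3, 0.0, 0.3]
alt_coords = [-0.2, -0.1, 0.3]

g_true = density(true_coords, grid)
g_alt = density(alt_coords, grid)

# ...produce grids that differ by only a small fraction of the peak density.
rel_err = np.abs(g_true - g_alt).max() / g_true.max()
print(f"max relative grid difference: {rel_err:.4f}")
```

With 64 atoms in a small box the effect is far stronger, which is why per-example inspection is more informative than batch-level plots.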
I was misunderstanding what this example did. I realize now that it maps the original coordinates to a latent space that renders to the same grid as the original coordinates.
I'm looking for functionality that transforms a grid and then extracts coordinates from it. Is this what happens in the train_basic_CNN_with_PyTorch example? https://gnina.github.io/libmolgrid/tutorials/train_basic_CNN_with_PyTorch.html
I imagine this problem had to be solved for gnina, but since all of that code is C++, I don't think it would be easy for me to find the relevant parts.
That sort of functionality isn't in molgrid, but it is in LiGAN, which is in python: https://github.com/mattragoza/LiGAN
thank you very much
I am interested in using the libmolgrid functionality to sample poses using PyTorch. For now, I am trying to generalize the "train_simple_cartesian_reduction" example to multiple atoms and multiple types. I have attached a Jupyter notebook in which all I did was change num_atoms to 64. The true and predicted plots are shown; they are clearly wrong, but I'm not sure why.
https://github.com/GregorySchwing/libmolgrid/blob/development/tutorials/simple_cartesian_multiatom.ipynb
Is there an example that does this or can you suggest a fix?
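For what it's worth, here is a hedged, library-free sketch (numpy, 1-D) of the idea behind that example as I understand it: render predicted coordinates to a Gaussian density grid, then gradient-descend on the grid-space MSE to recover target coordinates. The real tutorial uses libmolgrid and PyTorch autograd in 3-D with per-type channels; every name and constant below is an illustrative stand-in, not the libmolgrid API.

```python
import numpy as np

GRID = np.linspace(-6, 6, 121)   # 1-D stand-in for the 3-D grid
SIGMA = 1.0                      # stand-in for the per-type atomic radius

def render(coords):
    # Density grid: sum of unit Gaussians centered on each coordinate.
    d = GRID[None, :] - coords[:, None]           # (n_atoms, n_grid)
    return np.exp(-0.5 * (d / SIGMA) ** 2).sum(axis=0)

def mse_and_grad(pred, target_grid):
    d = GRID[None, :] - pred[:, None]
    g = np.exp(-0.5 * (d / SIGMA) ** 2)           # per-atom densities
    resid = g.sum(axis=0) - target_grid           # (n_grid,)
    loss = (resid ** 2).mean()
    # d(density_i)/d(coord_i) = g_i * d_i / sigma^2, chained through the MSE
    grad = 2.0 * (g * d / SIGMA**2 * resid[None, :]).mean(axis=1)
    return loss, grad

rng = np.random.default_rng(0)
true_coords = np.array([-2.0, 0.5, 2.5])
target = render(true_coords)

pred = true_coords + rng.normal(scale=0.5, size=3)  # perturbed starting guess
for _ in range(2000):
    loss, grad = mse_and_grad(pred, target)
    pred -= 5.0 * grad                              # plain gradient descent

print("recovered:", np.round(np.sort(pred), 3), "loss:", loss)
```

With 3 well-separated atoms this converges to the true coordinates; with 64 atoms in a small box the loss surface has many near-equivalent minima (the overlap problem described above), so a good fit in grid space need not mean correct coordinates.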