atomistic-machine-learning / schnetpack-gschnet

G-SchNet extension for SchNetPack

Center positions on focus during generation #1

Closed · Jiaran closed this issue 2 years ago

Jiaran commented 2 years ago

Thanks for sharing your great work!

I'm confused about the generation logic. When generating, the code centers the positions on the focus at every step: `R[mol_mask, :_i] -= R_focus[:, None]  # center positions on focus`

However, during training, the origin is the center of the whole point cloud. Usually we want to reduce the gap between training and sampling by centering the positions the same way, i.e., on the center of the point cloud, during generation as well.
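Schematically, the two conventions look like this (a minimal sketch with made-up names, not the repo's actual code):

```python
import torch

# Illustrative only: the two centering conventions described above.
R = torch.randn(9, 3)          # positions of a (partially generated) molecule
focus = 4                      # index of the current focus atom (hypothetical)

R_train = R - R.mean(dim=0)    # training: origin at the center of the point cloud
R_gen = R - R[focus]           # generation: origin at the focus atom
```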

Is it because SchNet is translation invariant, so this mismatch won't harm performance?

NiklasGebauer commented 2 years ago

You're welcome! Thanks for using our code base.

Yes, exactly: since SchNet is invariant to translation, shifting the absolute positions does not change the extracted features or the predictions in any way. Most importantly, the auxiliary token for the center of mass has the same meaning during training and generation (it is the center of mass of the final structure).
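As a quick sanity check (a minimal sketch, not part of the repository), the pairwise distances that SchNet's filters operate on are unchanged by a global shift of all positions:

```python
import torch

R = torch.randn(5, 3)      # atom positions of one molecule
shift = torch.randn(3)     # arbitrary global translation, e.g. onto the focus atom

# Pairwise distances before and after shifting every atom by the same vector.
dists = torch.cdist(R, R)
dists_shifted = torch.cdist(R + shift, R + shift)

assert torch.allclose(dists, dists_shifted, atol=1e-6)  # identical features
```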

The reason for centering structures on the focus token during generation is that it simplifies the evaluation of the 3D grid, which is (by design) also centered on the focus. Accordingly, we can easily broadcast a single representation of the grid over the whole batch. If we instead centered the positions on the origin during generation, we would need to shift the 3D grid positions differently for each molecule in the batch, since their focus atoms are at different positions. This is of course also possible and would lead to the same results, but it is less convenient to implement.
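To make the convenience concrete, here is a hedged sketch (all names are hypothetical; this is not the actual implementation): with focus-centered positions, one grid tensor serves the whole batch, whereas origin-centered positions would require a per-molecule shift of the grid.

```python
import torch

B, G = 16, 1000                    # batch size, number of grid points (made up)
grid = torch.randn(G, 3)           # one 3D grid centered on the origin (= focus)

# Focus-centered convention: broadcast the single grid over the whole batch.
grid_batch = grid.unsqueeze(0).expand(B, G, 3)

# Origin-centered convention: shift the grid to each molecule's focus position.
R_focus = torch.randn(B, 3)        # focus atom position of each molecule
grid_batch_shifted = grid[None, :, :] + R_focus[:, None, :]
```

Both variants evaluate the same grid points around each focus atom; the first simply avoids the per-molecule shift.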