Reimplementation of the polarizable atom interaction neural network (PaiNN) in JAX. Original work by Kristof Schütt, Oliver Unke and Michael Gastegger.
```
pip install git+https://github.com/gerkone/painn-jax.git
```

Or clone this repository and build locally:

```
git clone https://github.com/gerkone/painn-jax
cd painn-jax
python -m pip install -e .
```
To upgrade JAX to the GPU version:

```
pip install --upgrade "jax[cuda]==0.4.8" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```
This implementation is validated on QM9 on the dipole moment target; the results are shown in the table below. The timings were re-measured on a single GPU (Quadro RTX 4000) and are compared to those reported in the original paper.
| | MSE | Inference [ms] |
|---|---|---|
| torch (original) | 0.012 | 163.23 |
| jax (ours) | 0.014 | 8.42* |

\* padded (naive)
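The "padded (naive)" footnote refers to the fact that `jax.jit` recompiles for every new input shape, so variable-size molecules are naively padded to a fixed node count before timing, keeping a single compiled kernel. A minimal sketch of this idea, using only NumPy (the helper name `pad_graph` and its signature are illustrative, not the library API):

```python
import numpy as np

def pad_graph(positions, charges, max_nodes):
    """Naively pad a molecule's node arrays to a fixed size.

    jax.jit specializes on array shapes, so padding every molecule to
    the same node count avoids recompilation per molecule.
    (Hypothetical helper; names are illustrative, not painn-jax API.)
    """
    n = positions.shape[0]
    if n > max_nodes:
        raise ValueError("molecule larger than padding size")
    pad = max_nodes - n
    padded_pos = np.concatenate([positions, np.zeros((pad, 3))], axis=0)
    padded_z = np.concatenate([charges, np.zeros(pad, dtype=charges.dtype)])
    # Boolean mask marking real vs. padding nodes, used to zero out
    # contributions from padding atoms in readout/pooling.
    mask = np.concatenate([np.ones(n, dtype=bool), np.zeros(pad, dtype=bool)])
    return padded_pos, padded_z, mask

# Example: a 3-atom molecule (water) padded to 29 nodes,
# the largest molecule size in QM9.
pos = np.random.rand(3, 3)
z = np.array([8, 1, 1])
p_pos, p_z, mask = pad_graph(pos, z, 29)
print(p_pos.shape, p_z.shape, int(mask.sum()))  # (29, 3) (29,) 3
```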
NOTE: The validation script is quite convoluted, since it relies on the QM9 dataset loader from schnetpack. It is included only to compare the performance of the two implementations.
This implementation of PaiNN itself is almost a minimal translation of the official torch implementation included in schnetpack. In schnetpack, however, the torch model is wrapped in a `NeuralNetworkPotential` for readout and pooling. Here the model is self-contained in the `painn_jax.PaiNN` class, meaning that readout/pooling is parametric and is done directly inside PaiNN.