This PR implements EGNN splits (d31d1a0) and introduces several improvements and bug fixes:
- Store the outputs of Slurm jobs in a dedicated `job_outputs` folder: 7078e04, ec14fbe
- Fix a bug that frequently caused the QM9 dataset to be redownloaded: 1fb7cc0
- Fix the energy target scaling bug from EMPSN: 630d4b9, 7ce5934
- Fix the energy target naming bug from EMPSN: 307b4ef
- Adapt the learning rate scheduler to reproduce the double descent behavior of EGNN: 4d7768b, e35e9a5
- Improve the help message of the deprecated `--gradient_clip` argument: ddb1077
- Add two Slurm scripts, one to train in an EGNN-like mode and one for our model: 215b850
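For context on the `job_outputs` change: the exact directives live in 7078e04 and ec14fbe, but redirecting Slurm output to a dedicated folder is typically done through the `#SBATCH --output`/`--error` filename patterns. A hedged sketch (job name, resource lines, and entry point are hypothetical, not the repo's actual script):

```shell
#!/bin/bash
# Hypothetical sbatch header; only the --output/--error lines relate to
# the job_outputs change, the rest is illustrative.
#SBATCH --job-name=egnn_train
#SBATCH --output=job_outputs/%x_%j.out   # %x = job name, %j = job id
#SBATCH --error=job_outputs/%x_%j.err

mkdir -p job_outputs   # Slurm does not create the folder itself
srun python main.py
```

Note that Slurm writes nothing if the target directory is missing, hence the `mkdir -p` before the job body.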
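The target scaling fix itself is in 630d4b9 and 7ce5934; as background, a common scheme for QM9 energy targets is to standardize them with statistics computed on the training split and invert the transform at evaluation time. A minimal sketch (function names are ours, not the repo's):

```python
def standardize(targets):
    """Scale targets to zero mean / unit variance; return stats for inversion."""
    mean = sum(targets) / len(targets)
    std = (sum((t - mean) ** 2 for t in targets) / len(targets)) ** 0.5
    return [(t - mean) / std for t in targets], mean, std

def unstandardize(pred, mean, std):
    """Map a model prediction back to the original target units."""
    return pred * std + mean
```

The important detail (and a typical source of bugs like the one fixed here) is that `mean` and `std` must come from the training split only and be reused, unchanged, for validation and test.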
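The scheduler change is in 4d7768b and e35e9a5. One schedule commonly used in EGNN-style QM9 training is cosine annealing; a minimal sketch, with hypothetical base and minimum rates (the actual values are whatever the commits set), looks like:

```python
import math

def cosine_lr(epoch, total_epochs, base_lr=5e-4, min_lr=1e-6):
    """Cosine-annealed learning rate: decays from base_lr to min_lr
    over total_epochs. Values here are illustrative, not the repo's."""
    t = epoch / max(total_epochs - 1, 1)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))
```

In a PyTorch setup this would usually be handled by a built-in scheduler such as `torch.optim.lr_scheduler.CosineAnnealingLR` rather than a hand-rolled function.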
At this stage, we are able to closely replicate EGNN on the H target. We are waiting for some training runs to finish to see whether this holds for all targets; if so, we will move forward with more experiments involving TEN models.