Hi,
I have another question: would you mind providing the seed used for splitting the benchmarking datasets (or the splits themselves)?
Also: did you run all the benchmarks of the other models in your paper yourself, or did you take the results from the corresponding papers (Table 1 and Table 2)?
All the benchmarks from MoleculeNet (except for QM9) are split by scaffold, which gives a deterministic split for each dataset, unlike random splitting. For QM9, we didn't fix a specific seed for splitting; each individual run uses a different random seed.
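For reference, a deterministic scaffold split can be sketched as below. This is a minimal stdlib sketch, not the exact implementation used for the paper: it assumes the scaffold key for each molecule (e.g. its Murcko-scaffold SMILES, as computed with RDKit) has already been precomputed, and it fills train/valid/test with whole scaffold groups, largest groups first, which is the usual MoleculeNet-style procedure.

```python
from collections import defaultdict

def scaffold_split(scaffolds, frac_train=0.8, frac_valid=0.1):
    """Deterministic scaffold split.

    `scaffolds` is a list of precomputed scaffold keys, one per
    molecule (e.g. Murcko-scaffold SMILES strings).  Molecules
    sharing a scaffold always land in the same subset, so the
    split needs no random seed.
    """
    # Group molecule indices by their scaffold key.
    groups = defaultdict(list)
    for idx, key in enumerate(scaffolds):
        groups[key].append(idx)

    # Largest scaffold groups first; ties broken deterministically.
    ordered = sorted(groups.values(), key=lambda g: (-len(g), g))

    n = len(scaffolds)
    n_train, n_valid = frac_train * n, frac_valid * n
    train, valid, test = [], [], []
    for g in ordered:
        if len(train) + len(g) <= n_train:
            train.extend(g)
        elif len(valid) + len(g) <= n_valid:
            valid.extend(g)
        else:
            test.extend(g)
    return train, valid, test
```

Because the grouping and ordering are fully deterministic, rerunning this on the same dataset always reproduces the same split, which is why no seed is needed for the scaffold-split benchmarks.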
We take the results from the literature where available and run the experiments ourselves otherwise.
Thanks!