Closed mikelee-dev closed 1 year ago
Hi, thanks for your interest! What we called 3 layers in the paper is what's already in the repository, since we include the "Input" layer in that count, and the version in the repository is what we used for the experiments. Although you're right that there are not three 1,000-dimensional layers... apologies for the confusion! I guess that was an oversight when describing our model.
It sounds like adding an additional layer was helpful, though, which is good to know. I appreciate your effort in trying to improve the property predictor, and please keep me updated if you find any additional improvements; I'd be very interested to know!
Thanks for the quick feedback! I will keep you in the loop in case any substantial improvements are made.
Hi!
I was wondering if the default hyperparameters specified in the repository are the ones you determined after hyperparameter optimization. The LIMO paper states that hyperparameter optimization was performed for the property predictor that predicts penalized logP, and that the result was reused for all other property predictors. I would like to double-check because the paper also says the property predictors all have three linear 1,000-dimensional layers, while in the repository there are two 1,000-dimensional layers.
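To make the discrepancy concrete, here is a minimal NumPy sketch of the two readings as plain MLPs over the VAE latent vector. The latent dimension, the use of ReLU, and the random initialization are my own assumptions for illustration, not details taken from the repository.

```python
import numpy as np

def mlp_forward(x, dims, seed=0):
    """Forward pass through a stack of linear layers with ReLU between
    them (no activation after the final layer). Weights are random
    placeholders just to check shapes."""
    rng = np.random.default_rng(seed)
    for i, (d_in, d_out) in enumerate(zip(dims[:-1], dims[1:])):
        W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
        x = x @ W
        if i < len(dims) - 2:
            x = np.maximum(x, 0.0)  # ReLU between hidden layers
    return x

latent_dim = 1024  # hypothetical; use the repo's actual VAE latent size

# Repository version: two 1,000-dim hidden layers -> scalar property
repo_dims = [latent_dim, 1000, 1000, 1]
# Paper wording read literally: three 1,000-dim hidden layers
paper_dims = [latent_dim, 1000, 1000, 1000, 1]

z = np.zeros((4, latent_dim))  # a dummy batch of 4 latent vectors
print(mlp_forward(z, repo_dims).shape)   # (4, 1)
print(mlp_forward(z, paper_dims).shape)  # (4, 1)
```

Both variants map a latent vector to a single scalar; the only difference is one extra 1,000-dimensional hidden layer.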
To match the publication, I added one more 1,000-dimensional linear layer and retrained the property predictor for penalized logP on 100,000 molecules, using the current default hyperparameters specified in the repository, and achieved an R-value of 0.58.
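For reference, I'm computing the R-value as the Pearson correlation between predicted and true property values on the held-out set; a minimal self-contained version (the function name is my own, not from the repo):

```python
import numpy as np

def r_value(y_true, y_pred):
    """Pearson correlation coefficient between targets and predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    yt = y_true - y_true.mean()
    yp = y_pred - y_pred.mean()
    return float((yt @ yp) / np.sqrt((yt @ yt) * (yp @ yp)))

# Tiny worked example with made-up numbers:
print(round(r_value([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]), 3))  # → 0.991
```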
Basically, I would like to work on improving the property predictors, so I just wanted to double-check whether this is the expected performance for the current baseline model before trying any further hyperparameter or model architecture changes.