cansyl / DEEPScreen

DEEPScreen: Virtual Screening with Deep Convolutional Neural Networks Using Compound Images

Possible overfitting on the test set #13

Closed diliadis closed 3 years ago

diliadis commented 3 years ago

Hello,

I was going over the code and noticed something strange in train_deepscreen.py. More specifically, I believe there is a problem on line 172:

[screenshot: train_deepscreen.py with line 172 highlighted]

The code checks the performance on both the validation and the test set at every training epoch and keeps the epoch with the highest test-set Matthews correlation coefficient (MCC). The final performance the model prints is therefore the best possible test-set performance, which suggests that the reported results overfit the test set.
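For reference, the pattern I would have expected is to select the checkpoint on the validation set only and report the test MCC of that checkpoint. A rough sketch (`train_one_epoch` and `evaluate` are hypothetical helpers, not functions from this repository):

```python
# Minimal sketch of validation-based model selection (not the repository's code).
# evaluate() is assumed to return (y_true, y_pred) label lists.
from sklearn.metrics import matthews_corrcoef

def select_and_report(model, train_loader, val_loader, test_loader,
                      train_one_epoch, evaluate, n_epochs=100):
    best_val_mcc = float("-inf")
    test_mcc_at_best_val = None
    for epoch in range(n_epochs):
        train_one_epoch(model, train_loader)
        val_true, val_pred = evaluate(model, val_loader)
        val_mcc = matthews_corrcoef(val_true, val_pred)
        if val_mcc > best_val_mcc:  # selection uses the validation set only
            best_val_mcc = val_mcc
            test_true, test_pred = evaluate(model, test_loader)
            test_mcc_at_best_val = matthews_corrcoef(test_true, test_pred)
    # the number to report: test MCC at the validation-selected epoch
    return best_val_mcc, test_mcc_at_best_val
```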

I am wondering about the rationale behind the choice, so I would appreciate it if you could share more info.

Best, Dimitrios

ahmetrifaioglu commented 3 years ago

Dear Dimitrios, thank you for your interest in DEEPScreen and for pointing out this issue. Yes, this should be corrected, as it may lead to overfitting. The PyTorch implementation of DEEPScreen is new and still under development. This was just a code hack of mine for datasets that have only test samples and no validation set. I forgot to change it; sorry for the inconvenience. I will upload the corrected version.

The original DEEPScreen implementation (the paper version) was built on tflearn. It does not have this issue, so you can have a look at the link below:

https://github.com/cansyl/DEEPScreen/blob/master/bin/trainDEEPScreen.py

We are planning to release the PyTorch version soon. In its current state, the datasets have been updated (ChEMBL v28), different bioactivity thresholds are being tried, and an initial implementation with a simple CNN model has been completed. However, it is still under development, and another colleague is now working on these improvements. We are planning to release the new implementation, the models, and hopefully a web server soon.

-- Ahmet

diliadis commented 3 years ago

Dear Ahmet, thank you for the quick response and the clarification. I am quite interested in a multi-task formulation of the problem (basically treating the different protein targets as tasks).

[figure: the current per-target splits (left) vs. a combined multi-task dataset split at the compound level (right)]

By doing that, I can compare the single-task models (what is done in this implementation and the corresponding paper) with an approach that trains on all targets simultaneously. The current train-val-test splits that you provide have compound overlaps between the train and test sets of different targets (for example, a compound can be in the training set of target A and in the test set of target B). So, if I build a multi-task version of the dataset naively, I end up with compounds that show up in both the train and test sets. Does it make sense to instead create the dataset by combining all the interactions and then splitting randomly at the compound level (see right figure)? A sketch of what I mean is below.
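Just to make the question concrete, here is a rough sketch of the split I have in mind (the pooled interaction table, its file name, and the `compound_id` column are assumptions about the data layout, not the repository's actual format):

```python
# Compound-level split: pool every (compound, target, label) interaction,
# then split on compound IDs so no compound appears in more than one
# partition -- the test compounds are therefore completely unseen.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

interactions = pd.read_csv("all_interactions.csv")  # hypothetical pooled table

# 80/20 split into (train+val) and test, grouped by compound
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
trainval_idx, test_idx = next(gss.split(interactions,
                                        groups=interactions["compound_id"]))
trainval = interactions.iloc[trainval_idx]
test = interactions.iloc[test_idx]

# carve a validation set out of the remaining compounds (0.125 * 0.8 = 0.1)
gss_val = GroupShuffleSplit(n_splits=1, test_size=0.125, random_state=42)
train_idx, val_idx = next(gss_val.split(trainval,
                                        groups=trainval["compound_id"]))
train, val = trainval.iloc[train_idx], trainval.iloc[val_idx]

# sanity check: no compound leaks across partitions
assert set(train["compound_id"]).isdisjoint(test["compound_id"])
assert set(train["compound_id"]).isdisjoint(val["compound_id"])
```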

Thanks, Dimitrios

ahmetrifaioglu commented 3 years ago

Dear Dimitrios,

Yes, this makes sense and it is one way to go.

Another way would be to hold out a set of compounds as the test set and use them only for testing, to see whether the model can predict completely unseen compounds.

Best