LCHCurtis opened 5 years ago
No. We only keep 20% of the original training set as a validation set, then use grid search to find the best hyperparameters within a certain range. You can easily implement it yourself.
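A minimal sketch of that hold-out grid search. The toy data, the `C` grid, and the scikit-learn classifier are illustrative stand-ins, not the repo's actual model or hyperparameter ranges:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in data; iDeepE would use encoded RNA sequences instead.
rng = np.random.RandomState(0)
X = rng.randn(500, 10)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Hold out 20% of the training set as a validation set.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Candidate hyperparameter values (illustrative range).
C_grid = [0.01, 0.1, 1.0, 10.0]

best_score, best_params = -1.0, None
for C in C_grid:
    clf = LogisticRegression(C=C).fit(X_tr, y_tr)
    score = clf.score(X_val, y_val)  # validation accuracy for this setting
    if score > best_score:
        best_score, best_params = score, {"C": C}

print(best_params, round(best_score, 3))
```

The best setting found on the validation split would then be used to retrain on the full training set.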
Thank you. I have another question. After I train the model, I run prediction multiple times on the same dataset, but it always gives different AUCs. Could you explain why this happens? Sorry, I am a beginner in this field.
Could I ask how big the difference is? It may be because the random seed is not fixed.
For the ALKBH5 dataset, I ran training once, then fed the same testing dataset to the same model multiple times, and got AUCs ranging from 0.67 to 0.70.
Can you add `np.random.seed(0)` and `torch.manual_seed(0)` before `run_ideepe(args)` in the file to fix the seed, then retrain the model and run prediction again?
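For reference, a standalone sketch of the seeding fix (`run_ideepe(args)` itself lives in the repo's ideepe.py and is not reproduced here). Once both seeds are set, repeated runs draw identical random numbers, so anything that depends on random state, such as weight initialization or shuffling, becomes reproducible:

```python
import numpy as np
import torch

# Fix both RNG sources before training/prediction so that weight
# initialization, data shuffling, and any stochastic layers repeat exactly.
np.random.seed(0)
torch.manual_seed(0)

# Demonstration: reseeding the torch generator gives back the same draws.
torch.manual_seed(0)
a = torch.randn(3)
torch.manual_seed(0)
b = torch.randn(3)
print(torch.equal(a, b))  # prints True
```

With the seeds fixed, repeated predictions on the same test set should yield the same AUC.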
Problem solved. Thank you so much!
Does ideepe.py implement cross-validation to fine-tune the hyperparameters?