Jialiang-Lu opened 4 years ago
Hi, thanks for asking! We used the Adam optimizer with the default parameters from Keras's API (learning rate = 0.001, beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-7). The batch size was 32. I no longer have the exact number of epochs, but anything below 200 should suffice. As far as I remember, we did not add a regularization term to the loss function.
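For anyone replicating this: the hyperparameters quoted above are exactly Keras's defaults, so in Keras you can simply pass `optimizer="adam"` (or `tf.keras.optimizers.Adam()`) and `batch_size=32` to `model.fit`. As a sanity check, here is a minimal NumPy sketch of the single-parameter Adam update rule using those same default values — the function name and structure are my own illustration, not code from this repository:

```python
import numpy as np

def adam_step(theta, grad, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    """One Adam update step with the Keras-default hyperparameters
    mentioned in the reply (lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7).

    theta: parameter value, grad: its gradient,
    m/v: first/second moment estimates, t: 1-based step counter.
    """
    # Update biased moment estimates.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias-correct them (important for the first few steps).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Apply the update.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Example: a single step from theta=1.0 with gradient 1.0
# moves the parameter by roughly the learning rate (0.001).
theta, m, v = adam_step(theta=1.0, grad=1.0, m=0.0, v=0.0, t=1)
```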
Hi,
Really impressive work, and many thanks for making it open source. I was trying to replicate your model by re-training it on another dataset, but I never reached performance comparable to your published pre-trained weights. While I understand that you might not be able to share your training data, could you please reveal some of the hyperparameters you used for training, e.g. learning rate, optimizer, batch size, epochs, regularization, etc.? (The augmentation was kindly described in the paper.)
Thank you