danasko opened this issue 5 years ago
Hi @dandys2, in my experience different DL frameworks usually lead to different results even with the same parameter values. So my suggestion is to explore different values for the optimization parameters, and also for dropout, to reduce overfitting, which may be what is happening here.
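For what it's worth, a minimal sketch of sweeping dropout and L2 strength in Keras is below; the model builder and the values are purely illustrative stand-ins, not the architecture from the paper:

```python
# Illustrative sketch only: sweep dropout rate and L2 strength to probe
# overfitting. The toy model and hyperparameter grid are assumptions.
import itertools
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_model(dropout_rate, l2_strength, input_shape=(100, 100, 1)):
    """Toy convolutional regressor standing in for the actual network."""
    reg = regularizers.l2(l2_strength)
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", kernel_regularizer=reg),
        layers.MaxPooling2D(),
        layers.Dropout(dropout_rate),
        layers.Flatten(),
        layers.Dense(64, activation="relu", kernel_regularizer=reg),
        layers.Dropout(dropout_rate),
        layers.Dense(45),  # e.g. 15 joints x 3 coordinates (assumed output)
    ])

for dropout_rate, l2_strength in itertools.product([0.2, 0.3, 0.5], [1e-4, 1e-3]):
    model = build_model(dropout_rate, l2_strength)
    model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
    # model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=...)
```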
Thanks for the reply, that is exactly what I was wondering: whether different frameworks can lead to different results in this context. I'll try to play around with the optimization and regularization parameters.
I'm trying to implement your approach in Keras, and while the results on the ITOP dataset are reasonably good on the validation set (obtained by splitting the training set), I can't get the average error on the test set lower than ~0.08 m. Do you think this could be caused by high variability between the train and test sets? Or does it seem more like the regularization is poor? I'm using the same dropout and regularization as stated in the paper, including the same alpha parameter, so I'm running out of ideas. Any help would be appreciated, thanks.
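One way I've been thinking about separating the two hypotheses is to compare the error on the training set, my held-out validation split, and the official test set directly. A rough sketch is below; the data shapes and variable names are assumptions on my side:

```python
# Diagnostic sketch (assumed shapes): a large val-vs-test gap suggests
# train/test distribution shift, while a large train-vs-val gap suggests
# plain overfitting.
import numpy as np

def mean_joint_error(model, x, y_true):
    """Average Euclidean joint error in metres.

    Assumes labels (and reshaped predictions) have shape (N, num_joints, 3).
    """
    y_pred = model.predict(x).reshape(y_true.shape)
    return np.mean(np.linalg.norm(y_pred - y_true, axis=-1))

# err_train = mean_joint_error(model, x_train, y_train)
# err_val   = mean_joint_error(model, x_val, y_val)    # split from train set
# err_test  = mean_joint_error(model, x_test, y_test)  # official ITOP test set
# print(f"train {err_train:.3f} m | val {err_val:.3f} m | test {err_test:.3f} m")
```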