Closed VytasMule closed 3 years ago

Please correct me if I am wrong, but looking at the PyTorch code for eye-gaze estimation, I can see that the input to both the models and the datasets is 224x224. On the other hand, the Tensorflow training code uses a 36x60 input shape for an eye patch in both the data generators and the models.

The original data has eye patches of 36x60, so my question is: why does PyTorch have a different input size, or am I just making a mistake?

Thank you in advance.
Hi @VytasMule, thanks for your question! It could well be the case that there are differences between the PyTorch and Tensorflow implementations. The Tensorflow implementation is what we used to obtain the results for the paper; the PyTorch one has been shown to have similar performance, though. @ahmed-alhindawi, any clue/input re the difference in input size?
Hello. In order to mimic the results of the Tensorflow version, the PyTorch implementation uses a pre-trained network. PyTorch ships VGG, ResNet-18, etc. pre-trained on a minimum input size of 224x224. Hope that clarifies the difference.
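(For context, a minimal sketch of the pattern being described, assuming a torchvision backbone: the small eye patch is upsampled to 224x224 and normalised with the usual ImageNet statistics before being fed to the pre-trained network. The `preprocess` pipeline and the choice of ResNet-18 here are illustrative assumptions, not necessarily what the repo does:)

```python
# Minimal sketch: feeding a 36x60 eye patch to a pre-trained torchvision
# backbone by resizing it up to the 224x224 the backbone was trained on.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical preprocessing for a 36x60 (H x W) eye patch.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),        # upsample the small eye patch
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet18(pretrained=True)  # older torchvision API
backbone.fc = nn.Identity()                  # keep the 512-d feature vector

eye_patch = torch.rand(3, 36, 60)            # stand-in for a real eye image
features = backbone(preprocess(eye_patch).unsqueeze(0))
print(features.shape)                        # torch.Size([1, 512])
```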
Thank you for a super-quick answer! I guess my only objection to the 224x224 size is the training time and the feasibility of running it in real-time applications. Would you mind if I supplied your library with a PyTorch model that takes a 36x60 input?
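(For illustration, a rough sketch of the kind of lightweight network that could accept the native 36x60 patches directly. `SmallGazeNet` and its layer sizes are hypothetical, not the model that was eventually contributed:)

```python
# A rough, hypothetical sketch of a lightweight gaze network that accepts
# native 36x60 eye patches directly, avoiding the 224x224 upsampling.
import torch
import torch.nn as nn

class SmallGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 36x60 -> 36x60
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                             # -> 18x30
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                             # -> 9x15
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 9 * 15, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2),                           # yaw / pitch angles
        )

    def forward(self, x):                                # x: (N, 1, 36, 60)
        return self.regressor(self.features(x))

model = SmallGazeNet()
out = model(torch.rand(4, 1, 36, 60))
print(out.shape)                                         # torch.Size([4, 2])
```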
Not at all @VytasMule! A pull request would be very welcome. Many thanks in advance!
@ahmed-alhindawi was faster - please see #94, which we just merged. Please give it a go, @VytasMule!
Lovely. Thank you all for the rapid fix. I will test it out and report back if anything goes wrong. Thanks once again!