kevinzakka / recurrent-visual-attention

A PyTorch Implementation of "Recurrent Models of Visual Attention"
MIT License

Found 2 bugs in declaring hidden sizes #46

Open davide-giacomini opened 1 year ago

davide-giacomini commented 1 year ago

Both bugs can easily be reproduced by changing the sizes in config.py.

The loc_hidden and glimpse_hidden arguments were swapped when instantiating RecurrentAttention(...), and the input_size passed to the CoreNetwork was wrong.
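A minimal sketch (not the repository's actual code; class and argument names are simplified assumptions) of why both bugs stay hidden with the default config: when glimpse_hidden == loc_hidden, swapping the two arguments changes nothing, and the core network's input size happens to match anyway. Using two distinct sizes, as suggested above, exposes the mismatch:

```python
import torch
import torch.nn as nn

class GlimpseNetwork(nn.Module):
    """Simplified glimpse network: h_g is the glimpse-pathway hidden size,
    h_l is the location-pathway hidden size."""
    def __init__(self, h_g, h_l, g_dim=64, l_dim=2):
        super().__init__()
        self.fc_g = nn.Linear(g_dim, h_g)
        self.fc_l = nn.Linear(l_dim, h_l)

    def forward(self, g, l):
        # concatenated glimpse feature has dimension h_g + h_l
        return torch.cat([torch.relu(self.fc_g(g)),
                          torch.relu(self.fc_l(l))], dim=1)

class CoreNetwork(nn.Module):
    """Simplified core RNN; input_size must equal h_g + h_l."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.rnn = nn.RNNCell(input_size, hidden_size)

    def forward(self, g_t, h_prev):
        return self.rnn(g_t, h_prev)

# Distinct sizes make the bugs visible: passing (h_l, h_g) instead of
# (h_g, h_l), or giving CoreNetwork the wrong input_size, now raises a
# shape-mismatch error instead of silently working.
h_g, h_l = 128, 64
sensor = GlimpseNetwork(h_g, h_l)      # correct argument order
core = CoreNetwork(h_g + h_l, 256)     # correct input_size

g = torch.randn(4, 64)   # batch of glimpse features
l = torch.randn(4, 2)    # batch of (x, y) locations
h = torch.zeros(4, 256)  # previous hidden state
out = core(sensor(g, l), h)
print(out.shape)
```

With the repository defaults, where both hidden sizes are equal, neither bug produces an error, which would explain how they went unnoticed.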