AMLab-Amsterdam / AttentionDeepMIL
Implementation of Attention-based Deep Multiple Instance Learning in PyTorch
MIT License · 833 stars · 189 forks
Why are tanh and sigmoid used in attention and not ReLU?
#26
Open
sri9s opened 2 years ago
sri9s commented 2 years ago
Wondering why ReLU was not used instead of tanh and sigmoid in the attention mechanism?
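For context, the mechanism the question refers to is the gated attention pooling from the paper this repo implements (Ilse et al., 2018), where a tanh branch and a sigmoid gate are multiplied element-wise before the attention scores are computed. Below is a minimal PyTorch sketch of that structure; the class name and the dimensions are illustrative, not the repo's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttention(nn.Module):
    """Sketch of gated attention pooling (Ilse et al., 2018, Eq. 9).

    tanh(V h) gives a bounded, signed feature transform, while
    sigmoid(U h) acts as a multiplicative gate in [0, 1]; their
    element-wise product is projected to one attention score per
    instance. Dimensions here are illustrative.
    """

    def __init__(self, in_dim: int = 500, hidden_dim: int = 128):
        super().__init__()
        self.V = nn.Linear(in_dim, hidden_dim)  # tanh branch
        self.U = nn.Linear(in_dim, hidden_dim)  # sigmoid gate branch
        self.w = nn.Linear(hidden_dim, 1)       # score projection

    def forward(self, h: torch.Tensor):
        # h: (num_instances, in_dim) instance embeddings for one bag
        scores = self.w(torch.tanh(self.V(h)) * torch.sigmoid(self.U(h)))  # (N, 1)
        a = F.softmax(scores.transpose(0, 1), dim=1)  # (1, N) attention weights
        z = a @ h                                     # (1, in_dim) bag embedding
        return z, a
```

One intuition for the choice (hedged, since the maintainers have not answered here): tanh keeps scores bounded and sign-sensitive, and the sigmoid gate restores expressiveness that the near-linear middle range of tanh loses, whereas an unbounded ReLU would discard all negative pre-activations and can produce unbounded scores before the softmax.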