Closed pablopalafox closed 4 years ago
Are you triggering the following assert?
Closing due to inactivity, feel free to reopen in case this issue persists.
Thank you for sharing your findings, I would have suggested the same fix. :+1:
Awesome, thanks a lot!
That's really helpful. Thanks!
Hi @sniklaus,
thanks a lot for your work! I managed to get the model to overfit to one sample using a batch size of 1. But when moving on to fine-tuning the pretrained models with a batch size other than 1, the argument `second` in `_FunctionCorrelation` is not contiguous. I guess one option is to do `second.contiguous()`, but I was curious as to why this is happening. I've roughly spotted that the function `Backward` returns a non-contiguous tensor (when the batch size != 1). Let me know if this has happened to you before, or if maybe it's an issue on my side. Cheers!

EDIT: in particular, it's this line:

that is causing the output of `Backward` to not be contiguous when the batch size is not 1, which makes sense, since slicing can cause this (see this thread).
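For anyone running into the same assert, here is a minimal sketch of the behavior being described (shapes are made up for illustration): slicing a single channel out of a 4D tensor keeps the original strides, so the resulting view is only contiguous when the batch dimension has size 1.

```python
import torch

batched = torch.randn(2, 4, 8, 8)  # batch size 2
single = torch.randn(1, 4, 8, 8)   # batch size 1

# Slicing one channel returns a strided view of the original storage.
# PyTorch skips the stride check for size-1 dimensions, so the view is
# contiguous only when the batch dimension has size 1.
print(batched[:, 0:1, :, :].is_contiguous())  # False
print(single[:, 0:1, :, :].is_contiguous())   # True

# .contiguous() copies the view into a dense layout, which satisfies
# kernels that assert contiguity on their inputs.
fixed = batched[:, 0:1, :, :].contiguous()
print(fixed.is_contiguous())  # True
```

This is why the problem only shows up once the batch size differs from 1, and why adding `.contiguous()` before the correlation call makes it go away.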