Closed captainvera closed 4 years ago
The changes are very minor; they just make sure that all tensors share the same device. This avoids PyTorch crashing when performing the `mask &= (..)` operation with a (just created) CPU tensor.
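A minimal sketch of the kind of fix described (the function and tensor names here are hypothetical, not from the PR): the newly created tensor is placed on the same device as the existing one via `device=...`, so the in-place `&=` never mixes a GPU tensor with a fresh CPU tensor.

```python
import torch

def build_length_mask(lengths: torch.Tensor, max_len: int) -> torch.Tensor:
    """Build a boolean padding mask; illustrative of the device fix only."""
    mask = torch.ones(lengths.size(0), max_len, dtype=torch.bool,
                      device=lengths.device)
    # Creating `positions` on lengths.device (instead of the CPU default)
    # is the fix: otherwise `mask &= ...` crashes when lengths is on GPU.
    positions = torch.arange(max_len, device=lengths.device)
    mask &= positions.unsqueeze(0) < lengths.unsqueeze(1)
    return mask
```

On CPU the behavior is unchanged; on CUDA, omitting `device=lengths.device` would raise a device-mismatch error at the `&=` line.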