idiap / fast-transformers

Pytorch library for fast transformer implementations

Local attention returning nan when using mask #37

Closed bratao closed 4 years ago

bratao commented 4 years ago

Hello @angeloskath and fast-transformers team.

I was testing the version on master with local attention, and there appears to be a bug when using a mask: the output always contains nan values if I pass a mask. Other attention types such as full or linear work fine, and if I do not use a length_mask, local attention also works.

I attached a small code sample that reproduces the error.

bug_local.zip
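
The attached archive isn't inlined in the thread, but a minimal sketch of the setup that triggers this (assuming the `TransformerEncoderBuilder` / `LengthMask` API from the library docs; the layer sizes, `local_context`, and sequence lengths below are illustrative and not taken from bug_local.zip) would look roughly like:

```python
import torch
from fast_transformers.builders import TransformerEncoderBuilder
from fast_transformers.masking import LengthMask

# Illustrative sizes: batch, sequence length, heads, dimensions per head
N, L, H, D = 3, 100, 4, 16

def build(attention_type, **extra):
    # Same encoder configuration for both runs; only the attention type changes
    return TransformerEncoderBuilder.from_kwargs(
        n_layers=2,
        n_heads=H,
        query_dimensions=D,
        value_dimensions=D,
        feed_forward_dimensions=4 * H * D,
        attention_type=attention_type,
        **extra,
    ).get()

x = torch.randn(N, L, H * D)
# Illustrative lengths: the first two sequences are padded, the last is full length
lengths = torch.tensor([50, 70, 100])
length_mask = LengthMask(lengths, max_len=L)

print("Linear Output:")
print(build("linear")(x, length_mask=length_mask))             # finite values
print("Local Output:")
print(build("local", local_context=11)(x, length_mask=length_mask))  # padded samples come out as nan
```

With this setup, swapping `attention_type` from "linear" to "local" is the only difference between the two outputs shown below; the nan rows correspond to the padded sequences.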

Here are the outputs of the Linear and Local models. I'm using Python 3.8 and PyTorch 1.6, without CUDA.

Linear Output:
tensor([[[-4.6710e-02,  2.5698e-01,  1.6553e-01,  ..., -3.8887e-02,
           1.4760e+00, -5.5345e-01],
         [ 4.6288e-03,  7.2794e-02, -3.8738e-01,  ..., -6.5744e-01,
           1.5919e+00, -1.4824e+00],
         [ 9.8010e+00,  7.8635e-01, -5.6454e-01,  ...,  4.7453e+00,
          -2.0123e+00, -2.6727e+00],
         ...,
         [-7.6029e-04,  7.1779e-01, -7.7213e-01,  ..., -1.8993e+00,
           4.3610e+00,  2.4297e+00],
         [-1.7685e-01,  7.0581e-01, -1.1693e+00,  ..., -2.3611e+00,
           4.4412e+00,  1.8678e+00],
         [-1.1229e-01,  6.4918e-01, -8.6619e-01,  ..., -1.7007e+00,
           4.4069e+00,  1.7709e+00]],

        [[ 9.8255e+00,  9.2683e-01,  5.2733e-01,  ..., -1.9350e-01,
           1.8855e+00, -1.2510e+00],
         [ 4.5204e-01,  8.0860e-01,  8.7983e+00,  ...,  1.6027e+00,
           3.0442e+00,  1.4045e+00],
         [ 1.0541e-01,  1.2123e+00,  9.4227e+00,  ...,  2.0576e+00,
           3.2600e+00,  1.1319e+00],
         ...,
         [ 1.1826e-01,  9.7299e-01,  4.0329e-01,  ..., -3.3727e+00,
           4.6210e+00,  1.6874e+00],
         [ 7.0923e-01,  1.0117e+00,  3.8741e-01,  ..., -1.7700e+00,
           4.7787e+00,  1.8800e+00],
         [ 8.1268e-01,  2.6620e-01,  2.1668e-01,  ..., -1.9421e+00,
           4.9479e+00,  1.9297e+00]],

        [[ 4.1962e-01,  4.1222e-01,  8.9894e+00,  ...,  3.7024e+00,
           5.6398e-01,  9.1150e-01],
         [ 4.0182e-01,  7.2579e-01,  8.7252e+00,  ..., -1.3470e+00,
           1.6876e+00, -9.5219e-01],
         [ 5.8073e-01,  2.5503e-01,  9.2737e+00,  ..., -5.1724e-01,
           1.8241e+00, -1.4023e+00],
         ...,
         [ 8.9015e+00,  4.8112e-01,  5.4773e-01,  ...,  3.1965e+00,
           2.5537e-01, -3.1801e+00],
         [ 2.7294e-01,  1.3310e+00,  1.0006e+01,  ..., -1.3543e+00,
           1.3383e+00, -1.2746e+00],
         [ 1.4104e-01,  7.2310e-01,  1.0169e+01,  ..., -1.5161e+00,
           1.4063e+00, -1.4794e+00]]], grad_fn=<NativeLayerNormBackward>)
Local Output:
tensor([[[    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         ...,
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan]],

        [[    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         ...,
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan]],

        [[ 0.2749, -0.0732,  8.1727,  ...,  2.6264,  0.5370,  0.0138],
         [-0.4159,  0.2013,  8.5345,  ..., -1.4085,  1.6111, -1.8517],
         [-0.4869,  0.2738,  8.3643,  ..., -2.5074,  1.0786, -2.1240],
         ...,
         [ 8.7382, -0.1080, -0.1985,  ...,  1.4713,  0.0980, -4.0366],
         [-0.3936, -0.1394,  9.1536,  ..., -2.3590,  1.0853, -2.0395],
         [-0.1424, -0.1040,  9.6754,  ..., -1.8460,  1.4940, -1.4138]]],
       grad_fn=<NativeLayerNormBackward>)
angeloskath commented 4 years ago

Thanks! I am on it. I will add a similar piece of code to the tests as well.

Thanks for your help!