Closed: anj1 closed this issue 1 year ago
No, it is not that the masks are broken. There is just no such thing as a "mask with all false". Attention with softmax uses a very small (large negative) value to remove the masked entries. In the case where everything is masked, every entry has the same small value, so the attention scores are uniform and everything is attended to uniformly. The two lines give the same result because a randomly initialized Transformer usually attends uniformly anyway. If you really need that behavior, you can use the components in NeuralAttentionlib to create a new attention operator and call the Transformer constructor with that operator.
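The uniform-attention point can be seen from the softmax alone. Here is a small NumPy sketch (illustrative only, not Transformers.jl/NeuralAttentionlib code): when every score is replaced by the same large negative value, the softmax is exactly uniform, which is why an "all false" mask cannot zero out the attention weights.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy attention scores for one query over 4 keys (made-up numbers).
scores = np.array([2.0, -1.0, 0.5, 3.0])

# "Masking everything" in softmax attention means replacing every score
# with the same very negative value; softmax of identical values is uniform.
NEG = -1e9
all_masked = np.full_like(scores, NEG)

print(softmax(all_masked))  # [0.25 0.25 0.25 0.25]
```

So a fully masked row does not produce zero attention; it produces uniform attention, which is also roughly what a freshly initialized model produces without any mask.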
The mask with all zeros is just an illustrative example to show the issue with the mask application logic. You can try other kinds of masks and verify that the results are not as expected.
There are tests that make sure the masks work as expected. Many models won't work if the CausalMask isn't applied. Another guess about your observation is that the uniform attention scores are caused by the random initialization. You can try changing the distribution of the weights and rerunning the model to see if the values are still the same. If you are sure the mask is broken, please give an MWE with non-trivial masks (at least not a mask with all zeros, as explained above).
There are no tests that CausalMask actually works correctly, neither here nor in NeuralAttentionlib. NeuralAttentionlib does have tests that CausalMask returns the correct values, but that is completely separate from verifying that CausalMask actually works as intended during the transformer attention update.
Note that it is quite possible for the CausalMask to fail to properly mask decoder tokens and yet for the huggingface examples to still work.
Again, you'll need to provide an example to make this discussion concrete.
The supported models are tested against huggingface transformers with the validation code in the example folder, and we do observe that missing masks have a huge impact on the numerical outputs.
This problem could either be in this repo or NeuralAttentionlib, but I'm posting it here since the example using Transformers.jl is easier to run and more informative.
Basically, the implementation of all attention masks is broken, at least when using the default setup. For example, if one makes two masks, one with all true and another with all false, one gets exactly the same output! You can verify that the output of the last two lines here is the same. My hypothesis is that this is due to the default GenericMaskOp being +. Either way, it makes the package unusable.
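For illustration, here is a language-agnostic NumPy sketch of what the hypothesized bug would look like. It contrasts a (hypothetical) mask op that merely adds the 0/1 boolean mask to the scores with the conventional large-negative-value masking; whether GenericMaskOp actually behaves like the former is exactly what this issue is questioning, so none of this is the library's real code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.5, 3.0])  # made-up attention scores
keep = np.array([True, False, True, True])  # entry 1 should be masked out

# Hypothesized broken behavior: the boolean mask is merely *added* (as 0/1)
# to the scores, so a "masked" entry only gets a small score shift and
# still receives attention weight.
buggy = softmax(scores + keep.astype(float))

# Conventional behavior: masked-out entries are pushed to a large negative
# value so softmax assigns them effectively zero weight.
correct = softmax(np.where(keep, scores, -1e9))

print(buggy[1] > 1e-3)    # True: the "masked" entry still matters
print(correct[1] < 1e-8)  # True: properly masked out
```

Under the additive-0/1 hypothesis, the mask would shift scores slightly without ever excluding anything, which would be consistent with different masks producing near-identical outputs.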