Closed · Thornhill-GYL closed this issue 4 years ago
Yes, I modified the ALBERT code in the HuggingFace library. You can simply ignore dp_mask, i.e., do not add it to the input dict and delete it from the return values, and the method should still improve the results.
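A minimal sketch of the workaround described above: filter the input dict down to the keyword arguments the model's forward() actually accepts, so dp_masks is never passed in. The forward() stub and the input keys here are illustrative placeholders, not the actual FreeLB or HuggingFace code.

```python
import inspect

def forward(input_ids=None, attention_mask=None):
    # Stand-in for an unmodified model's forward(); it does NOT accept
    # dp_masks, mirroring the TypeError in the original report.
    return {"input_ids": input_ids, "attention_mask": attention_mask}

# Inputs as a training loop might assemble them (keys are illustrative);
# dp_masks is the extra argument that would trigger the error.
inputs = {
    "input_ids": [101, 2023, 102],
    "attention_mask": [1, 1, 1],
    "dp_masks": None,
}

# Keep only the keyword arguments forward() actually declares,
# i.e. "do not add dp_mask to the input dict".
accepted = set(inspect.signature(forward).parameters)
filtered = {k: v for k, v in inputs.items() if k in accepted}

outputs = forward(**filtered)
```

Filtering by signature avoids hand-maintaining a list of allowed keys if the model's forward() changes.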
OK, thanks a lot. I will delete it and try again. But is it possible to see how you modified the ALBERT code?
Refer to https://github.com/zhuchen03/FreeLB/blob/master/huggingface-transformers/src/transformers/modeling_albert.py and search for dp_mask, or diff this file against the original version. Unfortunately, I do not have the original version at hand...
Thanks!
I used the code for adversarial training and got the following error: `forward() got an unexpected keyword argument 'dp_masks'`. I wonder if you changed the internals of the pretrained model, or something else. How can I fix it? Thanks!