Maxpa1n / gcplm-kgc

ACL2023-Multilingual Knowledge Graph Completion from Pretrained Language Models with Knowledge Constraints

Question about a parameter #1

Closed Qu1n-22 closed 4 months ago

Qu1n-22 commented 11 months ago

Hello, what is the purpose of the `77777777` passed as the last argument in this sh file? [screenshot] Also, after removing this argument, I ran into the following warnings:

> If you want to use `XLMRobertaLMHeadModel` as a standalone, add `is_decoder=True`.
> You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 250006. This might induce some performance reduction as Tensor Cores will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc

which seems to have caused the error shown here: [screenshot]

Thanks! Attachment: 过程.txt

Maxpa1n commented 11 months ago

This problem is caused by an update to the transformers library. The versions I use are:

- tokenizers 0.10.3
- torch 1.10.0+cu113
- torchaudio 0.10.0+cu113
- torchvision 0.11.1+cu113
- transformers 4.13.0
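The pinned versions above can be captured in a `requirements.txt` (a sketch based on the list in this comment; the `+cu113` wheels assume you install from the PyTorch CUDA 11.3 index, e.g. with `pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu113`):

```text
# Pinned environment reproducing the versions listed above.
# torch/torchvision/torchaudio builds with the +cu113 suffix come from
# the PyTorch extra index, not from PyPI.
tokenizers==0.10.3
torch==1.10.0+cu113
torchaudio==0.10.0+cu113
torchvision==0.11.1+cu113
transformers==4.13.0
```

Downgrading transformers to 4.13.0 should make the `pad_to_multiple_of` warning disappear, since that resize warning was introduced in later transformers releases.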