jeonsworld / ViT-pytorch

PyTorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale)

KeyError: 'Transformer/encoderblock_0\\MultiHeadDotProductAttention_1/query\\kernel is not a file in the archive' #11

Closed: tianle-BigRice closed this issue 3 years ago

tianle-BigRice commented 3 years ago

When I ran the code, this error occurred. Error location:

 models\modeling.py", line 195, in load_from
query_weight = np2th(weights[pjoin(ROOT, ATTENTION_Q, "kernel")]).view(self.hidden_size, self.hidden_size).t()
File "d:\Anaconda3\lib\site-packages\numpy\lib\npyio.py", line 259, in __getitem__
raise KeyError("%s is not a file in the archive" % key)
KeyError: 'Transformer/encoderblock_0\\MultiHeadDotProductAttention_1/query\\kernel is not a file in the archive'

Where should I put this ViT-H_14.npz? I created a checkpoint folder and put ViT-H_14.npz in there, but I got this error. The log shows: 01/12/2021 19:51:55 - INFO - models.modeling - load_pretrained: resized variant: torch.Size([1, 257, 1280]) to torch.Size([1, 730, 1280]). My input: img_size 384x384, batch size 64 (train batch = eval batch). Is there anything I haven't modified?

xcyao00 commented 3 years ago

You may change pjoin(ROOT, ATTENTION_Q, 'kernel') to '/'.join([ROOT, ATTENTION_Q, 'kernel']) (note that str.join takes a single iterable).

tianle-BigRice commented 3 years ago

> You may change pjoin(ROOT, ATTENTION_Q, 'kernel') to '/'.join([ROOT, ATTENTION_Q, 'kernel'])

I see: the OS path separator is different on Windows and Linux. Thank you for your help.
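
For anyone landing here later: the member names inside the .npz checkpoint always use forward slashes, while pjoin is os.path.join, which produces backslashes on Windows, so the generated key never matches an archive member. A minimal sketch of an OS-independent fix for models/modeling.py; the helper name npz_key is ours, the other names come from the repo:

    def npz_key(*parts):
        # Member names in a .npz archive always use "/" separators,
        # so build lookup keys with str.join instead of os.path.join,
        # which yields "\" on Windows and triggers the KeyError above.
        return "/".join(parts)

    # Every lookup in load_from() of the form
    #     weights[pjoin(ROOT, ATTENTION_Q, "kernel")]
    # can then become
    #     weights[npz_key(ROOT, ATTENTION_Q, "kernel")]
    # and behaves identically on Windows and Linux.

Equivalently, keeping pjoin and appending .replace("\\", "/") to each key also works.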

GraceKafuu commented 3 years ago

For Windows: [ROOT + "/" + ATTENTION_Q + "/" + "kernel"]. This works!
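
If the error persists after the change, it is worth printing the actual member names in the archive and comparing them with the key being built. A quick check, assuming the .npz file sits in the working directory:

    import numpy as np

    weights = np.load("ViT-B_16.npz")
    # NpzFile.files lists every member name in the archive; all of them
    # use forward slashes, e.g.
    # "Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query/kernel"
    for name in sorted(weights.files)[:10]:
        print(name)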

Amir9663 commented 7 months ago

> For Windows: [ROOT + "/" + ATTENTION_Q + "/" + "kernel"]. This works!

Hi sir, thanks a lot for your help.

robintux commented 1 month ago

I made the modifications indicated above and I still get errors, on Windows 11 (torch '2.4.1+cu121' and torchvision '0.19.1+cu121'). I changed this:

            query_weight = np2th(weights[pjoin(ROOT, ATTENTION_Q, "kernel")]).view(self.hidden_size, self.hidden_size).t()
            key_weight = np2th(weights[pjoin(ROOT, ATTENTION_K, "kernel")]).view(self.hidden_size, self.hidden_size).t()
            value_weight = np2th(weights[pjoin(ROOT, ATTENTION_V, "kernel")]).view(self.hidden_size, self.hidden_size).t()
            out_weight = np2th(weights[pjoin(ROOT, ATTENTION_OUT, "kernel")]).view(self.hidden_size, self.hidden_size).t()

            query_bias = np2th(weights[pjoin(ROOT, ATTENTION_Q, "bias")]).view(-1)
            key_bias = np2th(weights[pjoin(ROOT, ATTENTION_K, "bias")]).view(-1)
            value_bias = np2th(weights[pjoin(ROOT, ATTENTION_V, "bias")]).view(-1)
            out_bias = np2th(weights[pjoin(ROOT, ATTENTION_OUT, "bias")]).view(-1)

to this code:

            query_weight = np2th(weights[ROOT + "/" + ATTENTION_Q, "/"+"kernel"]).view(self.hidden_size, self.hidden_size).t()
            key_weight = np2th(weights[ROOT + "/" + ATTENTION_K, "/"+"kernel"]).view(self.hidden_size, self.hidden_size).t()
            value_weight = np2th(weights[ROOT + "/" + ATTENTION_V, "/"+"kernel"]).view(self.hidden_size, self.hidden_size).t()
            out_weight = np2th(weights[ROOT + "/" + ATTENTION_OUT, "/"+"kernel"]).view(self.hidden_size, self.hidden_size).t()

            query_bias = np2th(weights['/'.join(ROOT, ATTENTION_Q, 'bias')]).view(-1)
            key_bias = np2th(weights['/'.join(ROOT, ATTENTION_K, 'bias')]).view(-1)
            value_bias = np2th(weights['/'.join(ROOT, ATTENTION_V, 'bias')]).view(-1)
            out_bias = np2th(weights['/'.join(ROOT, ATTENTION_OUT, 'bias')]).view(-1)

and I get the following traceback

Traceback (most recent call last):

  File ~\anaconda3\envs\Prueba3Transformers\lib\site-packages\spyder_kernels\customize\utils.py:209 in exec_encapsulate_locals
    exec_fun(compile(code_ast, filename, "exec"), globals)

  File c:\users\lenovo\documents\vision-transformer-ra1ph2\pretrained_vit.py:484
    model.load_from(np.load("ViT-B_16.npz"))

  File c:\users\lenovo\documents\vision-transformer-ra1ph2\pretrained_vit.py:464 in load_from
    unit.load_from(weights, n_block=uname)

  File c:\users\lenovo\documents\vision-transformer-ra1ph2\pretrained_vit.py:333 in load_from
    query_weight = np2th(weights[ROOT + "/" + ATTENTION_Q, "/"+"kernel"]).view(self.hidden_size, self.hidden_size).t()

  File ~\anaconda3\envs\Prueba3Transformers\lib\site-packages\numpy\lib\npyio.py:263 in __getitem__
    raise KeyError(f"{key} is not a file in the archive")

KeyError: "('Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query', '/kernel') is not a file in the archive"

Any idea what my mistake could be?
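
The KeyError message itself shows the mistake: the key printed is a tuple, ('Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query', '/kernel'). The comma inside weights[ROOT + "/" + ATTENTION_Q, "/"+"kernel"] makes Python index the archive with a 2-tuple instead of a single string. The bias lines have a second problem: '/'.join(ROOT, ATTENTION_Q, 'bias') raises a TypeError, because str.join takes a single iterable, not several arguments. A corrected sketch of the eight lines, using the same names as the load_from above:

    # Build each key as ONE string: no comma inside the brackets,
    # and str.join applied to a list.
    query_weight = np2th(weights[ROOT + "/" + ATTENTION_Q + "/kernel"]).view(self.hidden_size, self.hidden_size).t()
    key_weight = np2th(weights[ROOT + "/" + ATTENTION_K + "/kernel"]).view(self.hidden_size, self.hidden_size).t()
    value_weight = np2th(weights[ROOT + "/" + ATTENTION_V + "/kernel"]).view(self.hidden_size, self.hidden_size).t()
    out_weight = np2th(weights[ROOT + "/" + ATTENTION_OUT + "/kernel"]).view(self.hidden_size, self.hidden_size).t()

    query_bias = np2th(weights["/".join([ROOT, ATTENTION_Q, "bias"])]).view(-1)
    key_bias = np2th(weights["/".join([ROOT, ATTENTION_K, "bias"])]).view(-1)
    value_bias = np2th(weights["/".join([ROOT, ATTENTION_V, "bias"])]).view(-1)
    out_bias = np2th(weights["/".join([ROOT, ATTENTION_OUT, "bias"])]).view(-1)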