Ucas-HaoranWei / Vary-toy

Official code implementation of Vary-toy (Small Language Model Meets with Reinforced Vision Vocabulary)

Trying to set a tensor of shape torch.Size([1024]) in "class_embedding" #4

Closed. zhangxyzte closed this issue 5 months ago

zhangxyzte commented 5 months ago

You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "/home/Vary-toy-code/Vary-master/vary/demo/run_qwen_vary.py", line 129, in <module>
    eval_model(args)
  File "/home/Vary-toy-code/Vary-master/vary/demo/run_qwen_vary.py", line 45, in eval_model
    model = varyQwenForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True, device_map='cuda', trust_remote_code=False)
  File "/home/pyvenv_toy/toy/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/pyvenv_toy/toy/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3471, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/pyvenv_toy/toy/lib/python3.8/site-packages/transformers/modeling_utils.py", line 736, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/pyvenv_toy/toy/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([1024]) in "class_embedding" (which has shape torch.Size([2048])), this look incorrect.

Ucas-HaoranWei commented 5 months ago

Maybe you have built the original vary? Please rebuild the vary-toy.
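
For anyone following along, a minimal rebuild sketch (the paths are assumptions; adjust them to your checkout):

```bash
# Remove any "vary" package left over from an earlier install of the original
# Vary repo, then reinstall the package from the Vary-toy source tree.
pip uninstall -y vary
cd /path/to/Vary-toy/Vary-master
pip install -e .
```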

Attect commented 5 months ago

Maybe you have built the original vary? Please rebuild the vary-toy.

Running this in WSL2 and hitting the same problem. The docs don't describe a build step, and the instructions don't clearly distinguish Vary-toy from the original Vary (for example, cd /path/to/vary rather than cd /path/to/Vary-toy). I changed the paths in the code and re-ran pip install e . (is that the "build" step?), but I still get:

You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "/mnt/e/Vary-toy/Vary-master/vary/demo/run_qwen_vary.py", line 126, in <module>
    eval_model(args)
  File "/mnt/e/Vary-toy/Vary-master/vary/demo/run_qwen_vary.py", line 43, in eval_model
    model = varyQwenForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True, device_map='cuda', trust_remote_code=True)
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3471, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 736, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([1024]) in "class_embedding" (which has shape torch.Size([2048])), this look incorrect.
Attect commented 5 months ago

I think the problem is with how the instructions are understood. My confusion is: what exactly should model-name be? Should the model path in the code point to the Vary-toy weights or to CLIP-ViT-L, or neither? I tried both and neither worked. It also looks like only one of the two is configured, so where is the other one set? Model paths appear in several places in the code; do all of them need changing, or only some? I tried changing some of them and all of them, and neither worked either.

Attect commented 5 months ago

Here is the more complete error output. (Please forgive those of us who can only follow the documented steps literally and don't understand how it works well enough to adapt and troubleshoot on our own.)

/mnt/e/Vary-toy/Vary-master$ python3 vary/demo/run_qwen_vary.py --model-name ../Vary-toy --image-file /mnt/f/00001.png
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
You are using a model of type mmgpt to instantiate a model of type vary. This is not supported for all configurations of models and can yield errors.
You are using a model of type mmgpt to instantiate a model of type clip_vision_model. This is not supported for all configurations of models and can yield errors.
Some weights of CLIPVisionModel were not initialized from the model checkpoint at /mnt/e/Vary-toy/Vary-toy and are newly initialized: ['vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.weight', 'vision_model.encoder.layers.20.self_attn.out_proj.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.encoder.layers.17.mlp.fc1.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.weight', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.18.mlp.fc1.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_model.encoder.layers.22.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.mlp.fc2.weight', 'vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.19.mlp.fc2.bias', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.2.mlp.fc1.weight', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.bias', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.weight', 'vision_model.encoder.layers.8.self_attn.v_proj.bias', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.layer_norm1.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.12.layer_norm2.weight', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.bias', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_model.encoder.layers.12.self_attn.v_proj.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.14.self_attn.out_proj.bias', 
'vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.19.mlp.fc2.weight', 'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.14.mlp.fc2.bias', 'vision_model.encoder.layers.12.mlp.fc2.bias', 'vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.18.self_attn.q_proj.weight', 'vision_model.encoder.layers.14.mlp.fc1.weight', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.mlp.fc1.weight', 'vision_model.encoder.layers.0.mlp.fc2.bias', 'vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_model.encoder.layers.15.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.21.self_attn.v_proj.bias', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.encoder.layers.21.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_model.encoder.layers.23.mlp.fc2.weight', 'vision_model.encoder.layers.5.self_attn.k_proj.weight', 'vision_model.encoder.layers.14.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 'vision_model.encoder.layers.11.self_attn.v_proj.weight', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_model.encoder.layers.16.mlp.fc2.weight', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.bias', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.bias', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.4.self_attn.q_proj.weight', 
'vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.encoder.layers.13.self_attn.v_proj.bias', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.13.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.encoder.layers.15.layer_norm1.weight', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.mlp.fc2.bias', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.6.layer_norm2.weight', 'vision_model.encoder.layers.17.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.v_proj.bias', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.17.mlp.fc2.weight', 'vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.18.mlp.fc2.weight', 'vision_model.encoder.layers.12.mlp.fc2.weight', 'vision_model.encoder.layers.15.mlp.fc1.weight', 'vision_model.encoder.layers.20.self_attn.out_proj.bias', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.22.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.mlp.fc1.bias', 'vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_model.encoder.layers.13.mlp.fc1.bias', 'vision_model.encoder.layers.17.self_attn.q_proj.weight', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.15.self_attn.v_proj.bias', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.pre_layrnorm.weight', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.weight', 
'vision_model.encoder.layers.16.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.bias', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.18.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.bias', 'vision_model.encoder.layers.3.mlp.fc1.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.encoder.layers.22.mlp.fc1.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.21.self_attn.out_proj.bias', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_model.encoder.layers.15.mlp.fc2.bias', 'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.8.self_attn.out_proj.bias', 'vision_model.encoder.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.layers.5.mlp.fc1.weight', 'vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.bias', 'vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.layer_norm2.weight', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.8.layer_norm2.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.bias', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.layer_norm1.weight', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 
'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.13.self_attn.q_proj.bias', 'vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.12.self_attn.k_proj.weight', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.19.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.21.self_attn.k_proj.weight', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.weight', 'vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_model.encoder.layers.18.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.bias', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.weight', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.16.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_model.encoder.layers.20.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.encoder.layers.21.self_attn.out_proj.weight', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.self_attn.out_proj.bias', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.layer_norm2.weight', 
'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_model.encoder.layers.2.self_attn.v_proj.bias', 'vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_model.encoder.layers.12.mlp.fc1.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.self_attn.k_proj.bias', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.20.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.layer_norm1.weight', 'vision_model.encoder.layers.16.self_attn.k_proj.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.mlp.fc1.weight', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.5.mlp.fc2.bias', 'vision_model.encoder.layers.0.self_attn.out_proj.bias', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.20.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.encoder.layers.13.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_model.encoder.layers.7.self_attn.q_proj.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.weight', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.bias', 'vision_model.encoder.layers.19.layer_norm2.weight', 'vision_model.encoder.layers.14.mlp.fc2.weight', 'vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.bias', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.18.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.bias', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'vision_model.encoder.layers.12.mlp.fc1.bias', 'vision_model.encoder.layers.13.layer_norm2.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.bias', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.weight', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.weight', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "/mnt/e/Vary-toy/Vary-master/vary/demo/run_qwen_vary.py", line 126, in <module>
    eval_model(args)
  File "/mnt/e/Vary-toy/Vary-master/vary/demo/run_qwen_vary.py", line 43, in eval_model
    model = varyQwenForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True, device_map='cuda', trust_remote_code=True)
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3471, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 736, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([1024]) in "class_embedding" (which has shape torch.Size([2048])), this look incorrect.
Ucas-HaoranWei commented 5 months ago

If you previously downloaded Vary and got it running, then after downloading Vary-toy you need to go into the Vary-toy folder and run pip install -e . again. In the error, 2048 is Vary's channel dimension and 1024 is Vary-toy's; building against the wrong code leads to this error.
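
A hedged way to check this, not an official step: confirm which source tree Python resolves the vary package from. An earlier editable install of the original Vary would explain the 2048-channel model definition being used against the 1024-channel Vary-toy weights.

```bash
# Both commands only inspect the current environment; nothing is modified.
pip show vary                                          # "Location:" should point at the Vary-toy tree or its install
python -c "import vary; print(list(vary.__path__))"   # directory the package is actually imported from
```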

Attect commented 5 months ago

If you previously downloaded Vary and got it running, then after downloading Vary-toy you need to go into the Vary-toy folder and run pip install -e . again. In the error, 2048 is Vary's channel dimension and 1024 is Vary-toy's; building against the wrong code leads to this error.

I tried re-running pip install -e inside Vary-toy; the output is as follows:

/mnt/e/Vary-toy/Vary-master$ pip install e .
Processing /mnt/e/Vary-toy/Vary-master
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: e in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (1.4.5)
Requirement already satisfied: einops in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.6.1)
Requirement already satisfied: markdown2[all] in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (2.4.12)
Requirement already satisfied: numpy in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (1.26.3)
Requirement already satisfied: requests in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (2.31.0)
Requirement already satisfied: sentencepiece in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.1.99)
Requirement already satisfied: tokenizers>=0.12.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.13.3)
Requirement already satisfied: torch in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (2.1.2)
Requirement already satisfied: torchvision in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.16.2)
Requirement already satisfied: wandb in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.16.2)
Requirement already satisfied: shortuuid in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (1.0.11)
Requirement already satisfied: httpx==0.24.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.24.0)
Requirement already satisfied: deepspeed==0.12.3 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.12.3)
Requirement already satisfied: peft==0.4.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.4.0)
Requirement already satisfied: albumentations in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (1.3.1)
Requirement already satisfied: opencv-python in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (4.9.0.80)
Requirement already satisfied: tiktoken in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.5.2)
Requirement already satisfied: accelerate==0.24.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.24.1)
Requirement already satisfied: transformers==4.32.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (4.32.1)
Requirement already satisfied: bitsandbytes==0.41.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.41.0)
Requirement already satisfied: scikit-learn==1.2.2 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (1.2.2)
Requirement already satisfied: einops-exts==0.0.4 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.0.4)
Requirement already satisfied: timm==0.6.13 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.6.13)
Requirement already satisfied: gradio-client==0.2.9 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from vary==0.1.0) (0.2.9)
Requirement already satisfied: packaging>=20.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from accelerate==0.24.1->vary==0.1.0) (23.2)
Requirement already satisfied: psutil in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from accelerate==0.24.1->vary==0.1.0) (5.9.8)
Requirement already satisfied: pyyaml in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from accelerate==0.24.1->vary==0.1.0) (6.0.1)
Requirement already satisfied: huggingface-hub in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from accelerate==0.24.1->vary==0.1.0) (0.20.3)
Requirement already satisfied: hjson in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from deepspeed==0.12.3->vary==0.1.0) (3.1.0)
Requirement already satisfied: ninja in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from deepspeed==0.12.3->vary==0.1.0) (1.11.1.1)
Requirement already satisfied: py-cpuinfo in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from deepspeed==0.12.3->vary==0.1.0) (9.0.0)
Requirement already satisfied: pydantic in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from deepspeed==0.12.3->vary==0.1.0) (2.5.3)
Requirement already satisfied: pynvml in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from deepspeed==0.12.3->vary==0.1.0) (11.5.0)
Requirement already satisfied: tqdm in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from deepspeed==0.12.3->vary==0.1.0) (4.66.1)
Requirement already satisfied: fsspec in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from gradio-client==0.2.9->vary==0.1.0) (2023.12.2)
Requirement already satisfied: typing-extensions in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from gradio-client==0.2.9->vary==0.1.0) (4.9.0)
Requirement already satisfied: websockets in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from gradio-client==0.2.9->vary==0.1.0) (12.0)
Requirement already satisfied: certifi in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from httpx==0.24.0->vary==0.1.0) (2023.11.17)
Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from httpx==0.24.0->vary==0.1.0) (0.17.3)
Requirement already satisfied: idna in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from httpx==0.24.0->vary==0.1.0) (3.6)
Requirement already satisfied: sniffio in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from httpx==0.24.0->vary==0.1.0) (1.3.0)
Requirement already satisfied: safetensors in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from peft==0.4.0->vary==0.1.0) (0.4.2)
Requirement already satisfied: scipy>=1.3.2 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from scikit-learn==1.2.2->vary==0.1.0) (1.12.0)
Requirement already satisfied: joblib>=1.1.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from scikit-learn==1.2.2->vary==0.1.0) (1.3.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from scikit-learn==1.2.2->vary==0.1.0) (3.2.0)
Requirement already satisfied: filelock in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from transformers==4.32.1->vary==0.1.0) (3.13.1)
Requirement already satisfied: regex!=2019.12.17 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from transformers==4.32.1->vary==0.1.0) (2023.12.25)
Requirement already satisfied: sympy in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (1.12)
Requirement already satisfied: networkx in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (3.2.1)
Requirement already satisfied: jinja2 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (3.1.3)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (12.1.105)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (12.1.105)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (12.1.105)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (8.9.2.26)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (12.1.3.1)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (11.0.2.54)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (10.3.2.106)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (11.4.5.107)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (12.1.0.106)
Requirement already satisfied: nvidia-nccl-cu12==2.18.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (2.18.1)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (12.1.105)
Requirement already satisfied: triton==2.1.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torch->vary==0.1.0) (2.1.0)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch->vary==0.1.0) (12.3.101)
Requirement already satisfied: scikit-image>=0.16.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from albumentations->vary==0.1.0) (0.22.0)
Requirement already satisfied: qudida>=0.0.4 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from albumentations->vary==0.1.0) (0.0.4)
Requirement already satisfied: opencv-python-headless>=4.1.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from albumentations->vary==0.1.0) (4.9.0.80)
Requirement already satisfied: pygments>=2.7.3 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from markdown2[all]->vary==0.1.0) (2.17.2)
Requirement already satisfied: wavedrom in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from markdown2[all]->vary==0.1.0) (2.0.3.post3)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from requests->vary==0.1.0) (3.3.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from requests->vary==0.1.0) (2.1.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from torchvision->vary==0.1.0) (10.2.0)
Requirement already satisfied: Click!=8.0.0,>=7.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wandb->vary==0.1.0) (8.1.7)
Requirement already satisfied: GitPython!=3.1.29,>=1.0.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wandb->vary==0.1.0) (3.1.41)
Requirement already satisfied: sentry-sdk>=1.0.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wandb->vary==0.1.0) (1.39.2)
Requirement already satisfied: docker-pycreds>=0.4.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wandb->vary==0.1.0) (0.4.0)
Requirement already satisfied: setproctitle in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wandb->vary==0.1.0) (1.3.3)
Requirement already satisfied: setuptools in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wandb->vary==0.1.0) (68.2.2)
Requirement already satisfied: appdirs>=1.4.3 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wandb->vary==0.1.0) (1.4.4)
Requirement already satisfied: protobuf!=4.21.0,<5,>=3.19.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wandb->vary==0.1.0) (4.25.2)
Requirement already satisfied: six>=1.4.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from docker-pycreds>=0.4.0->wandb->vary==0.1.0) (1.16.0)
Requirement already satisfied: gitdb<5,>=4.0.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from GitPython!=3.1.29,>=1.0.0->wandb->vary==0.1.0) (4.0.11)
Requirement already satisfied: h11<0.15,>=0.13 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from httpcore<0.18.0,>=0.15.0->httpx==0.24.0->vary==0.1.0) (0.14.0)
Requirement already satisfied: anyio<5.0,>=3.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from httpcore<0.18.0,>=0.15.0->httpx==0.24.0->vary==0.1.0) (4.2.0)
Requirement already satisfied: imageio>=2.27 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from scikit-image>=0.16.1->albumentations->vary==0.1.0) (2.33.1)
Requirement already satisfied: tifffile>=2022.8.12 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from scikit-image>=0.16.1->albumentations->vary==0.1.0) (2023.12.9)
Requirement already satisfied: lazy_loader>=0.3 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from scikit-image>=0.16.1->albumentations->vary==0.1.0) (0.3)
Requirement already satisfied: MarkupSafe>=2.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from jinja2->torch->vary==0.1.0) (2.1.4)
Requirement already satisfied: annotated-types>=0.4.0 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from pydantic->deepspeed==0.12.3->vary==0.1.0) (0.6.0)
Requirement already satisfied: pydantic-core==2.14.6 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from pydantic->deepspeed==0.12.3->vary==0.1.0) (2.14.6)
Requirement already satisfied: mpmath>=0.19 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from sympy->torch->vary==0.1.0) (1.3.0)
Requirement already satisfied: svgwrite in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from wavedrom->markdown2[all]->vary==0.1.0) (1.4.3)
Requirement already satisfied: exceptiongroup>=1.0.2 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from anyio<5.0,>=3.0->httpcore<0.18.0,>=0.15.0->httpx==0.24.0->vary==0.1.0) (1.2.0)
Requirement already satisfied: smmap<6,>=3.0.1 in /home/attect/anaconda3/envs/vary/lib/python3.10/site-packages (from gitdb<5,>=4.0.1->GitPython!=3.1.29,>=1.0.0->wandb->vary==0.1.0) (5.0.1)
Building wheels for collected packages: vary
  Building wheel for vary (pyproject.toml) ... done
  Created wheel for vary: filename=vary-0.1.0-py3-none-any.whl size=155411 sha256=ebaaa09e3623ec36e530357eab35e590f86aa7d08595bb36199247ae24d2e351
  Stored in directory: /home/attect/.cache/pip/wheels/e8/86/6f/9816de1c81530479f2bbee873d19fa451f6ea15b2f5f4a8815
Successfully built vary
Installing collected packages: vary
  Attempting uninstall: vary
    Found existing installation: vary 0.1.0
    Uninstalling vary-0.1.0:
      Successfully uninstalled vary-0.1.0
Successfully installed vary-0.1.0

After running it again, the final error is still:

ValueError: Trying to set a tensor of shape torch.Size([1024]) in "class_embedding" (which has shape torch.Size([2048])), this look incorrect.

Perhaps some other step is needed to clear out Vary? The reason I downloaded Vary in the first place is that the instructions say "Note: The Vary-toy is based on Vary, if you install the [Vary](https://github.com/Ucas-HaoranWei/Vary), you can skip some steps, e.g., 3.", so I assumed Vary could be used directly in its place and set up Vary first in order to skip steps.
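
One detail worth noting from the log above: the command shown is pip install e . (without the dash), which additionally installs the unrelated PyPI package named e and builds a regular wheel of the current directory rather than performing an editable install. A hedged sketch of a clean reinstall, with paths taken from this thread:

```bash
# Uninstall whatever "vary" is currently installed, then do an editable install
# from the Vary-toy tree and check which directory the package resolves to.
pip uninstall -y vary
cd /mnt/e/Vary-toy/Vary-master
pip install -e .
python -c "import vary; print(list(vary.__path__))"
```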

zhangxyzte commented 5 months ago

Maybe you have built the original vary? Please rebuild the vary-toy.

Running this in WSL2 and hitting the same problem. The docs don't describe a build step, and the instructions don't clearly distinguish Vary-toy from the original Vary (for example, cd /path/to/vary rather than cd /path/to/Vary-toy). I changed the paths in the code and re-ran pip install e . (is that the "build" step?), but I still get:

You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "/mnt/e/Vary-toy/Vary-master/vary/demo/run_qwen_vary.py", line 126, in <module>
    eval_model(args)
  File "/mnt/e/Vary-toy/Vary-master/vary/demo/run_qwen_vary.py", line 43, in eval_model
    model = varyQwenForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True, device_map='cuda', trust_remote_code=True)
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3471, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 736, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([1024]) in "class_embedding" (which has shape torch.Size([2048])), this look incorrect.

Most likely the Python environment and the weight paths are wrong. You can set up the Vary model first; see https://github.com/Ucas-HaoranWei/Vary/issues/53
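
A hedged sketch (not from the repo docs) for checking which vision weights the downloaded checkpoint actually carries; the file name is an assumption, and the release may ship sharded .bin files or safetensors instead:

```bash
python - <<'EOF'
import torch

# Load the checkpoint on CPU and print the shape of any class_embedding tensor.
# Per the maintainer's note above, 1024 corresponds to Vary-toy and 2048 to Vary.
state_dict = torch.load("/data/toy_weights/pytorch_model.bin", map_location="cpu")
for name, tensor in state_dict.items():
    if "class_embedding" in name:
        print(name, tuple(tensor.shape))
EOF
```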

Attect commented 5 months ago

Maybe you have built the original vary? Please rebuild the vary-toy.

Running this in WSL2 and hitting the same problem. The docs don't describe a build step, and the instructions don't clearly distinguish Vary-toy from the original Vary (for example, cd /path/to/vary rather than cd /path/to/Vary-toy). I changed the paths in the code and re-ran pip install e . (is that the "build" step?), but I still get:

You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "/mnt/e/Vary-toy/Vary-master/vary/demo/run_qwen_vary.py", line 126, in <module>
    eval_model(args)
  File "/mnt/e/Vary-toy/Vary-master/vary/demo/run_qwen_vary.py", line 43, in eval_model
    model = varyQwenForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True, device_map='cuda', trust_remote_code=True)
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3471, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/transformers/modeling_utils.py", line 736, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/attect/anaconda3/envs/vary/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([1024]) in "class_embedding" (which has shape torch.Size([2048])), this look incorrect.

Most likely the Python environment and the weight paths are wrong. You can set up the Vary model first; see Ucas-HaoranWei/Vary#53

That did it, thanks. I hope the author can spell this out in more detail in the README; I expect quite a few people will run into this.

userandpass commented 5 months ago

Could you share exactly what you changed? I downloaded the Vary-toy weights and put them under /data/toy_weights, then changed the model path in vary_toy_qwen1_8.py to that directory and ran this command: python vary/demo/run_qwen_vary.py --model-name /data/toy_weights/ --image-file 3.png, which gives the error in your screenshot. In run_qwen_vary.py I changed the path in image_processor = CLIPImageProcessor.from_pretrained("/data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14/", torch_dtype=torch.float16) to /data/toy_weights/ and to /cache/vit-large-patch14/, and neither works. /cache/vit-large-patch14/ holds the model downloaded earlier when I set up Vary.

zhangxyzte commented 5 months ago

Could you share exactly what you changed? I downloaded the Vary-toy weights and put them under /data/toy_weights, then changed the model path in vary_toy_qwen1_8.py to that directory and ran this command: python vary/demo/run_qwen_vary.py --model-name /data/toy_weights/ --image-file 3.png, which gives the error in your screenshot. In run_qwen_vary.py I changed the path in image_processor = CLIPImageProcessor.from_pretrained("/data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14/", torch_dtype=torch.float16) to /data/toy_weights/ and to /cache/vit-large-patch14/, and neither works. /cache/vit-large-patch14/ holds the model downloaded earlier when I set up Vary.

In the end I did not change the path in vary_toy_qwen1_8.py. Put the vit-large-patch14 weights under the original /data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14/ path, or try rebuilding the Python environment.
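
A hedged sketch of that workaround (both paths are assumptions taken from the messages above): keep the hard-coded default path in run_qwen_vary.py and point it at an existing local clip-vit-large-patch14 download.

```bash
# Create the parent of the default path and symlink the local weights into it.
mkdir -p /data/hypertext/ucaswei/cache/vit-large-patch14
ln -s /cache/vit-large-patch14 /data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14
```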

userandpass commented 5 months ago

Could you share exactly what you changed? I downloaded the Vary-toy weights and put them under /data/toy_weights, then changed the model path in vary_toy_qwen1_8.py to that directory and ran this command: python vary/demo/run_qwen_vary.py --model-name /data/toy_weights/ --image-file 3.png, which gives the error in your screenshot. In run_qwen_vary.py I changed the path in image_processor = CLIPImageProcessor.from_pretrained("/data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14/", torch_dtype=torch.float16) to /data/toy_weights/ and to /cache/vit-large-patch14/, and neither works. /cache/vit-large-patch14/ holds the model downloaded earlier when I set up Vary.

In the end I did not change the path in vary_toy_qwen1_8.py. Put the vit-large-patch14 weights under the original /data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14/ path, or try rebuilding the Python environment.

That works now, thanks.

lht1605766283 commented 5 months ago

Could you share exactly what you changed? I downloaded the Vary-toy weights and put them under /data/toy_weights, then changed the model path in vary_toy_qwen1_8.py to that directory and ran this command: python vary/demo/run_qwen_vary.py --model-name /data/toy_weights/ --image-file 3.png, which gives the error in your screenshot. In run_qwen_vary.py I changed the path in image_processor = CLIPImageProcessor.from_pretrained("/data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14/", torch_dtype=torch.float16) to /data/toy_weights/ and to /cache/vit-large-patch14/, and neither works. /cache/vit-large-patch14/ holds the model downloaded earlier when I set up Vary.

In the end I did not change the path in vary_toy_qwen1_8.py. Put the vit-large-patch14 weights under the original /data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14/ path, or try rebuilding the Python environment.

That works now, thanks.

Hi, did you get it working by rebuilding the Python environment, or by putting the vit-large-patch14 weights under the original /data/hypertext/ucaswei/cache/vit-large-patch14/vit-large-patch14/ path?