THUDM / GLM-130B

GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Apache License 2.0

int4 docker env: RuntimeError: shape '[24, 3, 128]' is invalid for input of size 4608 #120

Open lhray opened 1 year ago

lhray commented 1 year ago

Running int4 quantization in the Docker environment fails with: RuntimeError: shape '[24, 3, 128]' is invalid for input of size 4608

```
=============== Arguments ===============
layer_num: 70
head_num: 96
size_per_head: 128
vocab_size: 150528
rotary_embedding_dim: 64
tensor_para_size: 4
pipeline_para_size: 1
ckpt_path: /checkpoints
lib_path: ./lib
start_id: 150004
end_id: 150001
max_seq_len: 10000
data_type: int4
return_cum_log_probs: 0
world_size: 4
local_rank: 0
```

```
Traceback (most recent call last):
  File "/FasterTransformer/examples/pytorch/glm/glm_server.py", line 97, in <module>
    if not glm.load(ckpt_path=args.ckpt_path):
  File "/FasterTransformer/examples/pytorch/glm/../../../examples/pytorch/glm/utils/glm.py", line 320, in load
    is_load = self.weights.load(ckpt_path, tensor_para_rank=self.tensor_para_rank,
  File "/FasterTransformer/examples/pytorch/glm/../../../examples/pytorch/glm/utils/glm.py", line 190, in load
    scale.extend([module[f'transformer.layers.{i}.attention.query_key_value.weight_scale'].reshape(head_num, num_splits, size_per_head).permute(1, 0, 2).reshape(3, local_dim) for i in range(layer_num)])
  File "/FasterTransformer/examples/pytorch/glm/../../../examples/pytorch/glm/utils/glm.py", line 190, in <listcomp>
    scale.extend([module[f'transformer.layers.{i}.attention.query_key_value.weight_scale'].reshape(head_num, num_splits, size_per_head).permute(1, 0, 2).reshape(3, local_dim) for i in range(layer_num)])
RuntimeError: shape '[24, 3, 128]' is invalid for input of size 4608
```
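For context, the numbers in the error are consistent with a checkpoint split for 8-way tensor parallelism being loaded with tensor_para_size=4: with 96 heads over 4 ranks the loader expects 24 × 3 × 128 = 9216 scale values per rank, but a shard from an 8-way split only holds 12 × 3 × 128 = 4608. A minimal sketch that reproduces the mismatch (values taken from the arguments dump above; the variable names are illustrative, not the actual loader code):

```python
import torch

head_num = 96          # total attention heads (from the arguments dump)
size_per_head = 128
num_splits = 3         # fused Q, K, V
tensor_para_size = 4   # MPSIZE the server script was launched with
local_head_num = head_num // tensor_para_size  # 24 heads per rank

# A checkpoint split 8 ways stores only 96/8 * 3 * 128 = 4608 scales per shard.
scale = torch.empty((head_num // 8) * num_splits * size_per_head)

# Raises: RuntimeError: shape '[24, 3, 128]' is invalid for input of size 4608
scale.reshape(local_head_num, num_splits, size_per_head)
```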

sxy799 commented 1 year ago

I fixed it: set MPSIZE=8 in FasterTransformer/examples/pytorch/glm/glm_server.sh and it works.
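That fix matches the arithmetic: with MPSIZE=8 each rank owns 96 / 8 = 12 heads, and 12 × 3 × 128 = 4608, exactly the size of the weight_scale tensor the checkpoint actually contains. A quick sanity check (assuming MPSIZE is the tensor-parallel degree read by glm_server.sh):

```python
# Per-rank QKV weight_scale element count for each candidate MPSIZE.
head_num, num_splits, size_per_head = 96, 3, 128
for mp in (4, 8):
    local_heads = head_num // mp
    print(f"MPSIZE={mp}: {local_heads * num_splits * size_per_head} scales per rank")
# MPSIZE=4: 9216 scales per rank  (what the loader expected)
# MPSIZE=8: 4608 scales per rank  (what the int4 checkpoint shard contains)
```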

I guess the author made a typo in the script's default.