THUDM / VisualGLM-6B

Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
Apache License 2.0

NameError: name 'HackLinearNF4' is not defined #100

Closed freelancerllm closed 1 year ago

freelancerllm commented 1 year ago

$ sh finetune/finetune_visualglm_qlora.sh

NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2 deepspeed --master_port 16666 --include localhost:0 --hostfile hostfile_single finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --mode finetune --train-iters 300 --resume-dataloader --max_source_length 64 --max_target_length 256 --lora_rank 10 --layer_range 0 14 --pre_seq_len 4 --train-data ./fewshot-data/dataset.json --valid-data ./fewshot-data/dataset.json --distributed-backend nccl --lr-decay-style cosine --warmup .02 --checkpoint-activations --save-interval 300 --eval-interval 10000 --save ./checkpoints --split 1 --eval-iters 10 --eval-batch-size 8 --zero-stage 1 --lr 0.0001 --batch-size 1 --gradient-accumulation-steps 4 --skip-init --fp16 --use_qlora

[2023-06-01 20:35:04,581] [WARNING] [runner.py:191:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-06-01 20:35:04,698] [INFO] [runner.py:541:main] cmd = /opt/conda/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=16666 --enable_each_rank_log=None finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --mode finetune --train-iters 300 --resume-dataloader --max_source_length 64 --max_target_length 256 --lora_rank 10 --layer_range 0 14 --pre_seq_len 4 --train-data ./fewshot-data/dataset.json --valid-data ./fewshot-data/dataset.json --distributed-backend nccl --lr-decay-style cosine --warmup .02 --checkpoint-activations --save-interval 300 --eval-interval 10000 --save ./checkpoints --split 1 --eval-iters 10 --eval-batch-size 8 --zero-stage 1 --lr 0.0001 --batch-size 1 --gradient-accumulation-steps 4 --skip-init --fp16 --use_qlora
[2023-06-01 20:35:07,264] [INFO] [launch.py:222:main] 0 NCCL_DEBUG=info
[2023-06-01 20:35:07,264] [INFO] [launch.py:222:main] 0 NCCL_NET_GDR_LEVEL=2
[2023-06-01 20:35:07,264] [INFO] [launch.py:222:main] 0 NCCL_IB_DISABLE=0
[2023-06-01 20:35:07,264] [INFO] [launch.py:222:main] 0 USE_NCCL=1
[2023-06-01 20:35:07,264] [INFO] [launch.py:229:main] WORLD INFO DICT: {'localhost': [0]}
[2023-06-01 20:35:07,264] [INFO] [launch.py:235:main] nnodes=1, num_local_procs=1, node_rank=0
[2023-06-01 20:35:07,264] [INFO] [launch.py:246:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2023-06-01 20:35:07,264] [INFO] [launch.py:247:main] dist_world_size=1
[2023-06-01 20:35:07,264] [INFO] [launch.py:249:main] Setting CUDA_VISIBLE_DEVICES=0

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues

/opt/conda/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/taobao/java/jre/lib/amd64/server')}
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /opt/conda/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda116.so...
[2023-06-01 20:35:11,849] [WARNING] Failed to load bitsandbytes:cannot import name 'LinearNF4' from 'bitsandbytes.nn' (/opt/conda/lib/python3.8/site-packages/bitsandbytes/nn/__init__.py)
[2023-06-01 20:35:11,852] [INFO] using world size: 1 and model-parallel size: 1
[2023-06-01 20:35:11,852] [INFO] > padded vocab (size: 100) with 28 dummy tokens (new size: 128)
16666
[2023-06-01 20:35:11,853] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-06-01 20:35:11,854] [WARNING] [config_utils.py:69:_process_deprecated_field] Config parameter cpu_offload is deprecated use offload_optimizer instead
[2023-06-01 20:35:11,854] [INFO] [checkpointing.py:764:_configure_using_config_file] {'partition_activations': False, 'contiguous_memory_optimization': False, 'cpu_checkpointing': False, 'number_checkpoints': None, 'synchronize_checkpoint_boundary': False, 'profile': False}
[2023-06-01 20:35:11,854] [INFO] [checkpointing.py:226:model_parallel_cuda_manual_seed] > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
[2023-06-01 20:35:11,912] [INFO] [RANK 0] building FineTuneVisualGLMModel model ...
/opt/conda/lib/python3.8/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
  warnings.warn("Initializing zero-element tensors is a no-op")
replacing layer 0 with lora
Traceback (most recent call last):
  File "finetune_visualglm.py", line 179, in <module>
    model, args = FineTuneVisualGLMModel.from_pretrained(model_type, args)
  File "/opt/conda/lib/python3.8/site-packages/sat/model/base_model.py", line 214, in from_pretrained
    model = get_model(args, cls, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/sat/model/base_model.py", line 305, in get_model
    model = model_cls(args, params_dtype=params_dtype, **kwargs)
  File "finetune_visualglm.py", line 21, in __init__
    self.add_mixin("lora", LoraMixin(args.num_layers, args.lora_rank, head_first=True, num_attention_heads=args.num_attention_heads, hidden_size_per_attention_head=args.hidden_size // args.num_attention_heads, layer_range=args.layer_range, qlora=True), reinit=True)
  File "/opt/conda/lib/python3.8/site-packages/sat/model/base_model.py", line 129, in add_mixin
    new_mixin.reinit(self)  # also pass current mixins
  File "/mnt/benteng.bt/VisualGLM-6B/lora_mixin.py", line 201, in reinit
    parent_model.transformer.layers[i].attention.dense = replace_linear_with_lora(parent_model.transformer.layers[i].attention.dense, LoraLinear, self.r, self.lora_alpha, self.lora_dropout, self.qlora)
  File "/mnt/benteng.bt/VisualGLM-6B/lora_mixin.py", line 143, in replace_linear_with_lora
    return base_cls(in_dim, out_dim, r, *args, **kw_args)
  File "/mnt/benteng.bt/VisualGLM-6B/lora_mixin.py", line 62, in __init__
    self.original = HackLinearNF4(in_dim, out_dim)
NameError: name 'HackLinearNF4' is not defined
[2023-06-01 20:35:21,287] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 393249
[2023-06-01 20:35:21,287] [ERROR] [launch.py:434:sigkill_handler] ['/opt/conda/bin/python', '-u', 'finetune_visualglm.py', '--local_rank=0', '--experiment-name', 'finetune-visualglm-6b', '--model-parallel-size', '1', '--mode', 'finetune', '--train-iters', '300', '--resume-dataloader', '--max_source_length', '64', '--max_target_length', '256', '--lora_rank', '10', '--layer_range', '0', '14', '--pre_seq_len', '4', '--train-data', './fewshot-data/dataset.json', '--valid-data', './fewshot-data/dataset.json', '--distributed-backend', 'nccl', '--lr-decay-style', 'cosine', '--warmup', '.02', '--checkpoint-activations', '--save-interval', '300', '--eval-interval', '10000', '--save', './checkpoints', '--split', '1', '--eval-iters', '10', '--eval-batch-size', '8', '--zero-stage', '1', '--lr', '0.0001', '--batch-size', '1', '--gradient-accumulation-steps', '4', '--skip-init', '--fp16', '--use_qlora'] exits with return code = 1
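For context on why this surfaces as a NameError rather than a clean error message: the log above shows that importing `LinearNF4` from `bitsandbytes.nn` fails, i.e. the installed bitsandbytes predates 4-bit NF4 support. The sketch below is a hypothetical reconstruction (not the actual `lora_mixin.py`) of how a guarded import like that leaves `HackLinearNF4` undefined, so the failure only appears later, when `--use_qlora` asks for a quantized base layer:

```python
# Hypothetical sketch of the failure mode, assuming HackLinearNF4 is only defined
# when the bitsandbytes import succeeds (names taken from the traceback above).
try:
    from bitsandbytes.nn import LinearNF4  # missing in older bitsandbytes releases

    class HackLinearNF4(LinearNF4):
        """4-bit NF4 linear layer used as the frozen base weight in QLoRA."""
        pass
except Exception as exc:
    # With an old bitsandbytes this branch runs, so HackLinearNF4 never gets defined.
    print(f"Failed to load bitsandbytes:{exc}")


def make_qlora_base(in_dim: int, out_dim: int, use_qlora: bool):
    if use_qlora:
        # If the import above failed, this raises:
        # NameError: name 'HackLinearNF4' is not defined
        return HackLinearNF4(in_dim, out_dim)
```

Once the bitsandbytes import succeeds, the class is defined before the QLoRA path needs it, so the error disappears.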

1049451037 commented 1 year ago

https://github.com/THUDM/VisualGLM-6B/issues/85

1049451037 commented 1 year ago

Also, I recommend updating to the latest repository code. The newer code should print a hint for this case (and it also fixes some bugs).

freelancerllm commented 1 year ago

pip install bitsandbytes --upgrade
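After the upgrade, a quick sanity check (just a sketch, not from this thread) is to confirm that `bitsandbytes.nn` now exposes `LinearNF4`, the name whose failed import triggered the warning above:

```python
# Verify that the upgraded bitsandbytes provides the NF4 linear layer needed by --use_qlora.
from importlib.metadata import version

import bitsandbytes as bnb

print(version("bitsandbytes"))        # installed release
print(hasattr(bnb.nn, "LinearNF4"))   # should print True; False means the old install is still in use
```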