tloen / alpaca-lora

Instruct-tune LLaMA on consumer hardware
Apache License 2.0

AttributeError: 'NoneType' object has no attribute 'to' #22

Open · s1530129650 opened this issue 1 year ago

s1530129650 commented 1 year ago

Code: python generate.py

Error:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /home/t-enshengshi/anaconda3/envs/alpaca-lora did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.0
CUDA SETUP: Detected CUDA version 112
/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
  warn(msg)
CUDA SETUP: Loading binary /home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda112_nocublaslt.so...
Downloading tokenizer.model: 100% | 500k/500k [00:00<00:00, 8.89MB/s]
Downloading (…)cial_tokens_map.json: 100% | 2.00/2.00 [00:00<00:00, 173B/s]
Downloading (…)okenizer_config.json: 100% | 141/141 [00:00<00:00, 48.2kB/s]
Downloading (…)lve/main/config.json: 100% | 427/427 [00:00<00:00, 46.2kB/s]
Downloading (…)model.bin.index.json: 100% | 25.5k/25.5k [00:00<00:00, 333kB/s]
Downloading (…)l-00001-of-00033.bin through (…)l-00033-of-00033.bin: 100% (33 shards: 405M each, final shard 524M; 51.3–169MB/s)
Loading checkpoint shards: 100% | 33/33 [00:08<00:00, 3.78it/s]
Some weights of the model checkpoint at decapoda-research/llama-7b-hf were not used when initializing LLaMAForCausalLM: ['model.embed_tokens.weight', 'model.norm.weight', 'model.layers.0.self_attn.q_proj.weight', …] (full list elided: it names every 'model.layers.*' tensor across all 32 layers, i.e. the entire checkpoint)
- This IS expected if you are initializing LLaMAForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LLaMAForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LLaMAForCausalLM were not initialized from the model checkpoint at decapoda-research/llama-7b-hf and are newly initialized: ['model.decoder.embed_tokens.weight', 'model.decoder.norm.weight', 'model.decoder.layers.0.self_attn.q_proj.weight', …] (full list elided: it names every 'model.decoder.layers.*' parameter across all 32 layers, i.e. the entire model)
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Downloading (…)neration_config.json: 100% | 124/124 [00:00<00:00, 24.8kB/s]
Downloading (…)/adapter_config.json: 100% | 370/370 [00:00<00:00, 163kB/s]
Downloading adapter_model.bin: 100% | 16.8M/16.8M [00:00<00:00, 89.7MB/s]
Instruction: Tell me about alpacas.
Traceback (most recent call last):
  File "/home/t-enshengshi/workspace/alpaca-lora/generate.py", line 77, in <module>
    print("Response:", evaluate(instruction))
  File "/home/t-enshengshi/workspace/alpaca-lora/generate.py", line 51, in evaluate
    generation_output = model.generate(
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/peft/peft_model.py", line 581, in generate
    outputs = self.base_model.generate(**kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/transformers/generation/utils.py", line 1490, in generate
    return self.beam_search(
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/transformers/generation/utils.py", line 2749, in beam_search
    outputs = self(
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 852, in forward
    outputs = self.model.decoder(
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 624, in forward
    layer_outputs = decoder_layer(
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 305, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 165, in forward
    query_states = self.q_proj(hidden_states).view(bsz, tgt_len, self.num_heads, self.head_dim)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/peft/tuners/lora.py", line 522, in forward
    result = super().forward(x)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/bitsandbytes/nn/modules.py", line 242, in forward
    out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 488, in matmul
    return MatMul8bitLt.apply(A, B, out, bias, state)
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/t-enshengshi/anaconda3/envs/alpaca-lora/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 390, in forward
    output = torch.nn.functional.linear(A_wo_outliers, state.CB.to(A.dtype))
AttributeError: 'NoneType' object has no attribute 'to'
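Two details in the log above line up with the failure. First, every tensor in the downloaded checkpoint was reported "not used" (the checkpoint names them model.layers.*), while every parameter of the instantiated LLaMAForCausalLM was reported "newly initialized" (this transformers build names them model.decoder.layers.*), so the model is effectively running on random weights. Second, the GPU has compute capability 7.0, so bitsandbytes takes the slow no-cublasLt int8 path, which reads state.CB (the layer's quantized weight buffer) and finds it None. Below is a minimal diagnostic sketch for spotting unpopulated 8-bit layers before calling generate(), assuming the model was loaded with load_in_8bit=True; the helper name is hypothetical, not part of this repo:

```python
import bitsandbytes as bnb

def find_unpopulated_8bit_layers(model):
    """Hypothetical helper: list Linear8bitLt layers whose quantized
    weight buffers (state.CB / state.CxB) were never filled -- the
    condition under which bnb.matmul raises
    AttributeError: 'NoneType' object has no attribute 'to'."""
    bad = []
    for name, module in model.named_modules():
        if isinstance(module, bnb.nn.Linear8bitLt):
            state = getattr(module, "state", None)
            if state is not None and state.CB is None and getattr(state, "CxB", None) is None:
                bad.append(name)
    return bad

# Usage sketch: call right after loading, e.g.
#   print(find_unpopulated_8bit_layers(model))
```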
T-Atlas commented 1 year ago

Try this: https://github.com/tloen/alpaca-lora/issues/14#issuecomment-1471263165
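The linked comment is not reproduced here. For reference, similar reports on GPUs below compute capability 7.5 (as in this log) were often worked around by skipping 8-bit loading and running the base model in fp16, which bypasses the failing bnb.matmul path entirely; treat the sketch below as that general workaround, not as the content of the linked fix. It mirrors the load calls in generate.py and needs roughly 14 GB of GPU memory for the 7B model (class names follow later transformers releases, where LLaMAForCausalLM was renamed LlamaForCausalLM):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# fp16 instead of int8: avoids bitsandbytes' 8-bit matmul entirely.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Apply the Alpaca LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(
    model,
    "tloen/alpaca-lora-7b",
    torch_dtype=torch.float16,
)
model.eval()
```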