bitsandbytes-foundation / bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index
MIT License

AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit' #416

Closed. hennypurwadi closed this issue 1 year ago.

hennypurwadi commented 1 year ago

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
(several further UserWarnings from cuda_setup/main.py:145 about non-existent paths in the Colab environment: /sys/fs/cgroup/memory.events, //172.28.0.1:8013, /env/python, //ipykernel.pylab.backend_inline, and the Colab tunnel flags)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0'), PosixPath('/usr/local/cuda/lib64/libcudart.so')}. We'll flip a coin and try one of these, in order to fail forward. Either way, this might cause trouble in the future: if you get "CUDA error: invalid device function" errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] is in the paths that we search based on your env.

╭─────────────── Traceback (most recent call last) ───────────────╮
│ in <cell line: 10>:10
│
│ /usr/local/lib/python3.10/dist-packages/peft/__init__.py:22 in <module>
│ ❱  22 from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_
│
│ /usr/local/lib/python3.10/dist-packages/peft/mapping.py:16 in <module>
│ ❱  16 from .peft_model import (PeftModel, PeftModelForCausalLM, PeftModelForSeq2SeqLM, ...
│
│ /usr/local/lib/python3.10/dist-packages/peft/peft_model.py:31 in <module>
│ ❱  31 from .tuners import (AdaLoraModel, AdaptionPromptModel, LoraModel, ...
│
│ /usr/local/lib/python3.10/dist-packages/peft/tuners/__init__.py:21 in <module>
│ ❱  21 from .lora import LoraConfig, LoraModel
│
│ /usr/local/lib/python3.10/dist-packages/peft/tuners/lora.py:735 in <module>
│ ❱ 735 │   class Linear4bit(bnb.nn.Linear4bit, LoraLayer):
╰─────────────────────────────────────────────────────────────────╯
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'
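A quick way to check which bitsandbytes is actually installed and whether it exposes Linear4bit (a minimal diagnostic sketch; the 0.39.0 cutoff is my recollection of when 4-bit support landed in bitsandbytes, so treat that number as an assumption):

import bitsandbytes as bnb

# Older releases (before roughly 0.39.0, if I recall correctly) have no
# bitsandbytes.nn.Linear4bit, which is exactly why peft's import chain
# above blows up on `class Linear4bit(bnb.nn.Linear4bit, LoraLayer)`.
print(bnb.__version__)
print(hasattr(bnb.nn, "Linear4bit"))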

Erthar commented 1 year ago

I am seeing the same issue as well. Everything was working a couple of days ago.

│                                                                              │
│   19                                                                         │
│   20 __version__ = "0.4.0.dev0"                                              │
│   21                                                                         │
│ ❱ 22 from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CON │
│   23 from .peft_model import (                                               │
│   24 │   PeftModel,                                                          │
│   25 │   PeftModelForCausalLM,                                               │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/peft/mapping.py:16 in <module>       │
│                                                                              │
│    13 # See the License for the specific language governing permissions and  │
│    14 # limitations under the License.                                       │
│    15                                                                        │
│ ❱  16 from .peft_model import (                                              │
│    17 │   PeftModel,                                                         │
│    18 │   PeftModelForCausalLM,                                              │
│    19 │   PeftModelForSeq2SeqLM,                                             │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/peft/peft_model.py:31 in <module>    │
│                                                                              │
│     28 from transformers.modeling_outputs import SequenceClassifierOutput, T │
│     29 from transformers.utils import PushToHubMixin                         │
│     30                                                                       │
│ ❱   31 from .tuners import (                                                 │
│     32 │   AdaLoraModel,                                                     │
│     33 │   AdaptionPromptModel,                                              │
│     34 │   LoraModel,                                                        │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/peft/tuners/__init__.py:21 in        │
│ <module>                                                                     │
│                                                                              │
│   18 # limitations under the License.                                        │
│   19                                                                         │
│   20 from .adaption_prompt import AdaptionPromptConfig, AdaptionPromptModel  │
│ ❱ 21 from .lora import LoraConfig, LoraModel                                 │
│   22 from .adalora import AdaLoraConfig, AdaLoraModel                        │
│   23 from .p_tuning import PromptEncoder, PromptEncoderConfig, PromptEncoder │
│   24 from .prefix_tuning import PrefixEncoder, PrefixTuningConfig            │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/peft/tuners/lora.py:735 in <module>  │
│                                                                              │
│   732 │   │   │   │   result += output                                       │
│   733 │   │   │   return result                                              │
│   734 │                                                                      │
│ ❱ 735 │   class Linear4bit(bnb.nn.Linear4bit, LoraLayer):                    │
│   736 │   │   # Lora implemented in a dense layer                            │
│   737 │   │   def __init__(                                                  │
│   738 │   │   │   self,                                                      │
╰──────────────────────────────────────────────────────────────────────────────╯
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'

... Could it be related to the recent https://github.com/huggingface/peft/pull/476?

spencerthomas1722 commented 1 year ago

Same here; this code worked fine yesterday, but now I'm having the same issue. I'm using Google Colab. I thought it might be a bug in a new version or something, but it looks like bitsandbytes hasn't been updated in a month. Code:

from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType

# Define LoRA Config 
lora_config = LoraConfig(
  r=16, 
  lora_alpha=32,
  target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'],
  lora_dropout=0.05,
  bias="none",
  task_type="CAUSAL_LM"
)
# prepare int-8 model for training
model = prepare_model_for_int8_training(model)

# add LoRA adaptor
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

Error:

in <cell line: 1>:1                                                                              │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/peft/__init__.py:22 in <module>                          │
│                                                                                                  │
│   19                                                                                             │
│   20 __version__ = "0.4.0.dev0"                                                                  │
│   21                                                                                             │
│ ❱ 22 from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_    │
│   23 from .peft_model import (                                                                   │
│   24 │   PeftModel,                                                                              │
│   25 │   PeftModelForCausalLM,                                                                   │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/peft/mapping.py:16 in <module>                           │
│                                                                                                  │
│    13 # See the License for the specific language governing permissions and                      │
│    14 # limitations under the License.                                                           │
│    15                                                                                            │
│ ❱  16 from .peft_model import (                                                                  │
│    17 │   PeftModel,                                                                             │
│    18 │   PeftModelForCausalLM,                                                                  │
│    19 │   PeftModelForSeq2SeqLM,                                                                 │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/peft/peft_model.py:31 in <module>                        │
│                                                                                                  │
│     28 from transformers.modeling_outputs import SequenceClassifierOutput, TokenClassifierOutpu  │
│     29 from transformers.utils import PushToHubMixin                                             │
│     30                                                                                           │
│ ❱   31 from .tuners import (                                                                     │
│     32 │   AdaLoraModel,                                                                         │
│     33 │   AdaptionPromptModel,                                                                  │
│     34 │   LoraModel,                                                                            │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/peft/tuners/__init__.py:21 in <module>                   │
│                                                                                                  │
│   18 # limitations under the License.                                                            │
│   19                                                                                             │
│   20 from .adaption_prompt import AdaptionPromptConfig, AdaptionPromptModel                      │
│ ❱ 21 from .lora import LoraConfig, LoraModel                                                     │
│   22 from .adalora import AdaLoraConfig, AdaLoraModel                                            │
│   23 from .p_tuning import PromptEncoder, PromptEncoderConfig, PromptEncoderReparameterizatio    │
│   24 from .prefix_tuning import PrefixEncoder, PrefixTuningConfig                                │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/peft/tuners/lora.py:735 in <module>                      │
│                                                                                                  │
│   732 │   │   │   │   result += output                                                           │
│   733 │   │   │   return result                                                                  │
│   734 │                                                                                          │
│ ❱ 735 │   class Linear4bit(bnb.nn.Linear4bit, LoraLayer):                                        │
│   736 │   │   # Lora implemented in a dense layer                                                │
│   737 │   │   def __init__(                                                                      │
│   738 │   │   │   self,                                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'

numbersmason commented 1 year ago

(screenshot attached) I got the same issue. I don't know what to do. If anyone finds a solution, please quote me.

mikeybellissimo commented 1 year ago

I also got the same issue using the Alpaca LoRA implementation from tloen's GitHub.

mikeybellissimo commented 1 year ago

It appears to be an issue within the PEFT library. It seems that 3 hours ago they tried to add 4-bit support, and that is likely the source of the issue: https://github.com/huggingface/peft/pull/476

plbecker commented 1 year ago

Pretty sure it's related to https://github.com/huggingface/peft/pull/476

How to fix it: Wherever in your requirements.txt you use git+https://github.com/huggingface/peft, replace it with git+https://github.com/huggingface/peft@smangrul/release-v0.3.0 for now. That's the latest stable release it seems.
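
Concretely, the change in requirements.txt looks like this:

# before: tracks peft main, which currently breaks against released bitsandbytes
git+https://github.com/huggingface/peft
# after: pin to the v0.3.0 release branch for now
git+https://github.com/huggingface/peft@smangrul/release-v0.3.0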

numbersmason commented 1 year ago

It appears to be an issue within PEFT library. It seems that 3 hours ago they tried to add 4-bit and that is likely the source of the issue. huggingface/peft#476

Pardon my ignorance, I'm not very knowledgeable about this AI topic, tbh. I was just following a YouTube tutorial from a channel called "AItrepreneur" (video: https://www.youtube.com/watch?v=lb_lC4XFedU), and in that video the guy seemed to install a 4-bit model with no issues whatsoever. How is it possible that they added 4-bit to the library when it was already able to run these types of models? (I apologize if I'm misunderstanding something.)

riyazweb commented 1 year ago

Pretty sure it's related to huggingface/peft#476

How to fix it: Wherever in your requirements.txt you use git+https://github.com/huggingface/peft, replace it with git+https://github.com/huggingface/peft@smangrul/release-v0.3.0 for now. That's the latest stable release it seems.

Man, it's working. Thank you!

numbersmason commented 1 year ago

Pretty sure it's related to huggingface/peft#476

How to fix it: Wherever in your requirements.txt you use git+https://github.com/huggingface/peft, replace it with git+https://github.com/huggingface/peft@smangrul/release-v0.3.0 for now. That's the latest stable release it seems.

I changed it and saved the file, but when I run startWindows the issue persists.

hennypurwadi commented 1 year ago

Thanks, this works for me 👍 https://github.com/huggingface/peft/pull/476

!pip install git+https://github.com/huggingface/peft@27af2198225cbb9e049f548440f2bd0fba2204aa --force-reinstall --no-deps

younesbelkada commented 1 year ago

Hi there, now that https://github.com/huggingface/peft/pull/480 has been merged, you can re-install peft from source and you shouldn't get any error with respect to Linear4bit!
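
That is, something like:

pip install -U git+https://github.com/huggingface/peft.git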

i-am-neo commented 1 year ago

Hi @younesbelkada, I'm getting the same error: module 'bitsandbytes.nn' has no attribute 'Linear4bit'

Not sure whether to post here or somewhere else.

CUDA version:

Wed Jul  5 21:23:09 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.182.03   Driver Version: 470.182.03   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A10G         Off  | 00000000:00:1E.0 Off |                    0 |
|  0%   30C    P0    55W / 300W |      0MiB / 22731MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

I tried both of the below and get the same message. Any pointers?

pip install -q -U git+https://github.com/huggingface/peft.git

and

pip install bitsandbytes-cuda114

jiluojiluo commented 1 year ago

I finally got it running under Windows 11: change --quantization_bit 4 to --quantization_bit 8, then copy the whole of bitsandbytes-windows-main\bitsandbytes over C:\Users\<user>\AppData\Roaming\Python\Python310\site-packages\bitsandbytes. The root cause of the error is that bitsandbytes does not support Windows, and bitsandbytes-windows currently only supports 8-bit quantization.
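
For reference, the copy step would look roughly like this from a CMD prompt (a sketch using the paths from the comment above; /E copies all subdirectories, /I treats the target as a directory, /Y suppresses overwrite prompts; replace <user> with your own account name):

xcopy /E /I /Y bitsandbytes-windows-main\bitsandbytes C:\Users\<user>\AppData\Roaming\Python\Python310\site-packages\bitsandbytes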

younesbelkada commented 1 year ago

hi @i-am-neo can you try

pip install --upgrade bitsandbytes

instead of

pip install bitsandbytes-cuda114

ollimacp commented 1 year ago


hi @i-am-neo can you try

pip install --upgrade bitsandbytes

instead of

pip install bitsandbytes-cuda114

Yes, that resolved this error message, but there's another one ahead :D

swumagic commented 10 months ago

Bitsandbytes did not support Windows before, but this method makes it work on Windows. (yuhuang)

1. Open the folder J:\StableDiffusion\sdwebui, click the folder's address bar and type CMD (or press WIN+R and enter CMD), then run: cd /d J:\StableDiffusion\sdwebui

2. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes

3. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes-windows

4. J:\StableDiffusion\sdwebui\py310\python.exe -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl

Replace J:\StableDiffusion\sdwebui\py310 with your own SD venv directory (the folder that contains python.exe).
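
To confirm the wheel actually replaced the previous install, you can check the reported version afterwards with the same interpreter (it should show Version: 0.41.1):

J:\StableDiffusion\sdwebui\py310\python.exe -m pip show bitsandbytes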