oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

8bit mode on older GPUs to reduce the VRAM usage by half #42

Closed lolxdmainkaisemaanlu closed 1 year ago

lolxdmainkaisemaanlu commented 1 year ago

EDIT2: It now gives this error while generating:

0%|          | 0/26 [00:00<?, ?it/s]
cuBLAS API failed with status 15
error detected
A: torch.Size([50, 2560]), B: torch.Size([2560, 2560]), C: (50, 2560); (lda, ldb, ldc): (c_long(1600), c_long(81920), c_long(1600)); (m, n, k): (c_long(50), c_long(2560), c_long(2560))
0%|          | 0/26 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\gradio\blocks.py", line 868, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\gradio\utils.py", line 408, in async_iteration
    return next(iterator)
  File "C:\Users\Siddhesh\Desktop\text-generation-webui\server.py", line 426, in cai_chatbot_wrapper
    for _history in chatbot_wrapper(text, tokens, inference_settings, selected_model, name1, name2, context, check, history_size):
  File "C:\Users\Siddhesh\Desktop\text-generation-webui\server.py", line 404, in chatbot_wrapper
    for reply in generate_reply(question, tokens, inference_settings, selected_model, eos_token=eos_token, stopping_string=f"\n{name1}:"):
  File "C:\Users\Siddhesh\Desktop\text-generation-webui\server.py", line 217, in generate_reply
    output = eval(f"model.generate(input_ids, eos_token_id={n}, stopping_criteria=stopping_criteria_list, {preset}){cuda}")
  File "<string>", line 1, in <module>
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\generation\utils.py", line 1571, in generate
    return self.sample(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\generation\utils.py", line 2534, in sample
    outputs = self(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 156, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 744, in forward
    transformer_outputs = self.transformer(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 156, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 623, in forward
    outputs = block(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 156, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 328, in forward
    attn_outputs = self.attn(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 156, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 280, in forward
    return self.attention(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 156, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 224, in forward
    query = self.q_proj(hidden_states)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 156, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\nn\modules.py", line 254, in forward
    out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\autograd\_functions.py", line 405, in matmul
    return MatMul8bitLt.apply(A, B, out, bias, state)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\autograd\_functions.py", line 311, in forward
    out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\functional.py", line 1410, in igemmlt
    raise Exception('cublasLt ran into an error!')
Exception: cublasLt ran into an error!

EDIT: I managed to get it to work! This helped me further: https://github.com/oobabooga/text-generation-webui/issues/20#issuecomment-1411650652

Pygmalion 2.7B used to take around 5.9 GB of my GTX 1060's 6 GB VRAM, but now it only takes 3.8 GB (of which about 0.4 GB is probably used by the system, since I don't have integrated graphics)! I'll see if I can fit Pygmalion 6B on my 6 GB VRAM + 16 GB RAM + NVMe.
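For reference, here is a minimal sketch of what 8-bit loading with CPU offload looks like outside the webui, using the standard transformers + accelerate + bitsandbytes path. The model id and the memory budgets below are assumptions for illustration, not values from this issue:

```python
# Minimal sketch of 8-bit loading plus CPU offload for a model that does not
# fully fit in 6 GB of VRAM. Assumed: model id "PygmalionAI/pygmalion-6b" and
# the max_memory budgets below; adjust them to your own hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PygmalionAI/pygmalion-6b"  # assumed model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,                        # bitsandbytes LLM.int8() quantization
    device_map="auto",                        # let accelerate place layers on GPU/CPU
    max_memory={0: "5GiB", "cpu": "14GiB"},   # leave headroom on the 6 GB card
    torch_dtype=torch.float16,
)

inputs = tokenizer("Hello,", return_tensors="pt").to(0)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```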

https://github.com/james-things/bitsandbytes-prebuilt-all_arch

^^ "This repository contains builds of the bitsandbytes library compiled with the "all" option for GPU architecture support. They are useful if you are running into issues running bitsandbytes on older Nvidia GPUs. In theory, support exists for Kepler, Maxwell, Pascal, Volta, and newer GPUs."

It personally works for me for training LoRAs with Kohya's scripts. I have a GTX 1060 6GB, which bitsandbytes does not support out of the box, but it trains fine when I use these prebuilt bitsandbytes binaries.

https://github.com/kohya-ss/sd-scripts/issues/44#issuecomment-1375690372 I tried to follow the same steps from this comment, which worked for me previously, but since I'm not really a coder, I'm having difficulty implementing them here.
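Roughly, the workaround in those linked comments amounts to dropping a prebuilt "all architectures" bitsandbytes DLL into the installed package. The file name and download path below are assumptions taken from the james-things repo, not verified here; check that repo's README for the exact names for your bitsandbytes version:

```python
# Rough sketch of the DLL-swap workaround, NOT an official procedure.
# Assumed: the prebuilt file is named "libbitsandbytes_cudaall.dll" and was
# downloaded to C:\Downloads; both are placeholders.
import shutil
from pathlib import Path

import bitsandbytes

prebuilt_dll = Path(r"C:\Downloads\libbitsandbytes_cudaall.dll")  # assumed download path
package_dir = Path(bitsandbytes.__file__).parent                  # ...\site-packages\bitsandbytes

shutil.copy(prebuilt_dll, package_dir / prebuilt_dll.name)
print(f"Copied {prebuilt_dll.name} into {package_dir}")

# Depending on the bitsandbytes version, you may also need to edit
# bitsandbytes\cuda_setup\main.py so it loads this DLL instead of the default
# one - follow the instructions in the prebuilt repo for that step.
```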

Ph0rk0z commented 1 year ago

I think this cannot work because your GPU is missing int8 matrix multiplication. I have asked on the bitsandbytes repo whether this will ever be possible on Pascal GPUs. I think the int8 matmul function is required for both this and Kobold, but for training it might not be?
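For context, a quick way to check whether a card has the hardware path that bitsandbytes' int8 inference kernel (igemmlt via cublasLt) relies on. The 7.5 (Turing) threshold below is the commonly cited requirement, stated here as an assumption rather than something confirmed in this thread:

```python
# Check the GPU's compute capability against the assumed igemmlt requirement.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")

if (major, minor) >= (7, 5):
    print("int8 tensor-core matmul (igemmlt) should be available.")
else:
    print("Pascal or older GPU: igemmlt is unsupported; 8-bit inference needs a "
          "fallback build, while 8-bit optimizers for training may still work.")
```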

lolxdmainkaisemaanlu commented 1 year ago

Getting this error: RuntimeError: probability tensor contains either inf, nan or element < 0

(textgen) C:\Users\Siddhesh\Desktop\text-generation-webui>python server.py --load-in-8bit --cai-chat --no-stream
Loading pygmalion-2.7b...

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
Loaded the model in 5.64 seconds.
Running on local URL: http://127.0.0.1:7860/

To create a public link, set share=True in launch().

Traceback (most recent call last):
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\gradio\blocks.py", line 868, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\gradio\utils.py", line 408, in async_iteration
    return next(iterator)
  File "C:\Users\Siddhesh\Desktop\text-generation-webui\server.py", line 465, in cai_chatbot_wrapper
    for _history in chatbot_wrapper(text, tokens, inference_settings, selected_model, name1, name2, context, check, history_size):
  File "C:\Users\Siddhesh\Desktop\text-generation-webui\server.py", line 443, in chatbot_wrapper
    for reply in generate_reply(question, tokens, inference_settings, selected_model, eos_token=eos_token, stopping_string=f"\n{name1}:"):
  File "C:\Users\Siddhesh\Desktop\text-generation-webui\server.py", line 242, in generate_reply
    output = eval(f"model.generate(input_ids, {','.join(generate_params)}, {preset}){cuda}")
  File "<string>", line 1, in <module>
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\generation\utils.py", line 1571, in generate
    return self.sample(
  File "C:\Users\Siddhesh\miniconda3\envs\textgen\lib\site-packages\transformers\generation\utils.py", line 2570, in sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either inf, nan or element < 0
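This error is raised by torch.multinomial when the probabilities it is given contain NaN or inf, which typically means the logits coming out of the model are already bad. A minimal way to confirm the problem is upstream of the sampler, assuming a model and tokenizer already loaded in 8-bit (e.g. as in the earlier sketch):

```python
# Sanity check: run one forward pass and inspect the logits before sampling.
# If they already contain NaN/inf, the failure is in the quantized matmul on
# this GPU, not in the sampling settings. Assumes `model` and `tokenizer`
# are already loaded with load_in_8bit=True.
import torch

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(0)
with torch.no_grad():
    logits = model(**inputs).logits

print("NaNs in logits:", torch.isnan(logits).any().item())
print("Infs in logits:", torch.isinf(logits).any().item())
```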

github-actions[bot] commented 1 year ago

This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, you can reopen it (if you are the author) or leave a comment below.