The system cannot find the file specified.
The following flags have been taken from the environment variable 'OOBABOOGA_FLAGS':
--fkdlsja >nul 2>&1 & python bot.py --token --chat --model-menu --threads 6 --cpu --load-in-4bit --auto-launch
To use the CMD_FLAGS Inside webui.py, unset 'OOBABOOGA_FLAGS'.
bin E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll
E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
function 'cadam32bit_grad_fp32' not found
The following models are available:
Which one do you want to load? 1-6

5

INFO:Loading pygmalion-6b-v3-ggml-ggjt-q4_0.bin...
INFO:llama.cpp weights detected: models\pygmalion-6b-v3-ggml-ggjt-q4_0.bin
INFO:Cache capacity is 0 bytes
llama.cpp: loading model from models\pygmalion-6b-v3-ggml-ggjt-q4_0.bin
Traceback (most recent call last):
File "E:\etc\bot\ChatLLaMA\oobabooga_windows\text-generation-webui\bot.py", line 281, in <module>
shared.model, shared.tokenizer = load_model(shared.model_name)
File "E:\etc\bot\ChatLLaMA\oobabooga_windows\text-generation-webui\modules\models.py", line 97, in load_model
output = load_func(model_name)
File "E:\etc\bot\ChatLLaMA\oobabooga_windows\text-generation-webui\modules\models.py", line 274, in llamacpp_loader
model, tokenizer = LlamaCppModel.from_pretrained(model_file)
File "E:\etc\bot\ChatLLaMA\oobabooga_windows\text-generation-webui\modules\llamacpp_model.py", line 50, in from_pretrained
self.model = Llama(**params)
File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama.py", line 193, in __init__
self.ctx = llama_cpp.llama_init_from_file(
File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama_cpp.py", line 262, in llama_init_from_file
return _lib.llama_init_from_file(path_model, params)
OSError: [WinError -529697949] Windows Error 0xe06d7363
Exception ignored in: <function Llama.__del__ at 0x000002B2E96D9A20>
Traceback (most recent call last):
File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama.py", line 1333, in __del__
if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
Exception ignored in: <function LlamaCppModel.__del__ at 0x000002B2E968A3B0>
Traceback (most recent call last):
File "E:\etc\bot\ChatLLaMA\oobabooga_windows\text-generation-webui\modules\llamacpp_model.py", line 23, in __del__
self.model.__del__()
AttributeError: 'LlamaCppModel' object has no attribute 'model'
Press any key to continue . . .