KoboldAI / KoboldAI-Client

https://koboldai.com
GNU Affero General Public License v3.0

The process suddenly stops for some reason #426

Closed MiraiiF closed 4 months ago

MiraiiF commented 6 months ago

./play-rocm.sh
Colab Check: False, TPU: False
INFO | main::732 - We loaded the following model backends:
KoboldAI API
KoboldAI Old Colab Method
Basic Huggingface
ExLlama V2
Huggingface
GooseAI
Legacy GPTQ
Horde
KoboldCPP
OpenAI
Read Only
INFO | main:general_startup:1447 - Running on Repo: https://github.com/henk717/KoboldAI Branch: united
INIT | Starting | Flask
INIT | OK | Flask
INIT | Starting | Webserver
INIT | Searching | GPU support
INIT | Found | GPU support
INIT | Starting | LUA bridge
INIT | OK | LUA bridge
INIT | Starting | LUA Scripts
INIT | OK | LUA Scripts
Setting Seed
INIT | OK | Webserver
MESSAGE | Webserver started! You may now connect with a browser at http://127.0.0.1:5000
Connection Attempt: 127.0.0.1
INFO | main:do_connect:2587 - Client connected! UI_2
Connection Attempt: 127.0.0.1
INFO | main:do_connect:2587 - Client connected! UI_2
Connection Attempt: 127.0.0.1
INFO | main:do_connect:2587 - Client connected! UI_2
WARNING | modeling.inference_models.generic_hf_torch.class:get_requested_parameters:77 - Bitsandbytes is not installed, you can not use Quantization for Huggingface models
TODO: Allow config
INFO | modeling.inference_models.hf:set_input_parameters:198 - {'0_Layers': 32, '1_Layers': 0, 'CPU_Layers': 0, 'Disk_Layers': 0, 'class': 'model', 'label': 'Pygmalion 2.7B', 'id': 'PygmalionAI/pygmalion-2.7b', 'name': 'PygmalionAI/pygmalion-2.7b', 'size': '6GB', 'ismenu': 'false', 'isdownloaded': 'true', 'isdirectory': 'false', 'menu': 'chatlist', 'plugin': 'Huggingface'}
INIT | Searching | GPU support
INIT | Found | GPU support
Loading model tensors: 100%|##########| 484/484 [00:23<00:00, 20.74it/s]
INIT | Starting | LUA bridge
INIT | OK | LUA bridge
INIT | Starting | LUA Scripts
INIT | OK | LUA Scripts
Setting Seed
Connection Attempt: 127.0.0.1
INFO | main:do_connect:2587 - Client connected! UI_2
PROMPT @ 2024-01-31 00:14:29 | hi
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior.
Please pass your input's attention_mask to obtain reliable results.
Setting pad_token_id to eos_token_id:50256 for open-end generation.

After that, the process stops. Does anyone know how to solve this? I couldn't find anything about it online.
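For context, the last three lines of the log are a standard (and usually harmless) warning emitted by the Hugging Face transformers library's generate() when no attention mask or pad token id is supplied; it is not itself the crash. Below is a minimal, self-contained sketch of what passing those two arguments looks like, using a tiny randomly initialized GPT-2 built from a config (so nothing is downloaded). The model sizes and token ids here are illustrative stand-ins, not KoboldAI's actual generation code.

```python
# Hedged sketch: silencing the attention_mask / pad_token_id warning
# seen in the log, with a tiny locally constructed GPT-2 (no download).
# This is NOT KoboldAI's code path, just the transformers API it wraps.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny illustrative config; real models (e.g. pygmalion-2.7b) are far larger.
config = GPT2Config(n_layer=1, n_head=2, n_embd=8)
model = GPT2LMHeadModel(config)
model.eval()

# Arbitrary token ids standing in for a tokenized prompt.
input_ids = torch.tensor([[1, 2, 3]])
attention_mask = torch.ones_like(input_ids)  # all tokens are real, none padding

out = model.generate(
    input_ids,
    attention_mask=attention_mask,      # passing the mask avoids the warning
    pad_token_id=config.eos_token_id,   # explicit pad id for open-end generation
    max_new_tokens=4,
)
print(out.shape)  # 3 prompt tokens + 4 generated tokens
```

Since the warning is benign, the actual hang is more likely elsewhere (e.g. in the ROCm/GPU generation step that follows the prompt).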