pro0gaming closed this issue 1 year ago
I don't see an error, but you ignored the disallowed code warning for Pygmalion and are risking your Colab account, since Google banned the model. Hopefully you won't have the issue when you start a fresh Colab session without Pygmalion.
Oh, I'm sorry, I didn't see the warning.
I have a question: are all NSFW models banned, or are there some that are not? If there are, please give me examples.
All NSFW models are against the TOS, but if you don't get the disallowed code warning, that model is not currently banned.
The ColabKobold GPU notebook works fine at first, but it stops automatically and shows this message:

```
Cell has not been executed in this session. Previous execution ended unsuccessfully. Executed at unknown time.
```

I used pygmalion-2.7b, so I don't think the problem was that I was using too big a model. I tried almost all the models and none worked; they all show the same problem. I also tried changing the account and the browser, and that didn't solve it.

Here is the full output up to the point where the problem occurred:
```
Tue Jul 18 10:34:27 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   68C    P8    11W /  70W |      0MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Mounted at /content/drive/
--2023-07-18 10:35:15--  https://koboldai.org/ckds
Resolving koboldai.org (koboldai.org)... 104.21.21.176, 172.67.199.170, 2606:4700:3036::ac43:c7aa, ...
Connecting to koboldai.org (koboldai.org)|104.21.21.176|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://raw.githubusercontent.com/henk717/KoboldAI/united/colabkobold.sh [following]
--2023-07-18 10:35:15--  https://raw.githubusercontent.com/henk717/KoboldAI/united/colabkobold.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6797 (6.6K) [text/plain]
Saving to: ‘STDOUT’

2023-07-18 10:35:15 (14.9 MB/s) - written to stdout [6797/6797]
```
```
mkdir: cannot create directory ‘/content/drive/MyDrive/KoboldAI/’: File exists
mkdir: cannot create directory ‘/content/drive/MyDrive/KoboldAI/stories/’: File exists
mkdir: cannot create directory ‘/content/drive/MyDrive/KoboldAI/models/’: File exists
mkdir: cannot create directory ‘/content/drive/MyDrive/KoboldAI/settings/’: File exists
mkdir: cannot create directory ‘/content/drive/MyDrive/KoboldAI/softprompts/’: File exists
mkdir: cannot create directory ‘/content/drive/MyDrive/KoboldAI/userscripts/’: File exists
mkdir: cannot create directory ‘/content/drive/MyDrive/KoboldAI/presets/’: File exists
mkdir: cannot create directory ‘/content/drive/MyDrive/KoboldAI/themes/’: File exists
Initialized empty Git repository in /content/KoboldAI-Client/.git/
fatal: No such remote: 'origin'
Fetching origin
remote: Enumerating objects: 16864, done.
remote: Counting objects: 100% (5413/5413), done.
remote: Compressing objects: 100% (364/364), done.
remote: Total 16864 (delta 5102), reused 5221 (delta 5007), pack-reused 11451
Receiving objects: 100% (16864/16864), 21.49 MiB | 18.65 MiB/s, done.
Resolving deltas: 100% (11744/11744), done.
From https://github.com/henk717/KoboldAI-Client
```
```
Reading state information... Done
The following additional packages will be installed:
  libaria2-0 libc-ares2
The following NEW packages will be installed:
  aria2 libaria2-0 libc-ares2 netbase
0 upgraded, 4 newly installed, 0 to remove and 15 not upgraded.
Need to get 1,488 kB of archives.
After this operation, 6,003 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu focal/main amd64 netbase all 6.1 [13.1 kB]
Get:2 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 libc-ares2 amd64 1.15.0-1ubuntu0.3 [36.8 kB]
Get:3 http://archive.ubuntu.com/ubuntu focal/universe amd64 libaria2-0 amd64 1.35.0-1build1 [1,082 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal/universe amd64 aria2 amd64 1.35.0-1build1 [356 kB]
Fetched 1,488 kB in 1s (2,438 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 4.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Selecting previously unselected package netbase.
(Reading database ... 123105 files and directories currently installed.)
Preparing to unpack .../archives/netbase_6.1_all.deb ...
Unpacking netbase (6.1) ...
Selecting previously unselected package libc-ares2:amd64.
Preparing to unpack .../libc-ares2_1.15.0-1ubuntu0.3_amd64.deb ...
Unpacking libc-ares2:amd64 (1.15.0-1ubuntu0.3) ...
Selecting previously unselected package libaria2-0:amd64.
Preparing to unpack .../libaria2-0_1.35.0-1build1_amd64.deb ...
Unpacking libaria2-0:amd64 (1.35.0-1build1) ...
Selecting previously unselected package aria2.
Preparing to unpack .../aria2_1.35.0-1build1_amd64.deb ...
Unpacking aria2 (1.35.0-1build1) ...
Setting up libc-ares2:amd64 (1.15.0-1ubuntu0.3) ...
Setting up netbase (6.1) ...
Setting up libaria2-0:amd64 (1.35.0-1build1) ...
Setting up aria2 (1.35.0-1build1) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.9) ...
/tools/node/bin/lt -> /tools/node/lib/node_modules/localtunnel/bin/lt.js
```
```
Downloading (…)lve/main/config.json: 100% 1.49k/1.49k [00:00<00:00, 6.92MB/s]
TODO: Allow config
INFO       | modeling.inference_models.hf:set_input_parameters:190 - {'use_gpu': True, '0_Layers': 32, 'CPU_Layers': 0, 'Disk_Layers': 0, 'use_4_bit': False, 'id': 'PygmalionAI/pygmalion-2.7b', 'model': 'PygmalionAI/pygmalion-2.7b', 'path': None, 'menu_path': ''}
INIT       | Starting   | Flask
INIT       | OK         | Flask
INIT       | Starting   | Webserver
INIT       | OK         | Webserver
MESSAGE    | KoboldAI is available at the following link for UI 1: https://carb-expensive-original-arrange.trycloudflare.com/
MESSAGE    | KoboldAI is available at the following link for UI 2: https://carb-expensive-original-arrange.trycloudflare.com/new_ui
MESSAGE    | KoboldAI is available at the following link for KoboldAI Lite: https://carb-expensive-original-arrange.trycloudflare.com/lite
MESSAGE    | KoboldAI is available at the following link for the API: https://carb-expensive-original-arrange.trycloudflare.com/api
INIT       | Searching  | GPU support
INIT       | Found      | GPU support
[aria2] Downloading model: 100%|##########| 5.44G/5.44G [00:21<00:00, 248MB/s]
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
```
```
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('8013'), PosixPath('http')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https'), PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-t4-s-1joc22uqt674f --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('module'), PosixPath('//ipykernel.pylab.backend_inline')}
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward. Either way, this might cause trouble in the future: If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
Loading model tensors: 100%|##########| 484/484 [00:44<00:00, 10.78it/s]
Downloading (…)okenizer_config.json: 100% 717/717 [00:00<00:00, 4.27MB/s]
Downloading (…)olve/main/vocab.json: 100% 798k/798k [00:00<00:00, 4.26MB/s]
Downloading (…)olve/main/merges.txt: 100% 456k/456k [00:00<00:00, 3.68MB/s]
```
```
Downloading (…)cial_tokens_map.json: 100% 131/131 [00:00<00:00, 653kB/s]
INIT       | Starting   | LUA bridge
INIT       | OK         | LUA bridge
INIT       | Starting   | LUA Scripts
INIT       | OK         | LUA Scripts
Setting Seed
MESSAGE    | KoboldAI has finished loading and is available at the following link for UI 1: https://carb-expensive-original-arrange.trycloudflare.com/
MESSAGE    | KoboldAI has finished loading and is available at the following link for UI 2: https://carb-expensive-original-arrange.trycloudflare.com/new_ui
MESSAGE    | KoboldAI has finished loading and is available at the following link for KoboldAI Lite: https://carb-expensive-original-arrange.trycloudflare.com/lite
MESSAGE    | KoboldAI has finished loading and is available at the following link for the API: https://carb-expensive-original-arrange.trycloudflare.com/api
Connection Attempt: 127.0.0.1
INFO       | __main__:do_connect:2608 - Client connected! UI_1
Connection Attempt: 127.0.0.1
INFO       | __main__:do_connect:2608 - Client connected! UI_1
Connection Attempt: 127.0.0.1
INFO       | __main__:do_connect:2608 - Client connected! UI_1
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
/usr/local/lib/python3.10/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py:197: UserWarning: where received a uint8 condition tensor. This behavior is deprecated and will be removed in a future version of PyTorch. Use a boolean condition instead. (Triggered internally at ../aten/src/ATen/native/TensorCompare.cpp:493.)
  attn_weights = torch.where(causal_mask, attn_weights, mask_value)
(the attention mask / pad token id warning above then repeats four more times, once per generation)
```
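For what it's worth, the repeated pad-token warning at the end of the log is harmless and unrelated to the session stopping: transformers simply falls back to the EOS token id when no pad token is configured. A minimal sketch of that fallback in plain Python (the function name is ours, not the library's; 50256 is the GPT-2-family EOS id shown in the log):

```python
# Hedged sketch (not the library's actual code) of the fallback that
# transformers applies when generate() runs without an explicit pad_token_id.
def resolve_pad_token_id(pad_token_id, eos_token_id):
    """Fall back to eos_token_id when pad_token_id is unset, emitting
    the same message seen in the KoboldAI log."""
    if pad_token_id is None:
        print(f"Setting `pad_token_id` to `eos_token_id`:{eos_token_id} "
              "for open-end generation.")
        return eos_token_id
    return pad_token_id

# 50256 is the EOS id for GPT-2-style tokenizers such as pygmalion-2.7b's.
pad_id = resolve_pad_token_id(None, 50256)
```

Passing `pad_token_id=tokenizer.eos_token_id` (and the tokenizer's `attention_mask`) explicitly to `generate()` silences the warning, but it will not fix the Colab session ending; that is more likely the disallowed-model enforcement mentioned above.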