oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

Windows installer broken #4387

Closed SoftologyPro closed 11 months ago

SoftologyPro commented 1 year ago

Describe the bug

Windows installer broken.

md oobabooga
cd oobabooga
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
start_windows.bat

Is there an existing issue for this?

Reproduction

md oobabooga
cd oobabooga
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
start_windows.bat

Select NVIDIA and CUDA 11.8 during install.

Edit: since posting this error, I reinstalled selecting NVIDIA and CUDA 12.1 (I only have CUDA 11.8 installed system-wide) and it worked! Shouldn't the 12.1 option require CUDA 12.1 to be installed? If the 12.1 option works with only CUDA 11.8 present, maybe just default to 12.1 and don't prompt the user.
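For context on why the 12.1 option can work without a system-wide CUDA 12.1 install: the PyTorch wheels served from download.pytorch.org bundle their own CUDA runtime libraries, and the bundled version is encoded in the wheel's `+cuXYZ` local version tag (visible in the log below, e.g. `torch-2.1.0+cu118`). The system CUDA toolkit version therefore doesn't need to match; only the NVIDIA driver has to be new enough for the bundled runtime. A minimal sketch of reading that tag (the `bundled_cuda` helper is hypothetical, for illustration only):

```python
# Hypothetical helper: extract the CUDA runtime version a PyTorch wheel
# bundles, from its filename. The "+cuXYZ" local version segment names the
# CUDA runtime shipped inside the wheel itself, which is why the installed
# system toolkit does not need to match it.

def bundled_cuda(wheel_name: str):
    """Return e.g. '11.8' for a '+cu118' wheel, or None for CPU-only wheels."""
    version_part = wheel_name.split("-")[1]   # e.g. '2.1.0+cu118'
    if "+cu" not in version_part:
        return None
    tag = version_part.split("+cu")[1]        # e.g. '118'
    return f"{tag[:-1]}.{tag[-1]}"            # -> '11.8'

print(bundled_cuda("torch-2.1.0+cu118-cp311-cp311-win_amd64.whl"))  # 11.8
print(bundled_cuda("torch-2.1.0+cu121-cp311-cp311-win_amd64.whl"))  # 12.1
```

So both the cu118 and cu121 wheels are self-contained with respect to CUDA libraries, which is consistent with the reporter's observation that the 12.1 option runs on a machine with only CUDA 11.8 installed.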

Screenshot

No response

Logs

D:\>md oobabooga

D:\>cd oobabooga

D:\oobabooga>git clone https://github.com/oobabooga/text-generation-webui
Cloning into 'text-generation-webui'...
remote: Enumerating objects: 13325, done.
remote: Counting objects: 100% (2485/2485), done.
remote: Compressing objects: 100% (337/337), done.

Resolving deltas: 100% (9109/9109), done.

D:\oobabooga>cd text-generation-webui

D:\oobabooga\text-generation-webui>start_windows.bat
Downloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Windows-x86_64.exe to D:\oobabooga\text-generation-webui\installer_files\miniconda_installer.exe
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 53.8M  100 53.8M    0     0  9040k      0  0:00:06  0:00:06 --:--:-- 9999k
Installing Miniconda to D:\oobabooga\text-generation-webui\installer_files\conda
Miniconda version:
conda 22.11.1
Packages to install:
Collecting package metadata (current_repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: D:\oobabooga\text-generation-webui\installer_files\env

  added / updated specs:
    - python=3.11

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    ca-certificates-2023.08.22 |       haa95532_0         123 KB
    libffi-3.4.4               |       hd77b12b_0         113 KB
    openssl-3.0.11             |       h2bbff1b_2         7.4 MB
    pip-23.3                   |  py311haa95532_0         3.5 MB
    python-3.11.5              |       he1021f5_0        18.0 MB
    setuptools-68.0.0          |  py311haa95532_0         1.2 MB
    sqlite-3.41.2              |       h2bbff1b_0         894 KB
    wheel-0.41.2               |  py311haa95532_0         163 KB
    xz-5.4.2                   |       h8cc25b3_0         592 KB
    ------------------------------------------------------------
                                           Total:        32.0 MB

The following NEW packages will be INSTALLED:

  bzip2              pkgs/main/win-64::bzip2-1.0.8-he774522_0
  ca-certificates    pkgs/main/win-64::ca-certificates-2023.08.22-haa95532_0
  libffi             pkgs/main/win-64::libffi-3.4.4-hd77b12b_0
  openssl            pkgs/main/win-64::openssl-3.0.11-h2bbff1b_2
  pip                pkgs/main/win-64::pip-23.3-py311haa95532_0
  python             pkgs/main/win-64::python-3.11.5-he1021f5_0
  setuptools         pkgs/main/win-64::setuptools-68.0.0-py311haa95532_0
  sqlite             pkgs/main/win-64::sqlite-3.41.2-h2bbff1b_0
  tk                 pkgs/main/win-64::tk-8.6.12-h2bbff1b_0
  tzdata             pkgs/main/noarch::tzdata-2023c-h04d1e81_0
  vc                 pkgs/main/win-64::vc-14.2-h21ff451_1
  vs2015_runtime     pkgs/main/win-64::vs2015_runtime-14.27.29016-h5e58377_2
  wheel              pkgs/main/win-64::wheel-0.41.2-py311haa95532_0
  xz                 pkgs/main/win-64::xz-5.4.2-h8cc25b3_0
  zlib               pkgs/main/win-64::zlib-1.2.13-h8cc25b3_0

Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate D:\oobabooga\text-generation-webui\installer_files\env
#
# To deactivate an active environment, use
#
#     $ conda deactivate

What is your GPU?

A) NVIDIA
B) AMD (Linux/MacOS only. Requires ROCm SDK 5.6 on Linux)
C) Apple M Series
D) Intel Arc (IPEX)
N) None (I want to run models in CPU mode)

Input> A

Would you like to use CUDA 11.8 instead of 12.1? This is only necessary for older GPUs like Kepler.
If unsure, say "N".

Input (Y/N)> Y
CUDA: 11.8
Collecting package metadata (current_repodata.json): done
Solving environment: done

==> WARNING: A newer version of conda exists. <==
  current version: 23.3.1
  latest version: 23.9.0

Please update conda by running

    $ conda update -n base -c defaults conda

Or to minimize the number of packages updated during conda update use

     conda install conda=23.9.0

## Package Plan ##

  environment location: D:\oobabooga\text-generation-webui\installer_files\env

  added / updated specs:
    - git
    - ninja

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    git-2.40.1                 |       haa95532_1        69.2 MB
    ninja-1.10.2               |       haa95532_5          14 KB
    ninja-base-1.10.2          |       h6d14046_5         255 KB
    ------------------------------------------------------------
                                           Total:        69.5 MB

The following NEW packages will be INSTALLED:

  git                pkgs/main/win-64::git-2.40.1-haa95532_1
  ninja              pkgs/main/win-64::ninja-1.10.2-haa95532_5
  ninja-base         pkgs/main/win-64::ninja-base-1.10.2-h6d14046_5

Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Looking in indexes: https://download.pytorch.org/whl/cu118
Collecting torch
  Using cached https://download.pytorch.org/whl/cu118/torch-2.1.0%2Bcu118-cp311-cp311-win_amd64.whl (2722.7 MB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu118/torchvision-0.16.0%2Bcu118-cp311-cp311-win_amd64.whl (5.0 MB)
Collecting torchaudio
  Using cached https://download.pytorch.org/whl/cu118/torchaudio-2.1.0%2Bcu118-cp311-cp311-win_amd64.whl (3.9 MB)
Collecting filelock (from torch)
  Using cached https://download.pytorch.org/whl/filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting typing-extensions (from torch)
  Using cached https://download.pytorch.org/whl/typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting sympy (from torch)
  Using cached https://download.pytorch.org/whl/sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting networkx (from torch)
  Using cached https://download.pytorch.org/whl/networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting jinja2 (from torch)
  Using cached https://download.pytorch.org/whl/Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting fsspec (from torch)
  Using cached https://download.pytorch.org/whl/fsspec-2023.4.0-py3-none-any.whl (153 kB)
Collecting numpy (from torchvision)
  Using cached https://download.pytorch.org/whl/numpy-1.24.1-cp311-cp311-win_amd64.whl (14.8 MB)
Collecting requests (from torchvision)
  Using cached https://download.pytorch.org/whl/requests-2.28.1-py3-none-any.whl (62 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Using cached https://download.pytorch.org/whl/Pillow-9.3.0-cp311-cp311-win_amd64.whl (2.5 MB)
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Using cached https://download.pytorch.org/whl/MarkupSafe-2.1.2-cp311-cp311-win_amd64.whl (16 kB)
Collecting charset-normalizer<3,>=2 (from requests->torchvision)
  Using cached https://download.pytorch.org/whl/charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting idna<4,>=2.5 (from requests->torchvision)
  Using cached https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<1.27,>=1.21.1 (from requests->torchvision)
  Using cached https://download.pytorch.org/whl/urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision)
  Using cached https://download.pytorch.org/whl/certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting mpmath>=0.19 (from sympy->torch)
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.2 certifi-2022.12.7 charset-normalizer-2.1.1 filelock-3.9.0 fsspec-2023.4.0 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.0 numpy-1.24.1 pillow-9.3.0 requests-2.28.1 sympy-1.12 torch-2.1.0+cu118 torchaudio-2.1.0+cu118 torchvision-0.16.0+cu118 typing-extensions-4.4.0 urllib3-1.26.13
Collecting py-cpuinfo==9.0.0
  Using cached py_cpuinfo-9.0.0-py3-none-any.whl (22 kB)
Installing collected packages: py-cpuinfo
Successfully installed py-cpuinfo-9.0.0
Collecting package metadata (current_repodata.json): done
Solving environment: done

==> WARNING: A newer version of conda exists. <==
  current version: 23.3.1
  latest version: 23.9.0

Please update conda by running

    $ conda update -n base -c defaults conda

Or to minimize the number of packages updated during conda update use

     conda install conda=23.9.0

## Package Plan ##

  environment location: D:\oobabooga\text-generation-webui\installer_files\env

  added / updated specs:
    - cuda-runtime

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    cuda-cudart-11.8.89        |                0         1.4 MB  nvidia/label/cuda-11.8.0
    cuda-libraries-11.8.0      |                0           1 KB  nvidia/label/cuda-11.8.0
    cuda-nvrtc-11.8.89         |                0        72.1 MB  nvidia/label/cuda-11.8.0
    cuda-runtime-11.8.0        |                0           1 KB  nvidia/label/cuda-11.8.0
    libcublas-11.11.3.6        |                0          33 KB  nvidia/label/cuda-11.8.0
    libcufft-10.9.0.58         |                0           6 KB  nvidia/label/cuda-11.8.0
    libcurand-10.3.0.86        |                0           3 KB  nvidia/label/cuda-11.8.0
    libcusolver-11.4.1.48      |                0          29 KB  nvidia/label/cuda-11.8.0
    libcusparse-11.7.5.86      |                0          13 KB  nvidia/label/cuda-11.8.0
    libnpp-11.8.0.86           |                0         294 KB  nvidia/label/cuda-11.8.0
    libnvjpeg-11.9.0.86        |                0           4 KB  nvidia/label/cuda-11.8.0
    ------------------------------------------------------------
                                           Total:        73.9 MB

The following NEW packages will be INSTALLED:

  cuda-cudart        nvidia/label/cuda-11.8.0/win-64::cuda-cudart-11.8.89-0
  cuda-libraries     nvidia/label/cuda-11.8.0/win-64::cuda-libraries-11.8.0-0
  cuda-nvrtc         nvidia/label/cuda-11.8.0/win-64::cuda-nvrtc-11.8.89-0
  cuda-runtime       nvidia/label/cuda-11.8.0/win-64::cuda-runtime-11.8.0-0
  libcublas          nvidia/label/cuda-11.8.0/win-64::libcublas-11.11.3.6-0
  libcufft           nvidia/label/cuda-11.8.0/win-64::libcufft-10.9.0.58-0
  libcurand          nvidia/label/cuda-11.8.0/win-64::libcurand-10.3.0.86-0
  libcusolver        nvidia/label/cuda-11.8.0/win-64::libcusolver-11.4.1.48-0
  libcusparse        nvidia/label/cuda-11.8.0/win-64::libcusparse-11.7.5.86-0
  libnpp             nvidia/label/cuda-11.8.0/win-64::libnpp-11.8.0.86-0
  libnvjpeg          nvidia/label/cuda-11.8.0/win-64::libnvjpeg-11.9.0.86-0

Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Already up to date.

*******************************************************************
* Installing extensions requirements.
*******************************************************************

Collecting flask_cloudflared==0.0.14 (from -r extensions\api\requirements.txt (line 1))
  Using cached flask_cloudflared-0.0.14-py3-none-any.whl.metadata (4.6 kB)
Collecting websockets==11.0.2 (from -r extensions\api\requirements.txt (line 2))
  Using cached websockets-11.0.2-cp311-cp311-win_amd64.whl (124 kB)
Collecting Flask>=0.8 (from flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1))
  Using cached flask-3.0.0-py3-none-any.whl.metadata (3.6 kB)
Requirement already satisfied: requests in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1)) (2.28.1)
Collecting Werkzeug>=3.0.0 (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1))
  Using cached werkzeug-3.0.1-py3-none-any.whl.metadata (4.1 kB)
Requirement already satisfied: Jinja2>=3.1.2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1)) (3.1.2)
Collecting itsdangerous>=2.1.2 (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1))
  Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting click>=8.1.3 (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1))
  Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting blinker>=1.6.2 (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1))
  Using cached blinker-1.6.3-py3-none-any.whl.metadata (1.9 kB)
Requirement already satisfied: charset-normalizer<3,>=2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1)) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1)) (1.26.13)
Requirement already satisfied: certifi>=2017.4.17 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1)) (2022.12.7)
Collecting colorama (from click>=8.1.3->Flask>=0.8->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1))
  Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Requirement already satisfied: MarkupSafe>=2.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from Jinja2>=3.1.2->Flask>=0.8->flask_cloudflared==0.0.14->-r extensions\api\requirements.txt (line 1)) (2.1.2)
Using cached flask_cloudflared-0.0.14-py3-none-any.whl (6.4 kB)
Using cached flask-3.0.0-py3-none-any.whl (99 kB)
Using cached blinker-1.6.3-py3-none-any.whl (13 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached werkzeug-3.0.1-py3-none-any.whl (226 kB)
Installing collected packages: Werkzeug, websockets, itsdangerous, colorama, blinker, click, Flask, flask_cloudflared
Successfully installed Flask-3.0.0 Werkzeug-3.0.1 blinker-1.6.3 click-8.1.7 colorama-0.4.6 flask_cloudflared-0.0.14 itsdangerous-2.1.2 websockets-11.0.2
Collecting elevenlabs==0.2.24 (from -r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached elevenlabs-0.2.24-py3-none-any.whl.metadata (811 bytes)
Collecting pydantic<2.0,>=1.10 (from elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached pydantic-1.10.13-cp311-cp311-win_amd64.whl.metadata (150 kB)
Collecting ipython>=7.0 (from elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached ipython-8.16.1-py3-none-any.whl.metadata (5.9 kB)
Requirement already satisfied: requests>=2.20 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (2.28.1)
Requirement already satisfied: websockets>=11.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (11.0.2)
Collecting backcall (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB)
Collecting decorator (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting jedi>=0.16 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached jedi-0.19.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting matplotlib-inline (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached matplotlib_inline-0.1.6-py3-none-any.whl (9.4 kB)
Collecting pickleshare (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Collecting prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached prompt_toolkit-3.0.39-py3-none-any.whl.metadata (6.4 kB)
Collecting pygments>=2.4.0 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached Pygments-2.16.1-py3-none-any.whl.metadata (2.5 kB)
Collecting stack-data (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached stack_data-0.6.3-py3-none-any.whl.metadata (18 kB)
Collecting traitlets>=5 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached traitlets-5.12.0-py3-none-any.whl.metadata (10 kB)
Requirement already satisfied: colorama in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (0.4.6)
Requirement already satisfied: typing-extensions>=4.2.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from pydantic<2.0,>=1.10->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (4.4.0)
Requirement already satisfied: charset-normalizer<3,>=2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.20->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.20->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.20->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (1.26.13)
Requirement already satisfied: certifi>=2017.4.17 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.20->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (2022.12.7)
Collecting parso<0.9.0,>=0.8.3 (from jedi>=0.16->ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached parso-0.8.3-py2.py3-none-any.whl (100 kB)
Collecting wcwidth (from prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30->ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached wcwidth-0.2.8-py2.py3-none-any.whl.metadata (13 kB)
Collecting executing>=1.2.0 (from stack-data->ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached executing-2.0.0-py2.py3-none-any.whl.metadata (9.0 kB)
Collecting asttokens>=2.1.0 (from stack-data->ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached asttokens-2.4.0-py2.py3-none-any.whl.metadata (4.9 kB)
Collecting pure-eval (from stack-data->ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached pure_eval-0.2.2-py3-none-any.whl (11 kB)
Collecting six>=1.12.0 (from asttokens>=2.1.0->stack-data->ipython>=7.0->elevenlabs==0.2.24->-r extensions\elevenlabs_tts\requirements.txt (line 1))
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Using cached elevenlabs-0.2.24-py3-none-any.whl (16 kB)
Using cached ipython-8.16.1-py3-none-any.whl (806 kB)
Using cached pydantic-1.10.13-cp311-cp311-win_amd64.whl (2.1 MB)
Using cached jedi-0.19.1-py2.py3-none-any.whl (1.6 MB)
Using cached prompt_toolkit-3.0.39-py3-none-any.whl (385 kB)
Using cached Pygments-2.16.1-py3-none-any.whl (1.2 MB)
Using cached traitlets-5.12.0-py3-none-any.whl (84 kB)
Using cached stack_data-0.6.3-py3-none-any.whl (24 kB)
Using cached asttokens-2.4.0-py2.py3-none-any.whl (27 kB)
Using cached executing-2.0.0-py2.py3-none-any.whl (24 kB)
Using cached wcwidth-0.2.8-py2.py3-none-any.whl (31 kB)
Installing collected packages: wcwidth, pure-eval, pickleshare, executing, backcall, traitlets, six, pygments, pydantic, prompt-toolkit, parso, decorator, matplotlib-inline, jedi, asttokens, stack-data, ipython, elevenlabs
Successfully installed asttokens-2.4.0 backcall-0.2.0 decorator-5.1.1 elevenlabs-0.2.24 executing-2.0.0 ipython-8.16.1 jedi-0.19.1 matplotlib-inline-0.1.6 parso-0.8.3 pickleshare-0.7.5 prompt-toolkit-3.0.39 pure-eval-0.2.2 pydantic-1.10.13 pygments-2.16.1 six-1.16.0 stack-data-0.6.3 traitlets-5.12.0 wcwidth-0.2.8
Collecting deep-translator==1.9.2 (from -r extensions\google_translate\requirements.txt (line 1))
  Using cached deep_translator-1.9.2-py3-none-any.whl (30 kB)
Collecting beautifulsoup4<5.0.0,>=4.9.1 (from deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1))
  Using cached beautifulsoup4-4.12.2-py3-none-any.whl (142 kB)
Requirement already satisfied: requests<3.0.0,>=2.23.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (2.28.1)
Collecting soupsieve>1.2 (from beautifulsoup4<5.0.0,>=4.9.1->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1))
  Using cached soupsieve-2.5-py3-none-any.whl.metadata (4.7 kB)
Requirement already satisfied: charset-normalizer<3,>=2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (1.26.13)
Requirement already satisfied: certifi>=2017.4.17 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (2022.12.7)
Using cached soupsieve-2.5-py3-none-any.whl (36 kB)
Installing collected packages: soupsieve, beautifulsoup4, deep-translator
Successfully installed beautifulsoup4-4.12.2 deep-translator-1.9.2 soupsieve-2.5
Collecting ngrok==0.* (from -r extensions\ngrok\requirements.txt (line 1))
  Using cached ngrok-0.12.0-cp37-abi3-win_amd64.whl.metadata (17 kB)
Using cached ngrok-0.12.0-cp37-abi3-win_amd64.whl (3.0 MB)
Installing collected packages: ngrok
Successfully installed ngrok-0.12.0
Collecting SpeechRecognition==3.10.0 (from -r extensions\openai\requirements.txt (line 1))
  Using cached SpeechRecognition-3.10.0-py2.py3-none-any.whl (32.8 MB)
Collecting flask_cloudflared==0.0.12 (from -r extensions\openai\requirements.txt (line 2))
  Using cached flask_cloudflared-0.0.12-py3-none-any.whl (6.3 kB)
Collecting sentence-transformers (from -r extensions\openai\requirements.txt (line 3))
  Using cached sentence_transformers-2.2.2-py3-none-any.whl
Collecting tiktoken (from -r extensions\openai\requirements.txt (line 4))
  Using cached tiktoken-0.5.1-cp311-cp311-win_amd64.whl.metadata (6.8 kB)
Requirement already satisfied: requests>=2.26.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from SpeechRecognition==3.10.0->-r extensions\openai\requirements.txt (line 1)) (2.28.1)
Requirement already satisfied: Flask>=0.8 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from flask_cloudflared==0.0.12->-r extensions\openai\requirements.txt (line 2)) (3.0.0)
Collecting transformers<5.0.0,>=4.6.0 (from sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached transformers-4.34.1-py3-none-any.whl.metadata (121 kB)
Collecting tqdm (from sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Requirement already satisfied: torch>=1.6.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (2.1.0+cu118)
Requirement already satisfied: torchvision in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (0.16.0+cu118)
Requirement already satisfied: numpy in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (1.24.1)
Collecting scikit-learn (from sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached scikit_learn-1.3.2-cp311-cp311-win_amd64.whl.metadata (11 kB)
Collecting scipy (from sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached scipy-1.11.3-cp311-cp311-win_amd64.whl.metadata (60 kB)
Collecting nltk (from sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached nltk-3.8.1-py3-none-any.whl (1.5 MB)
Collecting sentencepiece (from sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached sentencepiece-0.1.99-cp311-cp311-win_amd64.whl (977 kB)
Collecting huggingface-hub>=0.4.0 (from sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached huggingface_hub-0.18.0-py3-none-any.whl.metadata (13 kB)
Collecting regex>=2022.1.18 (from tiktoken->-r extensions\openai\requirements.txt (line 4))
  Using cached regex-2023.10.3-cp311-cp311-win_amd64.whl.metadata (41 kB)
Requirement already satisfied: Werkzeug>=3.0.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\openai\requirements.txt (line 2)) (3.0.1)
Requirement already satisfied: Jinja2>=3.1.2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\openai\requirements.txt (line 2)) (3.1.2)
Requirement already satisfied: itsdangerous>=2.1.2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\openai\requirements.txt (line 2)) (2.1.2)
Requirement already satisfied: click>=8.1.3 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\openai\requirements.txt (line 2)) (8.1.7)
Requirement already satisfied: blinker>=1.6.2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\openai\requirements.txt (line 2)) (1.6.3)
Requirement already satisfied: filelock in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from huggingface-hub>=0.4.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (3.9.0)
Collecting fsspec>=2023.5.0 (from huggingface-hub>=0.4.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached fsspec-2023.10.0-py3-none-any.whl.metadata (6.8 kB)
Collecting pyyaml>=5.1 (from huggingface-hub>=0.4.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached PyYAML-6.0.1-cp311-cp311-win_amd64.whl.metadata (2.1 kB)
Requirement already satisfied: typing-extensions>=3.7.4.3 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from huggingface-hub>=0.4.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (4.4.0)
Collecting packaging>=20.9 (from huggingface-hub>=0.4.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Requirement already satisfied: charset-normalizer<3,>=2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions\openai\requirements.txt (line 1)) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions\openai\requirements.txt (line 1)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions\openai\requirements.txt (line 1)) (1.26.13)
Requirement already satisfied: certifi>=2017.4.17 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions\openai\requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: sympy in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from torch>=1.6.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (1.12)
Requirement already satisfied: networkx in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from torch>=1.6.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (3.0)
Requirement already satisfied: colorama in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from tqdm->sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (0.4.6)
Collecting tokenizers<0.15,>=0.14 (from transformers<5.0.0,>=4.6.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached tokenizers-0.14.1-cp311-none-win_amd64.whl.metadata (6.8 kB)
Collecting safetensors>=0.3.1 (from transformers<5.0.0,>=4.6.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached safetensors-0.4.0-cp311-none-win_amd64.whl.metadata (3.8 kB)
Collecting joblib (from nltk->sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached joblib-1.3.2-py3-none-any.whl.metadata (5.4 kB)
Collecting threadpoolctl>=2.0.0 (from scikit-learn->sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached threadpoolctl-3.2.0-py3-none-any.whl.metadata (10.0 kB)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from torchvision->sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (9.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from Jinja2>=3.1.2->Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\openai\requirements.txt (line 2)) (2.1.2)
Collecting huggingface-hub>=0.4.0 (from sentence-transformers->-r extensions\openai\requirements.txt (line 3))
  Using cached huggingface_hub-0.17.3-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: fsspec in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from torch>=1.6.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (2023.4.0)
Requirement already satisfied: mpmath>=0.19 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from sympy->torch>=1.6.0->sentence-transformers->-r extensions\openai\requirements.txt (line 3)) (1.3.0)
Using cached tiktoken-0.5.1-cp311-cp311-win_amd64.whl (759 kB)
Using cached regex-2023.10.3-cp311-cp311-win_amd64.whl (269 kB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached transformers-4.34.1-py3-none-any.whl (7.7 MB)
Using cached scikit_learn-1.3.2-cp311-cp311-win_amd64.whl (9.2 MB)
Using cached scipy-1.11.3-cp311-cp311-win_amd64.whl (44.1 MB)
Using cached fsspec-2023.10.0-py3-none-any.whl (166 kB)
Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Using cached PyYAML-6.0.1-cp311-cp311-win_amd64.whl (144 kB)
Using cached safetensors-0.4.0-cp311-none-win_amd64.whl (277 kB)
Using cached threadpoolctl-3.2.0-py3-none-any.whl (15 kB)
Using cached tokenizers-0.14.1-cp311-none-win_amd64.whl (2.2 MB)
Using cached huggingface_hub-0.17.3-py3-none-any.whl (295 kB)
Installing collected packages: sentencepiece, tqdm, threadpoolctl, scipy, safetensors, regex, pyyaml, packaging, joblib, fsspec, tiktoken, SpeechRecognition, scikit-learn, nltk, huggingface-hub, tokenizers, flask_cloudflared, transformers, sentence-transformers
  Attempting uninstall: fsspec
    Found existing installation: fsspec 2023.4.0
    Uninstalling fsspec-2023.4.0:
      Successfully uninstalled fsspec-2023.4.0
  Attempting uninstall: flask_cloudflared
    Found existing installation: flask-cloudflared 0.0.14
    Uninstalling flask-cloudflared-0.0.14:
      Successfully uninstalled flask-cloudflared-0.0.14
Successfully installed SpeechRecognition-3.10.0 flask_cloudflared-0.0.12 fsspec-2023.10.0 huggingface-hub-0.17.3 joblib-1.3.2 nltk-3.8.1 packaging-23.2 pyyaml-6.0.1 regex-2023.10.3 safetensors-0.4.0 scikit-learn-1.3.2 scipy-1.11.3 sentence-transformers-2.2.2 sentencepiece-0.1.99 threadpoolctl-3.2.0 tiktoken-0.5.1 tokenizers-0.14.1 tqdm-4.66.1 transformers-4.34.1
Requirement already satisfied: ipython in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from -r extensions\silero_tts\requirements.txt (line 1)) (8.16.1)
Collecting num2words (from -r extensions\silero_tts\requirements.txt (line 2))
  Using cached num2words-0.5.13-py3-none-any.whl.metadata (12 kB)
Collecting omegaconf (from -r extensions\silero_tts\requirements.txt (line 3))
  Using cached omegaconf-2.3.0-py3-none-any.whl (79 kB)
Collecting pydub (from -r extensions\silero_tts\requirements.txt (line 4))
  Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Requirement already satisfied: PyYAML in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from -r extensions\silero_tts\requirements.txt (line 5)) (6.0.1)
Requirement already satisfied: backcall in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.2.0)
Requirement already satisfied: decorator in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (5.1.1)
Requirement already satisfied: jedi>=0.16 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.19.1)
Requirement already satisfied: matplotlib-inline in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.1.6)
Requirement already satisfied: pickleshare in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.7.5)
Requirement already satisfied: prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (3.0.39)
Requirement already satisfied: pygments>=2.4.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (2.16.1)
Requirement already satisfied: stack-data in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.6.3)
Requirement already satisfied: traitlets>=5 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (5.12.0)
Requirement already satisfied: colorama in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.4.6)
Collecting docopt>=0.6.2 (from num2words->-r extensions\silero_tts\requirements.txt (line 2))
  Using cached docopt-0.6.2-py2.py3-none-any.whl
Collecting antlr4-python3-runtime==4.9.* (from omegaconf->-r extensions\silero_tts\requirements.txt (line 3))
  Using cached antlr4_python3_runtime-4.9.3-py3-none-any.whl
Requirement already satisfied: parso<0.9.0,>=0.8.3 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from jedi>=0.16->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.8.3)
Requirement already satisfied: wcwidth in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.2.8)
Requirement already satisfied: executing>=1.2.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from stack-data->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (2.0.0)
Requirement already satisfied: asttokens>=2.1.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from stack-data->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (2.4.0)
Requirement already satisfied: pure-eval in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from stack-data->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.2.2)
Requirement already satisfied: six>=1.12.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from asttokens>=2.1.0->stack-data->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (1.16.0)
Using cached num2words-0.5.13-py3-none-any.whl (143 kB)
Installing collected packages: pydub, docopt, antlr4-python3-runtime, omegaconf, num2words
Successfully installed antlr4-python3-runtime-4.9.3 docopt-0.6.2 num2words-0.5.13 omegaconf-2.3.0 pydub-0.25.1
Collecting git+https://github.com/oobabooga/whisper.git (from -r extensions\whisper_stt\requirements.txt (line 2))
  Cloning https://github.com/oobabooga/whisper.git to d:\oobabooga\text-generation-webui\installer_files\pip-req-build-twxf8rm1
  Running command git clone --filter=blob:none --quiet https://github.com/oobabooga/whisper.git 'D:\oobabooga\text-generation-webui\installer_files\pip-req-build-twxf8rm1'
  Resolved https://github.com/oobabooga/whisper.git to commit 958ee4f6e1e65425ba02c440fc083089d58f5c71
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: SpeechRecognition==3.10.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from -r extensions\whisper_stt\requirements.txt (line 1)) (3.10.0)
Collecting soundfile (from -r extensions\whisper_stt\requirements.txt (line 3))
  Using cached soundfile-0.12.1-py2.py3-none-win_amd64.whl (1.0 MB)
Collecting ffmpeg (from -r extensions\whisper_stt\requirements.txt (line 4))
  Using cached ffmpeg-1.4-py3-none-any.whl
Requirement already satisfied: requests>=2.26.0 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from SpeechRecognition==3.10.0->-r extensions\whisper_stt\requirements.txt (line 1)) (2.28.1)
Collecting numba (from openai-whisper==20230918->-r extensions\whisper_stt\requirements.txt (line 2))
  Using cached numba-0.58.1-cp311-cp311-win_amd64.whl.metadata (2.8 kB)
Requirement already satisfied: numpy in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from openai-whisper==20230918->-r extensions\whisper_stt\requirements.txt (line 2)) (1.24.1)
Requirement already satisfied: tqdm in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from openai-whisper==20230918->-r extensions\whisper_stt\requirements.txt (line 2)) (4.66.1)
Collecting more-itertools (from openai-whisper==20230918->-r extensions\whisper_stt\requirements.txt (line 2))
  Using cached more_itertools-10.1.0-py3-none-any.whl.metadata (33 kB)
Collecting tiktoken==0.3.3 (from openai-whisper==20230918->-r extensions\whisper_stt\requirements.txt (line 2))
  Using cached tiktoken-0.3.3-cp311-cp311-win_amd64.whl (579 kB)
Requirement already satisfied: regex>=2022.1.18 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from tiktoken==0.3.3->openai-whisper==20230918->-r extensions\whisper_stt\requirements.txt (line 2)) (2023.10.3)
Collecting cffi>=1.0 (from soundfile->-r extensions\whisper_stt\requirements.txt (line 3))
  Using cached cffi-1.16.0-cp311-cp311-win_amd64.whl.metadata (1.5 kB)
Collecting pycparser (from cffi>=1.0->soundfile->-r extensions\whisper_stt\requirements.txt (line 3))
  Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Requirement already satisfied: charset-normalizer<3,>=2 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions\whisper_stt\requirements.txt (line 1)) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions\whisper_stt\requirements.txt (line 1)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions\whisper_stt\requirements.txt (line 1)) (1.26.13)
Requirement already satisfied: certifi>=2017.4.17 in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions\whisper_stt\requirements.txt (line 1)) (2022.12.7)
Collecting llvmlite<0.42,>=0.41.0dev0 (from numba->openai-whisper==20230918->-r extensions\whisper_stt\requirements.txt (line 2))
  Using cached llvmlite-0.41.1-cp311-cp311-win_amd64.whl.metadata (4.9 kB)
Requirement already satisfied: colorama in d:\oobabooga\text-generation-webui\installer_files\env\lib\site-packages (from tqdm->openai-whisper==20230918->-r extensions\whisper_stt\requirements.txt (line 2)) (0.4.6)
Using cached cffi-1.16.0-cp311-cp311-win_amd64.whl (181 kB)
Using cached more_itertools-10.1.0-py3-none-any.whl (55 kB)
Using cached numba-0.58.1-cp311-cp311-win_amd64.whl (2.6 MB)
Using cached llvmlite-0.41.1-cp311-cp311-win_amd64.whl (28.1 MB)
Building wheels for collected packages: openai-whisper
  Building wheel for openai-whisper (pyproject.toml) ... done
  Created wheel for openai-whisper: filename=openai_whisper-20230918-py3-none-any.whl size=807763 sha256=50f5c8945ffd23b0bbebc4e704d8eca966bf573690ae93dbde38a77cd0a9aaff
  Stored in directory: D:\oobabooga\text-generation-webui\installer_files\pip-ephem-wheel-cache-t9q4flnf\wheels\35\e7\4f\cd878f35d6cb5bf819c592f299ff25b6c0cf5a74e1c6576eba
Successfully built openai-whisper
Installing collected packages: ffmpeg, pycparser, more-itertools, llvmlite, tiktoken, numba, cffi, soundfile, openai-whisper
  Attempting uninstall: tiktoken
    Found existing installation: tiktoken 0.5.1
    Uninstalling tiktoken-0.5.1:
      Successfully uninstalled tiktoken-0.5.1
Successfully installed cffi-1.16.0 ffmpeg-1.4 llvmlite-0.41.1 more-itertools-10.1.0 numba-0.58.1 openai-whisper-20230918 pycparser-2.21 soundfile-0.12.1 tiktoken-0.3.3
TORCH: 2.1.0+cu118

*******************************************************************
* Installing webui requirements from file: requirements.txt
*******************************************************************

WARNING: Skipping torch-grammar as it is not installed.
Uninstalled torch-grammar
Collecting git+https://github.com/oobabooga/torch-grammar.git (from -r temp_requirements.txt (line 23))
  Cloning https://github.com/oobabooga/torch-grammar.git to d:\oobabooga\text-generation-webui\installer_files\pip-req-build-4s628uo6
  Running command git clone --filter=blob:none --quiet https://github.com/oobabooga/torch-grammar.git 'D:\oobabooga\text-generation-webui\installer_files\pip-req-build-4s628uo6'
  Resolved https://github.com/oobabooga/torch-grammar.git to commit 82850b5383a629f3b0fa1fba7d8f2aba3185ddb2
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Ignoring bitsandbytes: markers 'platform_system != "Windows"' don't match your environment
Collecting bitsandbytes==0.41.1 (from -r temp_requirements.txt (line 27))
  Downloading https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl (152.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 152.7/152.7 MB 6.7 MB/s eta 0:00:00
Ignoring llama-cpp-python: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Collecting llama-cpp-python==0.2.11 (from -r temp_requirements.txt (line 34))
  Downloading https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.11/llama_cpp_python-0.2.11-cp311-cp311-win_amd64.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 7.6 MB/s eta 0:00:00
Ignoring llama-cpp-python: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Collecting auto-gptq==0.4.2+cu118 (from -r temp_requirements.txt (line 40))
  Downloading https://github.com/jllllll/AutoGPTQ/releases/download/v0.4.2/auto_gptq-0.4.2+cu118-cp311-cp311-win_amd64.whl (1.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 7.5 MB/s eta 0:00:00
Ignoring auto-gptq: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Collecting exllama==0.0.18+cu118 (from -r temp_requirements.txt (line 48))
  Downloading https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu118-cp311-cp311-win_amd64.whl (443 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 443.4/443.4 kB 9.2 MB/s eta 0:00:00
Ignoring exllama: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring exllama: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring exllama: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring exllama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring exllama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring exllama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring exllama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Collecting exllamav2==0.0.6+cu118 (from -r temp_requirements.txt (line 56))
  Downloading https://github.com/turboderp/exllamav2/releases/download/v0.0.6/exllamav2-0.0.6+cu118-cp311-cp311-win_amd64.whl (12.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 5.9 MB/s eta 0:00:00
Ignoring exllamav2: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Collecting flash-attn==2.3.2+cu118 (from -r temp_requirements.txt (line 64))
  ERROR: HTTP error 404 while getting https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu118-cp311-cp311-win_amd64.whl
ERROR: Could not install requirement flash-attn==2.3.2+cu118 from https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu118-cp311-cp311-win_amd64.whl (from -r temp_requirements.txt (line 64)) because of HTTP error 404 Client Error: Not Found for url: https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu118-cp311-cp311-win_amd64.whl for URL https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu118-cp311-cp311-win_amd64.whl
Command '"D:\oobabooga\text-generation-webui\installer_files\conda\condabin\conda.bat" activate "D:\oobabooga\text-generation-webui\installer_files\env" >nul && python -m pip install -r temp_requirements.txt --upgrade' failed with exit status code '1'.

Exiting now.
Try running the start/update script again.
Press any key to continue . . .

(D:\oobabooga\text-generation-webui\installer_files\env) D:\oobabooga\text-generation-webui>start_windows.bat

*******************************************************************
* WARNING: You haven't downloaded any model yet.
* Once the web UI launches, head over to the "Model" tab and download one.
*******************************************************************

Traceback (most recent call last):
  File "D:\oobabooga\text-generation-webui\server.py", line 14, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
Press any key to continue . . .

(D:\oobabooga\text-generation-webui\installer_files\env) D:\oobabooga\text-generation-webui>

System Info

Windows 11.  4090 GPU.  CUDA 11.8.
Trimad commented 1 year ago

I'm running into the same issue. One thing I'm noticing is that grammar.py is importing "torch_grammar" whereas the update script is installing "torch-grammar".
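(For what it's worth, the hyphen/underscore mismatch by itself shouldn't matter to pip: distribution names are normalized per PEP 503, so the requirement `torch-grammar` and `torch_grammar` refer to the same project — the `import torch_grammar` name is a separate thing. A minimal sketch of that normalization rule:)

```python
import re

# PEP 503 name normalization: runs of "-", "_", and "." in a
# *distribution* name collapse to a single "-", lowercased.
def normalize(name: str) -> str:
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize("torch_grammar"))  # torch-grammar
print(normalize("torch-grammar"))  # torch-grammar
```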

IJumpAround commented 1 year ago

See #4369

erew123 commented 1 year ago

Looks like the releases are only for version 12 of CUDA now: https://github.com/bdashore3/flash-attention/releases

The installer log error above shows it trying to download the CUDA 11.8 flash-attention wheel: https://github.com/bdashore3/flash-attention/releases/download/2.3.2-2/flash_attn-2.3.2+cu118-cp311-cp311-win_amd64.whl

According to the note at the top of github.com/bdashore3/flash-attention/releases, the 11.8 files (etc.) have been moved here: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.3.2
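(The filename pip asked for encodes exactly which build it needs, which is why only the cu118/cp311 wheel was attempted and why its removal 404s the whole install. A rough sketch of how that filename decomposes — real code should use `packaging.utils.parse_wheel_filename`; this hand-rolled split only handles simple names like the one in the log:)

```python
# Split a wheel filename into its tag fields (sketch; assumes no extra
# dashes in the distribution name, which holds for flash_attn).
def parse_wheel(filename: str) -> dict:
    name, version, python_tag, abi_tag, platform_tag = \
        filename[:-len(".whl")].split("-")
    return {"name": name, "version": version, "python": python_tag,
            "abi": abi_tag, "platform": platform_tag}

info = parse_wheel("flash_attn-2.3.2+cu118-cp311-cp311-win_amd64.whl")
print(info["version"])   # 2.3.2+cu118 -> "+cu118" marks the CUDA 11.8 build
print(info["python"])    # cp311 -> CPython 3.11 only
print(info["platform"])  # win_amd64 -> 64-bit Windows only
```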

rosx27 commented 1 year ago

Did anyone find a solution for this? I'm having the same issue. Here's what I've tried so far:

  1. Activate the conda environment:

     \text-generation-webui\installer_files\conda\Scripts\activate.bat

     and then run:

     pip install gradio

     (gradio installed, but I still get the same error when I run 'start_windows.bat')

  2. Add 'gradio' to requirements.txt.

  3. Re-run 'start_windows.bat' and 'update_windows.bat'.

None worked.

EDIT: So I ran:

\text-generation-webui\installer_files\env\Scripts\pip.exe install gradio

then ran 'update_windows.bat' and 'start_windows.bat'. Now it works.

github-actions[bot] commented 11 months ago

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.