oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

UserWarning: MPS: no support for int64 repeats mask, casting it to int32 on MacOS 13.3 #1686

Closed: M00N-MAN closed this issue 1 year ago

M00N-MAN commented 1 year ago

Describe the bug

This is a continuation of https://github.com/oobabooga/text-generation-webui/issues/428.

I am following the instructions for the one-click installer for macOS (https://github.com/oobabooga/one-click-installers),

and every model I load fails with `RuntimeError: MPS does not support cumsum op with int64 input`.
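
For reference, a minimal sketch of the failing operation in isolation, assuming an Apple Silicon Mac with the same PyTorch 2.0.0 build as in the logs below (the tensor shape is illustrative; the failing line mirrors transformers' `prepare_inputs_for_generation`):

import torch

# an int64 attention mask on the MPS device, as transformers builds it
attention_mask = torch.ones(1, 23, dtype=torch.int64, device="mps")
# the line that fails inside modeling_llama.py:
position_ids = attention_mask.long().cumsum(-1) - 1
# -> RuntimeError: MPS does not support cumsum op with int64 input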

Is there an existing issue for this?

Reproduction

./update_macos.sh
./start_macos.sh

Screenshot

(screenshot attached in the original issue)

Logs

oobabooga_macos % ./start_macos.sh
Gradio HTTP request redirected to localhost :)
bin /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
The following models are available:

1. huggyllama_llama-30b
2. jeffwan_vicuna-13b

Which one do you want to load? 1-2

2

Loading jeffwan_vicuna-13b...
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00,  1.75s/it]
Loaded the model in 8.22 seconds.
Loading the extension "gallery"... Ok.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py:690: UserWarning: MPS: no support for int64 repeats mask, casting it to int32 (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1678454852765/work/aten/src/ATen/native/mps/operations/Repeat.mm:236.)
  input_ids = input_ids.repeat_interleave(expand_size, dim=0)
Traceback (most recent call last):
  File "/Users/master/sandbox/oobabooga_macos/text-generation-webui/modules/callbacks.py", line 66, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/Users/master/sandbox/oobabooga_macos/text-generation-webui/modules/text_generation.py", line 290, in generate_with_callback
    shared.model.generate(**kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 1485, in generate
    return self.sample(
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 2521, in sample
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 736, in prepare_inputs_for_generation
    position_ids = attention_mask.long().cumsum(-1) - 1
RuntimeError: MPS does not support cumsum op with int64 input
Output generated in 2.18 seconds (0.00 tokens/s, 0 tokens, context 23, seed 174136288)
^CTraceback (most recent call last):
  File "/Users/master/sandbox/oobabooga_macos/text-generation-webui/server.py", line 929, in <module>
    time.sleep(0.5)
KeyboardInterrupt
Traceback (most recent call last):
  File "/Users/master/sandbox/oobabooga_macos/webui.py", line 171, in <module>
    run_model()
  File "/Users/master/sandbox/oobabooga_macos/webui.py", line 146, in run_model
    run_cmd("python server.py --chat --model-menu")  # put your flags here!
  File "/Users/master/sandbox/oobabooga_macos/webui.py", line 14, in run_cmd
    return subprocess.run(cmd, shell=True, capture_output=capture_output, env=env)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/subprocess.py", line 505, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/subprocess.py", line 1146, in communicate
    self.wait()
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/subprocess.py", line 1209, in wait
    return self._wait(timeout=timeout)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/subprocess.py", line 1959, in _wait
    (pid, sts) = self._try_wait(0)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/subprocess.py", line 1917, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt

(base) master@Nebuchadnezzar oobabooga_macos % ./start_macos.sh  
Gradio HTTP request redirected to localhost :)
bin /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
The following models are available:

1. huggyllama_llama-30b
2. jeffwan_vicuna-13b

Which one do you want to load? 1-2

1

Loading huggyllama_llama-30b...
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.79it/s]
Loaded the model in 151.83 seconds.
Loading the extension "gallery"... Ok.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py:690: UserWarning: MPS: no support for int64 repeats mask, casting it to int32 (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1678454852765/work/aten/src/ATen/native/mps/operations/Repeat.mm:236.)
  input_ids = input_ids.repeat_interleave(expand_size, dim=0)
Traceback (most recent call last):
  File "/Users/master/sandbox/oobabooga_macos/text-generation-webui/modules/callbacks.py", line 66, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/Users/master/sandbox/oobabooga_macos/text-generation-webui/modules/text_generation.py", line 290, in generate_with_callback
    shared.model.generate(**kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 1485, in generate
    return self.sample(
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 2521, in sample
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 736, in prepare_inputs_for_generation
    position_ids = attention_mask.long().cumsum(-1) - 1
RuntimeError: MPS does not support cumsum op with int64 input
Output generated in 0.32 seconds (0.00 tokens/s, 0 tokens, context 36, seed 1186143389)
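
Both runs die on the same line in transformers (`position_ids = attention_mask.long().cumsum(-1) - 1`), so the failure is independent of the model. As a hedged sketch of a possible local workaround, running the cumsum in int32 avoids the unsupported int64 kernel (this is only my illustration of the dtype issue, not a fix shipped by transformers; a PyTorch build whose MPS backend supports int64 cumsum would be the proper solution):

import torch

attention_mask = torch.ones(1, 23, dtype=torch.int64, device="mps")
# run the cumsum in int32, which MPS supports, then widen back to int64
position_ids = attention_mask.int().cumsum(-1).long() - 1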

Full log

Note: I substituted the following model directories with symlinks:

  1. huggyllama_llama-30b
  2. jeffwan_vicuna-13b

oobabooga_macos % du -hs /Users/master/sandbox/jeffwan_vicuna-13b
 25G    /Users/master/sandbox/jeffwan_vicuna-13b
oobabooga_macos % du -hs /Users/master/sandbox/huggyllama_llama-30b
 61G    /Users/master/sandbox/huggyllama_llama-30b

oobabooga_macos % find text-generation-webui/models -type l -exec ls -lhas {} \;|awk '{$1=$2=$3=$4=$5=$6="";print $0}'|sed -E 's/^ +//g'
Apr 30 16:54 text-generation-webui/models/jeffwan_vicuna-13b -> /Users/master/sandbox/jeffwan_vicuna-13b
Apr 30 16:54 text-generation-webui/models/huggyllama_llama-30b -> /Users/master/sandbox/huggyllama_llama-30b
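
For completeness, a minimal sketch of how such symlinks can be created, using the paths from the listing above (a plain `ln -s` in the shell is equivalent):

import os

# point the webui models folder at the externally stored model directories
os.symlink("/Users/master/sandbox/jeffwan_vicuna-13b",
           "text-generation-webui/models/jeffwan_vicuna-13b")
os.symlink("/Users/master/sandbox/huggyllama_llama-30b",
           "text-generation-webui/models/huggyllama_llama-30b")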

oobabooga_macos % ./start_macos.sh
Downloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.1.0-1-MacOSX-arm64.sh to /Users/master/sandbox/oobabooga_macos/installer_files/miniconda_installer.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.7M  100 41.7M    0     0  9644k      0  0:00:04  0:00:04 --:--:-- 9650k
PREFIX=/Users/master/sandbox/oobabooga_macos/installer_files/conda
Unpacking payload ...

Installing base environment...

Downloading and Extracting Packages

Downloading and Extracting Packages

Preparing transaction: done
Executing transaction: - 
done
installation finished.
Miniconda version:
conda 23.1.0
Retrieving notices: ...working... /Users/master/sandbox/oobabooga_macos/installer_files/conda/lib/python3.10/site-packages/urllib3/connectionpool.py:1045: InsecureRequestWarning: Unverified HTTPS request is being made to host 'repo.anaconda.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
/Users/master/sandbox/oobabooga_macos/installer_files/conda/lib/python3.10/site-packages/urllib3/connectionpool.py:1045: InsecureRequestWarning: Unverified HTTPS request is being made to host 'repo.anaconda.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
done
Collecting package metadata (current_repodata.json): done
Solving environment: done

==> WARNING: A newer version of conda exists. <==
  current version: 23.1.0
  latest version: 23.3.1

Please update conda by running

    $ conda update -n base -c defaults conda

Or to minimize the number of packages updated during conda update use

     conda install conda=23.3.1

## Package Plan ##

  environment location: /Users/master/sandbox/oobabooga_macos/installer_files/env

  added / updated specs:
    - python=3.10

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    openssl-1.1.1t             |       h1a28f6b_0         2.6 MB
    pip-23.0.1                 |  py310hca03da5_0         2.6 MB
    python-3.10.11             |       hc0d8a6c_2        13.0 MB
    setuptools-66.0.0          |  py310hca03da5_0         1.2 MB
    sqlite-3.41.2              |       h80987f9_0         1.1 MB
    tzdata-2023c               |       h04d1e81_0         116 KB
    wheel-0.38.4               |  py310hca03da5_0          66 KB
    ------------------------------------------------------------
                                           Total:        20.7 MB

The following NEW packages will be INSTALLED:

  bzip2              pkgs/main/osx-arm64::bzip2-1.0.8-h620ffc9_4 
  ca-certificates    pkgs/main/osx-arm64::ca-certificates-2023.01.10-hca03da5_0 
  libffi             pkgs/main/osx-arm64::libffi-3.4.2-hca03da5_6 
  ncurses            pkgs/main/osx-arm64::ncurses-6.4-h313beb8_0 
  openssl            pkgs/main/osx-arm64::openssl-1.1.1t-h1a28f6b_0 
  pip                pkgs/main/osx-arm64::pip-23.0.1-py310hca03da5_0 
  python             pkgs/main/osx-arm64::python-3.10.11-hc0d8a6c_2 
  readline           pkgs/main/osx-arm64::readline-8.2-h1a28f6b_0 
  setuptools         pkgs/main/osx-arm64::setuptools-66.0.0-py310hca03da5_0 
  sqlite             pkgs/main/osx-arm64::sqlite-3.41.2-h80987f9_0 
  tk                 pkgs/main/osx-arm64::tk-8.6.12-hb8d0fd4_0 
  tzdata             pkgs/main/noarch::tzdata-2023c-h04d1e81_0 
  wheel              pkgs/main/osx-arm64::wheel-0.38.4-py310hca03da5_0 
  xz                 pkgs/main/osx-arm64::xz-5.2.10-h80987f9_1 
  zlib               pkgs/main/osx-arm64::zlib-1.2.13-h5a0b063_0 

Downloading and Extracting Packages

Preparing transaction: done                                                                                                                                                                                           
Verifying transaction: done                                                                                                                                                                                           
Executing transaction: done                                                                                                                                                                                           
#                                                                                                                                                                                                                     
# To activate this environment, use                                                                                                                                                                                   
#                                                                                                                                                                                                                     
#     $ conda activate /Users/master/sandbox/oobabooga_macos/installer_files/env
#
# To deactivate an active environment, use
#
#     $ conda deactivate

What is your GPU

A) NVIDIA
B) AMD
C) Apple M Series
D) None (I want to run in CPU mode)

Input> C
Collecting package metadata (current_repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: /Users/master/sandbox/oobabooga_macos/installer_files/env

  added / updated specs:
    - cpuonly
    - git
    - pytorch
    - torchaudio
    - torchvision

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    c-ares-1.19.0              |       h80987f9_0         104 KB
    cpuonly-2.0                |                0           2 KB  pytorch
    curl-7.88.1                |       h80987f9_0          85 KB
    expat-2.4.9                |       hc377ac9_0         127 KB
    ffmpeg-4.2.2               |       h04105a8_0        23.5 MB
    gdbm-1.18                  |       h8fe7016_4         141 KB
    gettext-0.21.0             |       h13f89a0_1         3.1 MB
    git-2.34.1                 | pl5340h646bd76_0         5.0 MB
    gnutls-3.6.15              |       h887c41c_0         960 KB
    lame-3.100                 |       h1a28f6b_0         312 KB
    libcurl-7.88.1             |       h0f1d93c_0         332 KB
    libidn2-2.3.1              |       h1a28f6b_0          85 KB
    libopus-1.3                |       h1a28f6b_1         408 KB
    libtasn1-4.19.0            |       h80987f9_0          65 KB
    libunistring-0.9.10        |       h1a28f6b_0         510 KB
    libvpx-1.10.0              |       hc377ac9_0         1.2 MB
    libxml2-2.10.3             |       h372ba2a_0         638 KB
    nettle-3.7.3               |       h84b5d62_1         385 KB
    networkx-2.8.4             |  py310hca03da5_1         2.7 MB
    openh264-1.8.0             |       h98b2900_0         536 KB
    pcre2-10.37                |       h37e8eca_1         555 KB
    perl-5.34.0                |       h1a28f6b_2        14.0 MB
    pytorch-2.0.0              |         py3.10_0        47.1 MB  pytorch
    pytorch-mutex-1.0          |              cpu           3 KB  pytorch
    torchaudio-2.0.0           |        py310_cpu         6.5 MB  pytorch
    torchvision-0.15.0         |        py310_cpu         6.4 MB  pytorch
    typing_extensions-4.5.0    |  py310hca03da5_0          48 KB
    x264-1!152.20180806        |       h1a28f6b_0         516 KB
    zstd-1.5.5                 |       hd90d995_0         501 KB
    ------------------------------------------------------------
                                           Total:       115.6 MB

The following NEW packages will be INSTALLED:

  blas               pkgs/main/osx-arm64::blas-1.0-openblas 
  brotlipy           pkgs/main/osx-arm64::brotlipy-0.7.0-py310h1a28f6b_1002 
  c-ares             pkgs/main/osx-arm64::c-ares-1.19.0-h80987f9_0 
  certifi            pkgs/main/osx-arm64::certifi-2022.12.7-py310hca03da5_0 
  cffi               pkgs/main/osx-arm64::cffi-1.15.1-py310h80987f9_3 
  charset-normalizer pkgs/main/noarch::charset-normalizer-2.0.4-pyhd3eb1b0_0 
  cpuonly            pytorch/noarch::cpuonly-2.0-0 
  cryptography       pkgs/main/osx-arm64::cryptography-39.0.1-py310h834c97f_0 
  curl               pkgs/main/osx-arm64::curl-7.88.1-h80987f9_0 
  expat              pkgs/main/osx-arm64::expat-2.4.9-hc377ac9_0 
  ffmpeg             pkgs/main/osx-arm64::ffmpeg-4.2.2-h04105a8_0 
  filelock           pkgs/main/osx-arm64::filelock-3.9.0-py310hca03da5_0 
  freetype           pkgs/main/osx-arm64::freetype-2.12.1-h1192e45_0 
  gdbm               pkgs/main/osx-arm64::gdbm-1.18-h8fe7016_4 
  gettext            pkgs/main/osx-arm64::gettext-0.21.0-h13f89a0_1 
  giflib             pkgs/main/osx-arm64::giflib-5.2.1-h80987f9_3 
  git                pkgs/main/osx-arm64::git-2.34.1-pl5340h646bd76_0 
  gmp                pkgs/main/osx-arm64::gmp-6.2.1-hc377ac9_3 
  gmpy2              pkgs/main/osx-arm64::gmpy2-2.1.2-py310h8c48613_0 
  gnutls             pkgs/main/osx-arm64::gnutls-3.6.15-h887c41c_0 
  icu                pkgs/main/osx-arm64::icu-68.1-hc377ac9_0 
  idna               pkgs/main/osx-arm64::idna-3.4-py310hca03da5_0 
  jinja2             pkgs/main/osx-arm64::jinja2-3.1.2-py310hca03da5_0 
  jpeg               pkgs/main/osx-arm64::jpeg-9e-h80987f9_1 
  krb5               pkgs/main/osx-arm64::krb5-1.19.4-h8380606_0 
  lame               pkgs/main/osx-arm64::lame-3.100-h1a28f6b_0 
  lcms2              pkgs/main/osx-arm64::lcms2-2.12-hba8e193_0 
  lerc               pkgs/main/osx-arm64::lerc-3.0-hc377ac9_0 
  libcurl            pkgs/main/osx-arm64::libcurl-7.88.1-h0f1d93c_0 
  libcxx             pkgs/main/osx-arm64::libcxx-14.0.6-h848a8c0_0 
  libdeflate         pkgs/main/osx-arm64::libdeflate-1.17-h80987f9_0 
  libedit            pkgs/main/osx-arm64::libedit-3.1.20221030-h80987f9_0 
  libev              pkgs/main/osx-arm64::libev-4.33-h1a28f6b_1 
  libgfortran        pkgs/main/osx-arm64::libgfortran-5.0.0-11_3_0_hca03da5_28 
  libgfortran5       pkgs/main/osx-arm64::libgfortran5-11.3.0-h009349e_28 
  libiconv           pkgs/main/osx-arm64::libiconv-1.16-h1a28f6b_2 
  libidn2            pkgs/main/osx-arm64::libidn2-2.3.1-h1a28f6b_0 
  libnghttp2         pkgs/main/osx-arm64::libnghttp2-1.46.0-h95c9599_0 
  libopenblas        pkgs/main/osx-arm64::libopenblas-0.3.21-h269037a_0 
  libopus            pkgs/main/osx-arm64::libopus-1.3-h1a28f6b_1 
  libpng             pkgs/main/osx-arm64::libpng-1.6.39-h80987f9_0 
  libssh2            pkgs/main/osx-arm64::libssh2-1.10.0-hf27765b_0 
  libtasn1           pkgs/main/osx-arm64::libtasn1-4.19.0-h80987f9_0 
  libtiff            pkgs/main/osx-arm64::libtiff-4.5.0-h313beb8_2 
  libunistring       pkgs/main/osx-arm64::libunistring-0.9.10-h1a28f6b_0 
  libvpx             pkgs/main/osx-arm64::libvpx-1.10.0-hc377ac9_0 
  libwebp            pkgs/main/osx-arm64::libwebp-1.2.4-ha3663a8_1 
  libwebp-base       pkgs/main/osx-arm64::libwebp-base-1.2.4-h80987f9_1 
  libxml2            pkgs/main/osx-arm64::libxml2-2.10.3-h372ba2a_0 
  llvm-openmp        pkgs/main/osx-arm64::llvm-openmp-14.0.6-hc6e5704_0 
  lz4-c              pkgs/main/osx-arm64::lz4-c-1.9.4-h313beb8_0 
  markupsafe         pkgs/main/osx-arm64::markupsafe-2.1.1-py310h1a28f6b_0 
  mpc                pkgs/main/osx-arm64::mpc-1.1.0-h8c48613_1 
  mpfr               pkgs/main/osx-arm64::mpfr-4.0.2-h695f6f0_1 
  mpmath             pkgs/main/osx-arm64::mpmath-1.2.1-py310hca03da5_0 
  nettle             pkgs/main/osx-arm64::nettle-3.7.3-h84b5d62_1 
  networkx           pkgs/main/osx-arm64::networkx-2.8.4-py310hca03da5_1 
  numpy              pkgs/main/osx-arm64::numpy-1.24.3-py310hb93e574_0 
  numpy-base         pkgs/main/osx-arm64::numpy-base-1.24.3-py310haf87e8b_0 
  openh264           pkgs/main/osx-arm64::openh264-1.8.0-h98b2900_0 
  pcre2              pkgs/main/osx-arm64::pcre2-10.37-h37e8eca_1 
  perl               pkgs/main/osx-arm64::perl-5.34.0-h1a28f6b_2 
  pillow             pkgs/main/osx-arm64::pillow-9.4.0-py310h313beb8_0 
  pycparser          pkgs/main/noarch::pycparser-2.21-pyhd3eb1b0_0 
  pyopenssl          pkgs/main/osx-arm64::pyopenssl-23.0.0-py310hca03da5_0 
  pysocks            pkgs/main/osx-arm64::pysocks-1.7.1-py310hca03da5_0 
  pytorch            pytorch/osx-arm64::pytorch-2.0.0-py3.10_0 
  pytorch-mutex      pytorch/noarch::pytorch-mutex-1.0-cpu 
  requests           pkgs/main/osx-arm64::requests-2.29.0-py310hca03da5_0 
  sympy              pkgs/main/osx-arm64::sympy-1.11.1-py310hca03da5_0 
  torchaudio         pytorch/osx-arm64::torchaudio-2.0.0-py310_cpu 
  torchvision        pytorch/osx-arm64::torchvision-0.15.0-py310_cpu 
  typing_extensions  pkgs/main/osx-arm64::typing_extensions-4.5.0-py310hca03da5_0 
  urllib3            pkgs/main/osx-arm64::urllib3-1.26.15-py310hca03da5_0 
  x264               pkgs/main/osx-arm64::x264-1!152.20180806-h1a28f6b_0 
  zstd               pkgs/main/osx-arm64::zstd-1.5.5-hd90d995_0 

Downloading and Extracting Packages

Preparing transaction: done                                                                                                                                                                                           
Verifying transaction: done                                                                                                                                                                                           
Executing transaction: done                                                                                                                                                                                           
Cloning into 'text-generation-webui'...                                                                                                                                                                               
remote: Enumerating objects: 6369, done.
remote: Counting objects: 100% (1750/1750), done.
remote: Compressing objects: 100% (243/243), done.
remote: Total 6369 (delta 1646), reused 1507 (delta 1507), pack-reused 4619
Receiving objects: 100% (6369/6369), 2.31 MiB | 5.50 MiB/s, done.                                                                                                                                                     
Resolving deltas: 100% (4259/4259), done.                                                                                                                                                                             
Already up to date.                                                                                                                                                                                                   
Collecting git+https://github.com/huggingface/peft (from -r requirements.txt (line 16))                                                                                                                               
  Cloning https://github.com/huggingface/peft to /private/var/folders/b9/80hq76k92556mx44cjq3wxxr0000gn/T/pip-req-build-qhah6ydk                                                                                      
  Running command git clone --filter=blob:none --quiet https://github.com/huggingface/peft /private/var/folders/b9/80hq76k92556mx44cjq3wxxr0000gn/T/pip-req-build-qhah6ydk                                            
  Resolved https://github.com/huggingface/peft to commit 632997d1fb776c3cf05d8c2537ac9a98a7ce9435                                                                                                                     
  Installing build dependencies ... done                                                                                                                                                                              
  Getting requirements to build wheel ... done                                                                                                                                                                        
  Preparing metadata (pyproject.toml) ... done                                                                                                                                                                        
Ignoring llama-cpp-python: markers 'platform_system == "Windows"' don't match your environment                                                                                                                        
Collecting accelerate==0.18.0                                                                                                                                                                                         
  Downloading accelerate-0.18.0-py3-none-any.whl (215 kB)                                                                                                                                                             
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 215.3/215.3 kB 1.5 MB/s eta 0:00:00
Collecting colorama                                                                                                                                                                                                   
  Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)                                                                                                                                                             
Collecting datasets                                                                                                                                                                                                   
  Using cached datasets-2.12.0-py3-none-any.whl (474 kB)                                                                                                                                                              
Collecting flexgen==0.1.7                                                                                                                                                                                             
  Downloading flexgen-0.1.7-py3-none-any.whl (50 kB)                                                                                                                                                                  
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.8/50.8 kB 2.4 MB/s eta 0:00:00
Collecting gradio==3.25.0
  Using cached gradio-3.25.0-py3-none-any.whl (17.5 MB)
Collecting markdown
  Downloading Markdown-3.4.3-py3-none-any.whl (93 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 93.9/93.9 kB 6.3 MB/s eta 0:00:00
Requirement already satisfied: numpy in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from -r requirements.txt (line 7)) (1.24.3)
Collecting pandas
  Downloading pandas-2.0.1-cp310-cp310-macosx_11_0_arm64.whl (10.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.8/10.8 MB 10.9 MB/s eta 0:00:00
Collecting Pillow>=9.5.0
  Using cached Pillow-9.5.0-cp310-cp310-macosx_11_0_arm64.whl (3.1 MB)
Collecting pyyaml
  Downloading PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl (173 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 174.0/174.0 kB 6.8 MB/s eta 0:00:00
Requirement already satisfied: requests in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from -r requirements.txt (line 11)) (2.29.0)
Collecting rwkv==0.7.3
  Downloading rwkv-0.7.3-py3-none-any.whl (16 kB)
Collecting safetensors==0.3.0
  Using cached safetensors-0.3.0-cp310-cp310-macosx_12_0_arm64.whl (395 kB)
Collecting sentencepiece
  Using cached sentencepiece-0.1.98-cp310-cp310-macosx_11_0_arm64.whl (1.2 MB)
Collecting tqdm
  Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting transformers==4.28.1
  Downloading transformers-4.28.1-py3-none-any.whl (7.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.0/7.0 MB 10.8 MB/s eta 0:00:00
Collecting bitsandbytes==0.38.1
  Downloading bitsandbytes-0.38.1-py3-none-any.whl (104.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 104.3/104.3 MB 10.4 MB/s eta 0:00:00
Collecting llama-cpp-python==0.1.36
  Using cached llama_cpp_python-0.1.36-cp310-cp310-macosx_13_0_arm64.whl
Requirement already satisfied: torch>=1.4.0 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from accelerate==0.18.0->-r requirements.txt (line 1)) (2.0.0)
Collecting psutil
  Using cached psutil-5.9.5-cp38-abi3-macosx_11_0_arm64.whl (246 kB)
Collecting packaging>=20.0
  Using cached packaging-23.1-py3-none-any.whl (48 kB)
Collecting attrs
  Downloading attrs-23.1.0-py3-none-any.whl (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB 4.3 MB/s eta 0:00:00
Collecting pulp
  Downloading PuLP-2.7.0-py3-none-any.whl (14.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.3/14.3 MB 10.9 MB/s eta 0:00:00
Collecting huggingface-hub>=0.13.0
  Downloading huggingface_hub-0.14.1-py3-none-any.whl (224 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 224.5/224.5 kB 8.3 MB/s eta 0:00:00
Collecting httpx
  Downloading httpx-0.24.0-py3-none-any.whl (75 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.3/75.3 kB 5.5 MB/s eta 0:00:00
Collecting altair>=4.2.0
  Downloading altair-4.2.2-py3-none-any.whl (813 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 813.6/813.6 kB 10.3 MB/s eta 0:00:00
Collecting aiohttp
  Downloading aiohttp-3.8.4-cp310-cp310-macosx_11_0_arm64.whl (336 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 336.9/336.9 kB 9.6 MB/s eta 0:00:00
Requirement already satisfied: typing-extensions in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from gradio==3.25.0->-r requirements.txt (line 5)) (4.5.0)
Collecting pydub
  Downloading pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Requirement already satisfied: jinja2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from gradio==3.25.0->-r requirements.txt (line 5)) (3.1.2)
Collecting aiofiles
  Downloading aiofiles-23.1.0-py3-none-any.whl (14 kB)
Collecting matplotlib
  Downloading matplotlib-3.7.1-cp310-cp310-macosx_11_0_arm64.whl (7.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 10.8 MB/s eta 0:00:00
Collecting orjson
  Using cached orjson-3.8.11-cp310-cp310-macosx_11_0_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl (237 kB)
Collecting uvicorn
  Using cached uvicorn-0.22.0-py3-none-any.whl (58 kB)
Collecting markdown-it-py[linkify]>=2.0.0
  Downloading markdown_it_py-2.2.0-py3-none-any.whl (84 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 84.5/84.5 kB 5.2 MB/s eta 0:00:00
Collecting websockets>=10.0
  Downloading websockets-11.0.2-cp310-cp310-macosx_11_0_arm64.whl (120 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 121.0/121.0 kB 5.7 MB/s eta 0:00:00
Requirement already satisfied: markupsafe in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from gradio==3.25.0->-r requirements.txt (line 5)) (2.1.1)
Collecting mdit-py-plugins<=0.3.3
  Using cached mdit_py_plugins-0.3.3-py3-none-any.whl (50 kB)
Collecting ffmpy
  Downloading ffmpy-0.3.0.tar.gz (4.8 kB)
  Preparing metadata (setup.py) ... done
Collecting pydantic
  Downloading pydantic-1.10.7-cp310-cp310-macosx_11_0_arm64.whl (2.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 10.7 MB/s eta 0:00:00
Collecting semantic-version
  Downloading semantic_version-2.10.0-py2.py3-none-any.whl (15 kB)
Collecting fastapi
  Downloading fastapi-0.95.1-py3-none-any.whl (56 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.0/57.0 kB 3.8 MB/s eta 0:00:00
Collecting gradio-client>=0.0.8
  Using cached gradio_client-0.1.4-py3-none-any.whl (286 kB)
Collecting python-multipart
  Downloading python_multipart-0.0.6-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.7/45.7 kB 2.8 MB/s eta 0:00:00
Collecting tokenizers>=0.13.2
  Downloading tokenizers-0.13.3-cp310-cp310-macosx_12_0_arm64.whl (3.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.9/3.9 MB 10.7 MB/s eta 0:00:00
Collecting regex!=2019.12.17
  Downloading regex-2023.3.23-cp310-cp310-macosx_11_0_arm64.whl (288 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 288.9/288.9 kB 8.9 MB/s eta 0:00:00
Requirement already satisfied: filelock in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from transformers==4.28.1->-r requirements.txt (line 17)) (3.9.0)
Collecting multiprocess
  Downloading multiprocess-0.70.14-py310-none-any.whl (134 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 134.3/134.3 kB 6.9 MB/s eta 0:00:00
Collecting xxhash
  Downloading xxhash-3.2.0-cp310-cp310-macosx_11_0_arm64.whl (31 kB)
Collecting responses<0.19
  Using cached responses-0.18.0-py3-none-any.whl (38 kB)
Collecting fsspec[http]>=2021.11.1
  Downloading fsspec-2023.4.0-py3-none-any.whl (153 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 154.0/154.0 kB 7.2 MB/s eta 0:00:00
Collecting pyarrow>=8.0.0
  Downloading pyarrow-11.0.0-cp310-cp310-macosx_11_0_arm64.whl (22.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.4/22.4 MB 10.8 MB/s eta 0:00:00
Collecting dill<0.3.7,>=0.3.0
  Downloading dill-0.3.6-py3-none-any.whl (110 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 110.5/110.5 kB 6.1 MB/s eta 0:00:00
Collecting python-dateutil>=2.8.2
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting tzdata>=2022.1
  Using cached tzdata-2023.3-py2.py3-none-any.whl (341 kB)
Collecting pytz>=2020.1
  Using cached pytz-2023.3-py2.py3-none-any.whl (502 kB)
Requirement already satisfied: certifi>=2017.4.17 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->-r requirements.txt (line 11)) (2022.12.7)
Requirement already satisfied: idna<4,>=2.5 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->-r requirements.txt (line 11)) (3.4)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->-r requirements.txt (line 11)) (2.0.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->-r requirements.txt (line 11)) (1.26.15)
Collecting entrypoints
  Downloading entrypoints-0.4-py3-none-any.whl (5.3 kB)
Collecting jsonschema>=3.0
  Downloading jsonschema-4.17.3-py3-none-any.whl (90 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 90.4/90.4 kB 6.1 MB/s eta 0:00:00
Collecting toolz
  Downloading toolz-0.12.0-py3-none-any.whl (55 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 55.8/55.8 kB 4.1 MB/s eta 0:00:00
Collecting async-timeout<5.0,>=4.0.0a3
  Downloading async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting yarl<2.0,>=1.0
  Using cached yarl-1.9.2-cp310-cp310-macosx_11_0_arm64.whl (62 kB)
Collecting multidict<7.0,>=4.5
  Downloading multidict-6.0.4-cp310-cp310-macosx_11_0_arm64.whl (29 kB)
Collecting aiosignal>=1.1.2
  Downloading aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Collecting frozenlist>=1.1.1
  Downloading frozenlist-1.3.3-cp310-cp310-macosx_11_0_arm64.whl (34 kB)
Collecting mdurl~=0.1
  Downloading mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting linkify-it-py<3,>=1
  Downloading linkify_it_py-2.0.0-py3-none-any.whl (19 kB)
Collecting six>=1.5
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: sympy in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from torch>=1.4.0->accelerate==0.18.0->-r requirements.txt (line 1)) (1.11.1)
Requirement already satisfied: networkx in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from torch>=1.4.0->accelerate==0.18.0->-r requirements.txt (line 1)) (2.8.4)
Collecting starlette<0.27.0,>=0.26.1
  Downloading starlette-0.26.1-py3-none-any.whl (66 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.9/66.9 kB 4.8 MB/s eta 0:00:00
Collecting sniffio
  Downloading sniffio-1.3.0-py3-none-any.whl (10 kB)
Collecting httpcore<0.18.0,>=0.15.0
  Downloading httpcore-0.17.0-py3-none-any.whl (70 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.6/70.6 kB 4.7 MB/s eta 0:00:00
Collecting fonttools>=4.22.0
  Using cached fonttools-4.39.3-py3-none-any.whl (1.0 MB)
Collecting kiwisolver>=1.0.1
  Downloading kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl (63 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.2/63.2 kB 4.3 MB/s eta 0:00:00
Collecting cycler>=0.10
  Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting pyparsing>=2.3.1
  Downloading pyparsing-3.0.9-py3-none-any.whl (98 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.3/98.3 kB 6.4 MB/s eta 0:00:00
Collecting contourpy>=1.0.1
  Downloading contourpy-1.0.7-cp310-cp310-macosx_11_0_arm64.whl (229 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.7/229.7 kB 8.2 MB/s eta 0:00:00
Collecting click>=7.0
  Downloading click-8.1.3-py3-none-any.whl (96 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.6/96.6 kB 5.6 MB/s eta 0:00:00
Collecting h11>=0.8
  Downloading h11-0.14.0-py3-none-any.whl (58 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 4.6 MB/s eta 0:00:00
Collecting anyio<5.0,>=3.0
  Downloading anyio-3.6.2-py3-none-any.whl (80 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 80.6/80.6 kB 5.9 MB/s eta 0:00:00
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
  Downloading pyrsistent-0.19.3-cp310-cp310-macosx_10_9_universal2.whl (82 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 82.5/82.5 kB 5.9 MB/s eta 0:00:00
Collecting uc-micro-py
  Downloading uc_micro_py-1.0.1-py3-none-any.whl (6.2 kB)
Requirement already satisfied: mpmath>=0.19 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/mpmath-1.2.1-py3.10.egg (from sympy->torch>=1.4.0->accelerate==0.18.0->-r requirements.txt (line 1)) (1.2.1)
Building wheels for collected packages: peft, ffmpy
  Building wheel for peft (pyproject.toml) ... done
  Created wheel for peft: filename=peft-0.3.0.dev0-py3-none-any.whl size=55522 sha256=5156062111bcc6e0c3819c7b9bfceb9a7017503380337ad1d2430b468070ec77
  Stored in directory: /private/var/folders/b9/80hq76k92556mx44cjq3wxxr0000gn/T/pip-ephem-wheel-cache-s_8mbf52/wheels/4c/16/67/1002a2d4daa822eff130e6d85b90051b75d2ce0d26b9448e4a
  Building wheel for ffmpy (setup.py) ... done
  Created wheel for ffmpy: filename=ffmpy-0.3.0-py3-none-any.whl size=4693 sha256=997d91f107a730cb358391e4d34c913dd89b3251458316467e0debdd542b3452
  Stored in directory: /Users/master/Library/Caches/pip/wheels/0c/c2/0e/3b9c6845c6a4e35beb90910cc70d9ac9ab5d47402bd62af0df
Successfully built peft ffmpy
Installing collected packages: tokenizers, sentencepiece, safetensors, pytz, pydub, pulp, ffmpy, bitsandbytes, xxhash, websockets, uc-micro-py, tzdata, tqdm, toolz, sniffio, six, semantic-version, rwkv, regex, pyyaml, python-multipart, pyrsistent, pyparsing, pydantic, pyarrow, psutil, Pillow, packaging, orjson, multidict, mdurl, markdown, llama-cpp-python, kiwisolver, h11, fsspec, frozenlist, fonttools, entrypoints, dill, cycler, contourpy, colorama, click, attrs, async-timeout, aiofiles, yarl, uvicorn, responses, python-dateutil, multiprocess, markdown-it-py, linkify-it-py, jsonschema, huggingface-hub, anyio, aiosignal, transformers, starlette, pandas, mdit-py-plugins, matplotlib, httpcore, aiohttp, accelerate, peft, httpx, flexgen, fastapi, altair, gradio-client, datasets, gradio
  Attempting uninstall: Pillow
    Found existing installation: Pillow 9.4.0
    Uninstalling Pillow-9.4.0:
      Successfully uninstalled Pillow-9.4.0
Successfully installed Pillow-9.5.0 accelerate-0.18.0 aiofiles-23.1.0 aiohttp-3.8.4 aiosignal-1.3.1 altair-4.2.2 anyio-3.6.2 async-timeout-4.0.2 attrs-23.1.0 bitsandbytes-0.38.1 click-8.1.3 colorama-0.4.6 contourpy-1.0.7 cycler-0.11.0 datasets-2.12.0 dill-0.3.6 entrypoints-0.4 fastapi-0.95.1 ffmpy-0.3.0 flexgen-0.1.7 fonttools-4.39.3 frozenlist-1.3.3 fsspec-2023.4.0 gradio-3.25.0 gradio-client-0.1.4 h11-0.14.0 httpcore-0.17.0 httpx-0.24.0 huggingface-hub-0.14.1 jsonschema-4.17.3 kiwisolver-1.4.4 linkify-it-py-2.0.0 llama-cpp-python-0.1.36 markdown-3.4.3 markdown-it-py-2.2.0 matplotlib-3.7.1 mdit-py-plugins-0.3.3 mdurl-0.1.2 multidict-6.0.4 multiprocess-0.70.14 orjson-3.8.11 packaging-23.1 pandas-2.0.1 peft-0.3.0.dev0 psutil-5.9.5 pulp-2.7.0 pyarrow-11.0.0 pydantic-1.10.7 pydub-0.25.1 pyparsing-3.0.9 pyrsistent-0.19.3 python-dateutil-2.8.2 python-multipart-0.0.6 pytz-2023.3 pyyaml-6.0 regex-2023.3.23 responses-0.18.0 rwkv-0.7.3 safetensors-0.3.0 semantic-version-2.10.0 sentencepiece-0.1.98 six-1.16.0 sniffio-1.3.0 starlette-0.26.1 tokenizers-0.13.3 toolz-0.12.0 tqdm-4.65.0 transformers-4.28.1 tzdata-2023.3 uc-micro-py-1.0.1 uvicorn-0.22.0 websockets-11.0.2 xxhash-3.2.0 yarl-1.9.2
Collecting elevenlabslib
  Using cached elevenlabslib-0.6.0-py3-none-any.whl (19 kB)
Collecting soundfile
  Downloading soundfile-0.12.1-py2.py3-none-macosx_11_0_arm64.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 4.4 MB/s eta 0:00:00
Collecting sounddevice
  Downloading sounddevice-0.4.6-py3-none-macosx_10_6_x86_64.macosx_10_6_universal2.whl (107 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 108.0/108.0 kB 6.0 MB/s eta 0:00:00
Requirement already satisfied: requests in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from elevenlabslib->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (2.29.0)
Collecting typing
  Downloading typing-3.7.4.3.tar.gz (78 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.6/78.6 kB 5.2 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Requirement already satisfied: numpy in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from elevenlabslib->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (1.24.3)
Requirement already satisfied: cffi>=1.0 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from soundfile->-r extensions/elevenlabs_tts/requirements.txt (line 2)) (1.15.1)
Requirement already satisfied: pycparser in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from cffi>=1.0->soundfile->-r extensions/elevenlabs_tts/requirements.txt (line 2)) (2.21)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->elevenlabslib->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (2.0.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->elevenlabslib->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->elevenlabslib->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: idna<4,>=2.5 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->elevenlabslib->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (3.4)
Building wheels for collected packages: typing
  Building wheel for typing (setup.py) ... done
  Created wheel for typing: filename=typing-3.7.4.3-py3-none-any.whl size=26304 sha256=7ec72ca5d127036304f34b28a2a2316eced14afdede953b5255e177c2d188191
  Stored in directory: /Users/master/Library/Caches/pip/wheels/7c/d0/9e/1f26ebb66d9e1732e4098bc5a6c2d91f6c9a529838f0284890
Successfully built typing
Installing collected packages: typing, soundfile, sounddevice, elevenlabslib
Successfully installed elevenlabslib-0.6.0 sounddevice-0.4.6 soundfile-0.12.1 typing-3.7.4.3
Collecting ipython
  Using cached ipython-8.13.1-py3-none-any.whl (797 kB)
Collecting num2words
  Downloading num2words-0.5.12-py3-none-any.whl (125 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.2/125.2 kB 1.0 MB/s eta 0:00:00
Collecting omegaconf
  Downloading omegaconf-2.3.0-py3-none-any.whl (79 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 79.5/79.5 kB 4.2 MB/s eta 0:00:00
Requirement already satisfied: pydub in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from -r extensions/silero_tts/requirements.txt (line 4)) (0.25.1)
Requirement already satisfied: PyYAML in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from -r extensions/silero_tts/requirements.txt (line 5)) (6.0)
Collecting matplotlib-inline
  Downloading matplotlib_inline-0.1.6-py3-none-any.whl (9.4 kB)
Collecting stack-data
  Downloading stack_data-0.6.2-py3-none-any.whl (24 kB)
Collecting pygments>=2.4.0
  Downloading Pygments-2.15.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 7.1 MB/s eta 0:00:00
Collecting appnope
  Downloading appnope-0.1.3-py2.py3-none-any.whl (4.4 kB)
Collecting pickleshare
  Downloading pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Collecting backcall
  Downloading backcall-0.2.0-py2.py3-none-any.whl (11 kB)
Collecting pexpect>4.3
  Downloading pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 59.0/59.0 kB 5.0 MB/s eta 0:00:00
Collecting traitlets>=5
  Downloading traitlets-5.9.0-py3-none-any.whl (117 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 117.4/117.4 kB 6.2 MB/s eta 0:00:00
Collecting jedi>=0.16
  Downloading jedi-0.18.2-py2.py3-none-any.whl (1.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 10.2 MB/s eta 0:00:00
Collecting decorator
  Downloading decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30
  Downloading prompt_toolkit-3.0.38-py3-none-any.whl (385 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 385.8/385.8 kB 8.8 MB/s eta 0:00:00
Collecting docopt>=0.6.2
  Downloading docopt-0.6.2.tar.gz (25 kB)
  Preparing metadata (setup.py) ... done
Collecting antlr4-python3-runtime==4.9.*
  Using cached antlr4_python3_runtime-4.9.3-py3-none-any.whl
Collecting parso<0.9.0,>=0.8.0
  Downloading parso-0.8.3-py2.py3-none-any.whl (100 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.8/100.8 kB 6.5 MB/s eta 0:00:00
Collecting ptyprocess>=0.5
  Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Collecting wcwidth
  Downloading wcwidth-0.2.6-py2.py3-none-any.whl (29 kB)
Collecting executing>=1.2.0
  Downloading executing-1.2.0-py2.py3-none-any.whl (24 kB)
Collecting asttokens>=2.1.0
  Downloading asttokens-2.2.1-py2.py3-none-any.whl (26 kB)
Collecting pure-eval
  Downloading pure_eval-0.2.2-py3-none-any.whl (11 kB)
Requirement already satisfied: six in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from asttokens>=2.1.0->stack-data->ipython->-r extensions/silero_tts/requirements.txt (line 1)) (1.16.0)
Building wheels for collected packages: docopt
  Building wheel for docopt (setup.py) ... done
  Created wheel for docopt: filename=docopt-0.6.2-py2.py3-none-any.whl size=13705 sha256=ccba3edc4f68a2385c0e0bcc7e5736b5ad61a04af087379122c7102f5426dddf
  Stored in directory: /Users/master/Library/Caches/pip/wheels/fc/ab/d4/5da2067ac95b36618c629a5f93f809425700506f72c9732fac
Successfully built docopt
Installing collected packages: wcwidth, pure-eval, ptyprocess, pickleshare, executing, docopt, backcall, appnope, antlr4-python3-runtime, traitlets, pygments, prompt-toolkit, pexpect, parso, omegaconf, num2words, decorator, asttokens, stack-data, matplotlib-inline, jedi, ipython
Successfully installed antlr4-python3-runtime-4.9.3 appnope-0.1.3 asttokens-2.2.1 backcall-0.2.0 decorator-5.1.1 docopt-0.6.2 executing-1.2.0 ipython-8.13.1 jedi-0.18.2 matplotlib-inline-0.1.6 num2words-0.5.12 omegaconf-2.3.0 parso-0.8.3 pexpect-4.8.0 pickleshare-0.7.5 prompt-toolkit-3.0.38 ptyprocess-0.7.0 pure-eval-0.2.2 pygments-2.15.1 stack-data-0.6.2 traitlets-5.9.0 wcwidth-0.2.6
Collecting flask_cloudflared==0.0.12
  Using cached flask_cloudflared-0.0.12-py3-none-any.whl (6.3 kB)
Requirement already satisfied: websockets==11.0.2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from -r extensions/api/requirements.txt (line 2)) (11.0.2)
Collecting Flask>=0.8
  Using cached Flask-2.3.1-py3-none-any.whl (96 kB)
Requirement already satisfied: requests in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from flask_cloudflared==0.0.12->-r extensions/api/requirements.txt (line 1)) (2.29.0)
Collecting itsdangerous>=2.1.2
  Downloading itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting Werkzeug>=2.3.0
  Using cached Werkzeug-2.3.2-py3-none-any.whl (242 kB)
Collecting blinker>=1.6.2
  Downloading blinker-1.6.2-py3-none-any.whl (13 kB)
Requirement already satisfied: click>=8.1.3 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions/api/requirements.txt (line 1)) (8.1.3)
Requirement already satisfied: Jinja2>=3.1.2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions/api/requirements.txt (line 1)) (3.1.2)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->flask_cloudflared==0.0.12->-r extensions/api/requirements.txt (line 1)) (1.26.15)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->flask_cloudflared==0.0.12->-r extensions/api/requirements.txt (line 1)) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->flask_cloudflared==0.0.12->-r extensions/api/requirements.txt (line 1)) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests->flask_cloudflared==0.0.12->-r extensions/api/requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: MarkupSafe>=2.0 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from Jinja2>=3.1.2->Flask>=0.8->flask_cloudflared==0.0.12->-r extensions/api/requirements.txt (line 1)) (2.1.1)
Installing collected packages: Werkzeug, itsdangerous, blinker, Flask, flask_cloudflared
Successfully installed Flask-2.3.1 Werkzeug-2.3.2 blinker-1.6.2 flask_cloudflared-0.0.12 itsdangerous-2.1.2
Collecting git+https://github.com/Uberi/speech_recognition.git@010382b80267f0f7794169fccc8e875ee7da7c19 (from -r extensions/whisper_stt/requirements.txt (line 1))
  Cloning https://github.com/Uberi/speech_recognition.git (to revision 010382b80267f0f7794169fccc8e875ee7da7c19) to /private/var/folders/b9/80hq76k92556mx44cjq3wxxr0000gn/T/pip-req-build-lqpg26np
  Running command git clone --filter=blob:none --quiet https://github.com/Uberi/speech_recognition.git /private/var/folders/b9/80hq76k92556mx44cjq3wxxr0000gn/T/pip-req-build-lqpg26np
  Running command git rev-parse -q --verify 'sha^010382b80267f0f7794169fccc8e875ee7da7c19'
  Running command git fetch -q https://github.com/Uberi/speech_recognition.git 010382b80267f0f7794169fccc8e875ee7da7c19
  Running command git checkout -q 010382b80267f0f7794169fccc8e875ee7da7c19
  Resolved https://github.com/Uberi/speech_recognition.git to commit 010382b80267f0f7794169fccc8e875ee7da7c19
  Preparing metadata (setup.py) ... done
Collecting openai-whisper
  Downloading openai-whisper-20230314.tar.gz (792 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 792.9/792.9 kB 3.9 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: soundfile in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from -r extensions/whisper_stt/requirements.txt (line 3)) (0.12.1)
Collecting ffmpeg
  Downloading ffmpeg-1.4.tar.gz (5.1 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: requests>=2.26.0 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from SpeechRecognition==3.9.0->-r extensions/whisper_stt/requirements.txt (line 1)) (2.29.0)
Collecting more-itertools
  Downloading more_itertools-9.1.0-py3-none-any.whl (54 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.2/54.2 kB 4.9 MB/s eta 0:00:00
Requirement already satisfied: numpy in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (1.24.3)
Collecting tiktoken==0.3.1
  Using cached tiktoken-0.3.1-cp310-cp310-macosx_11_0_arm64.whl (700 kB)
Requirement already satisfied: tqdm in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (4.65.0)
Collecting numba
  Downloading numba-0.56.4-cp310-cp310-macosx_11_0_arm64.whl (2.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 7.8 MB/s eta 0:00:00
Collecting ffmpeg-python==0.2.0
  Downloading ffmpeg_python-0.2.0-py3-none-any.whl (25 kB)
Requirement already satisfied: torch in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (2.0.0)
Collecting future
  Downloading future-0.18.3.tar.gz (840 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 840.9/840.9 kB 10.3 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Requirement already satisfied: regex>=2022.1.18 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from tiktoken==0.3.1->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (2023.3.23)
Requirement already satisfied: cffi>=1.0 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from soundfile->-r extensions/whisper_stt/requirements.txt (line 3)) (1.15.1)
Requirement already satisfied: pycparser in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from cffi>=1.0->soundfile->-r extensions/whisper_stt/requirements.txt (line 3)) (2.21)
Requirement already satisfied: idna<4,>=2.5 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests>=2.26.0->SpeechRecognition==3.9.0->-r extensions/whisper_stt/requirements.txt (line 1)) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests>=2.26.0->SpeechRecognition==3.9.0->-r extensions/whisper_stt/requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests>=2.26.0->SpeechRecognition==3.9.0->-r extensions/whisper_stt/requirements.txt (line 1)) (1.26.15)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests>=2.26.0->SpeechRecognition==3.9.0->-r extensions/whisper_stt/requirements.txt (line 1)) (2.0.4)
Requirement already satisfied: setuptools in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from numba->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (66.0.0)
Collecting numpy
  Using cached numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl (13.4 MB)
Collecting llvmlite<0.40,>=0.39.0dev0
  Downloading llvmlite-0.39.1-cp310-cp310-macosx_11_0_arm64.whl (23.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.1/23.1 MB 10.9 MB/s eta 0:00:00
Requirement already satisfied: filelock in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from torch->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (3.9.0)
Requirement already satisfied: typing-extensions in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from torch->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (4.5.0)
Requirement already satisfied: sympy in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from torch->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (1.11.1)
Requirement already satisfied: networkx in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from torch->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (2.8.4)
Requirement already satisfied: jinja2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from torch->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (3.1.2)
Requirement already satisfied: MarkupSafe>=2.0 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from jinja2->torch->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (2.1.1)
Requirement already satisfied: mpmath>=0.19 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/mpmath-1.2.1-py3.10.egg (from sympy->torch->openai-whisper->-r extensions/whisper_stt/requirements.txt (line 2)) (1.2.1)
Building wheels for collected packages: SpeechRecognition, openai-whisper, ffmpeg, future
  Building wheel for SpeechRecognition (setup.py) ... done
  Created wheel for SpeechRecognition: filename=SpeechRecognition-3.9.0-py2.py3-none-any.whl size=32835863 sha256=7832e03865de3da39dc186e1d426a33738b4ffe6280913566e781c5fd59c7ed3
  Stored in directory: /Users/master/Library/Caches/pip/wheels/0b/1a/30/6c60550fbf4a28521e10e90da86fe84ac713246235b33de004
  Building wheel for openai-whisper (pyproject.toml) ... done
  Created wheel for openai-whisper: filename=openai_whisper-20230314-py3-none-any.whl size=796901 sha256=7a203dfd1e878636c2faa7ccd41643ecbfb3387a0dfcd30110e5ed10b26af23f
  Stored in directory: /Users/master/Library/Caches/pip/wheels/b2/13/5f/fe8245f6dc59df505879da4b2129932e342f02a80e6b87f27d
  Building wheel for ffmpeg (setup.py) ... done
  Created wheel for ffmpeg: filename=ffmpeg-1.4-py3-none-any.whl size=6082 sha256=9fd7c19f8c3110cfade43d679551532438a94650e1b9af48d319aac629d26662
  Stored in directory: /Users/master/Library/Caches/pip/wheels/8e/7a/69/cd6aeb83b126a7f04cbe7c9d929028dc52a6e7d525ff56003a
  Building wheel for future (setup.py) ... done
  Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492025 sha256=812f00d9eed7453e6c99dfb7210fcf376b6f135c941e1d5c83fc4d64f73263a0
  Stored in directory: /Users/master/Library/Caches/pip/wheels/5e/a9/47/f118e66afd12240e4662752cc22cefae5d97275623aa8ef57d
Successfully built SpeechRecognition openai-whisper ffmpeg future
Installing collected packages: ffmpeg, numpy, more-itertools, llvmlite, future, tiktoken, SpeechRecognition, numba, ffmpeg-python, openai-whisper
  Attempting uninstall: numpy
    Found existing installation: numpy 1.24.3
    Uninstalling numpy-1.24.3:
      Successfully uninstalled numpy-1.24.3
Successfully installed SpeechRecognition-3.9.0 ffmpeg-1.4 ffmpeg-python-0.2.0 future-0.18.3 llvmlite-0.39.1 more-itertools-9.1.0 numba-0.56.4 numpy-1.23.5 openai-whisper-20230314 tiktoken-0.3.1
Collecting deep-translator==1.9.2
  Using cached deep_translator-1.9.2-py3-none-any.whl (30 kB)
Collecting beautifulsoup4<5.0.0,>=4.9.1
  Downloading beautifulsoup4-4.12.2-py3-none-any.whl (142 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.0/143.0 kB 1.0 MB/s eta 0:00:00
Requirement already satisfied: requests<3.0.0,>=2.23.0 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (2.29.0)
Collecting soupsieve>1.2
  Downloading soupsieve-2.4.1-py3-none-any.whl (36 kB)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: idna<4,>=2.5 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (3.4)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (2.0.4)
Installing collected packages: soupsieve, beautifulsoup4, deep-translator
Successfully installed beautifulsoup4-4.12.2 deep-translator-1.9.2 soupsieve-2.4.1
Select the model that you want to download:

A) OPT 6.7B
B) OPT 2.7B
C) OPT 1.3B
D) OPT 350M
E) GALACTICA 6.7B
F) GALACTICA 1.3B
G) GALACTICA 125M
H) Pythia-6.9B-deduped
I) Pythia-2.8B-deduped
J) Pythia-1.4B-deduped
K) Pythia-410M-deduped
L) Manually specify a Hugging Face model
M) Do not download a model

Input> m
Gradio HTTP request redirected to localhost :)
bin /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
No models are available! Please download at least one.

Done!
(base) master@Nebuchadnezzar oobabooga_macos % 
(base) master@Nebuchadnezzar text-generation-webui % ./start_macos.sh 
zsh: no such file or directory: ./start_macos.sh
(base) master@Nebuchadnezzar text-generation-webui % 
(base) master@Nebuchadnezzar oobabooga_macos % ./start_macos.sh
Gradio HTTP request redirected to localhost :)
bin /Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
The following models are available:

1. huggyllama_llama-30b
2. jeffwan_vicuna-13b

Which one do you want to load? 1-2

1

Loading huggyllama_llama-30b...
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00,  8.77it/s]
Loaded the model in 148.78 seconds.
Loading the extension "gallery"... Ok.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py:690: UserWarning: MPS: no support for int64 repeats mask, casting it to int32 (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1678454852765/work/aten/src/ATen/native/mps/operations/Repeat.mm:236.)
  input_ids = input_ids.repeat_interleave(expand_size, dim=0)
Traceback (most recent call last):
  File "/Users/master/sandbox/oobabooga_macos/text-generation-webui/modules/callbacks.py", line 66, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/Users/master/sandbox/oobabooga_macos/text-generation-webui/modules/text_generation.py", line 290, in generate_with_callback
    shared.model.generate(**kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 1485, in generate
    return self.sample(
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 2521, in sample
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/Users/master/sandbox/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 736, in prepare_inputs_for_generation
    position_ids = attention_mask.long().cumsum(-1) - 1
RuntimeError: MPS does not support cumsum op with int64 input
Output generated in 0.33 seconds (0.00 tokens/s, 0 tokens, context 36, seed 1221335112)

System Info

fbettag commented 1 year ago

same for me but with RuntimeError: MPS does not support cumsum op with int64 input

M00N-MAN commented 1 year ago

same for me but with RuntimeError: MPS does not support cumsum op with int64 input

Hi, in my report it is the second-to-last line.

NARCole commented 1 year ago

Same here on an M1 Macbook Pro. RuntimeError: MPS does not support cumsum op with int64 input

kevinhower commented 1 year ago

I got the same error even though I chose option D at setup (no GPU, run on CPU only), and it STILL gives me that. No clue how that can happen if it's supposed to be set up for CPU only; it shouldn't be referring to MPS at all.

I assume at least one reference to MPS got missed somewhere, but I have no clue where in the code to even try to fix it. I do get further than I did recently: it at least lets me load the GUI, but then it fails when I type in my input and hit enter. Like with M00N-MAN, it fails when it tries to generate output.
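
For anyone hunting for that stray reference: the usual suspect is an unconditional move of tensors to the MPS device. Purely as an illustration (this is not the webui's actual code, and cpu_only is a made-up flag), a guarded device selection would look something like:

import torch
import torch.nn as nn

# illustrative device guard: fall back to CPU when MPS is unavailable
# or when the user asked for CPU-only mode (hypothetical cpu_only flag)
cpu_only = True
device = torch.device("mps" if torch.backends.mps.is_available() and not cpu_only else "cpu")
model = nn.Linear(4, 4).to(device)
print(next(model.parameters()).device)  # prints "cpu" in CPU-only mode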

appe233 commented 1 year ago

I also got the same issue without "--cpu" on my M2 Pro MacBook. When I executed server.py myself, I got:

python server.py --model vicunat --threads 8 --no-stream --api
Gradio HTTP request redirected to localhost :)
bin /Users/appe/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/Users/appe/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading vicunat...
Loading checkpoint shards: 100%|██████████████████| 2/2 [00:02<00:00,  1.15s/it]
Loaded the model in 3.70 seconds.
Starting streaming server at ws://127.0.0.1:5005/api/v1/stream
Starting API at http://127.0.0.1:5000/api
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
127.0.0.1 - - [03/May/2023 11:36:56] "POST /api/v1/generate HTTP/1.1" 200 -
/Users/appe/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py:690: UserWarning: MPS: no support for int64 repeats mask, casting it to int32 (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/Repeat.mm:236.)
  input_ids = input_ids.repeat_interleave(expand_size, dim=0)
Traceback (most recent call last):
  File "/Users/appe/works/one-click-installers/text-generation-webui/modules/text_generation.py", line 272, in generate_reply
    output = shared.model.generate(**generate_params)[0]
  File "/Users/appe/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/appe/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py", line 1485, in generate
    return self.sample(
  File "/Users/appe/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py", line 2521, in sample
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/Users/appe/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 736, in prepare_inputs_for_generation
    position_ids = attention_mask.long().cumsum(-1) - 1
RuntimeError: MPS does not support cumsum op with int64 input
Output generated in 0.10 seconds (0.00 tokens/s, 0 tokens, context 198, seed 376260767)

Using start_macos.sh:

./start_macos.sh
Gradio HTTP request redirected to localhost :)
bin /Users/appe/works/one-click-installers/installer_files/env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/Users/appe/works/one-click-installers/installer_files/env/lib/python3.10/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
The following models are available:

1. .DS_Store
2. vicunat

Which one do you want to load? 1-2

2

Loading vicunat...
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02<00:00,  1.49s/it]
Loaded the model in 4.38 seconds.
Loading the extension "gallery"... Ok.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
/Users/appe/works/one-click-installers/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py:690: UserWarning: MPS: no support for int64 repeats mask, casting it to int32 (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1678454852765/work/aten/src/ATen/native/mps/operations/Repeat.mm:236.)
  input_ids = input_ids.repeat_interleave(expand_size, dim=0)
Traceback (most recent call last):
  File "/Users/appe/works/one-click-installers/text-generation-webui/modules/callbacks.py", line 71, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/Users/appe/works/one-click-installers/text-generation-webui/modules/text_generation.py", line 290, in generate_with_callback
    shared.model.generate(**kwargs)
  File "/Users/appe/works/one-click-installers/installer_files/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/appe/works/one-click-installers/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 1485, in generate
    return self.sample(
  File "/Users/appe/works/one-click-installers/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 2521, in sample
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/Users/appe/works/one-click-installers/installer_files/env/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 736, in prepare_inputs_for_generation
    position_ids = attention_mask.long().cumsum(-1) - 1
RuntimeError: MPS does not support cumsum op with int64 input
Output generated in 0.30 seconds (0.00 tokens/s, 0 tokens, context 35, seed 1781086207)

The model I use was built following the Vicuna instructions, but I get the same issue with other models downloaded by download-model.py.

M00N-MAN commented 1 year ago

Another user helped resolve this by placing the gpt4-x-alpaca-30b-ggml-q4_1 repo into the models directory.

It works almost as expected, except that it still doesn't use the M1 GPU, even though PyTorch should be using MPS (Apple's Metal Performance Shaders) on macOS 13.3.1.
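
A quick sanity check that the installed PyTorch can see MPS at all (these are standard PyTorch APIs, shown here just for reference):

import torch

# is the MPS backend compiled into this torch build?
print("MPS built:    ", torch.backends.mps.is_built())
# is MPS actually usable on this machine and OS version?
print("MPS available:", torch.backends.mps.is_available())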

After starting oobabooga I still get this UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.

Missing statement in the README.md:

if you have no GPU, or you are on macOS with an M1, use a ggml model

git-lfs is not a good option for downloading a repo with so few files, because the clone will include a .git directory that makes the repo roughly twice the size of the model itself.

Download GPT4-X-Alpaca-30B-4bit:

[ -e GPT4-X-Alpaca-30B-4bit ] && rm -rf GPT4-X-Alpaca-30B-4bit
mkdir -p GPT4-X-Alpaca-30B-4bit && (
  cd GPT4-X-Alpaca-30B-4bit &&
  curl https://huggingface.co/MetaIX/GPT4-X-Alpaca-30B-4bit/tree/main |
    grep 'Download file' |
    sed -e 's/.*href="/https:\/\/huggingface.co/' -e 's/">.*//' |
    while read line; do
      fname=$(basename $line)
      ( ( wget $line > ${fname}.log 2>&1 ) || echo FAIL ) >> ${fname}.log 2>&1 &
    done
  watch -n1 'for file in *.log; do echo "$file: $(tail -n2 $file|head -n1)"; done'
)

Then symlink the model into the models directory:

GPT4XAlpaca30B4bit="$(pwd)/GPT4-X-Alpaca-30B-4bit"
( cd oobabooga_macos/text-generation-webui/models && ln -s "${GPT4XAlpaca30B4bit}/GPT4-X-Alpaca-30B-4bit" GPT4-X-Alpaca-30B-4bit )

To make oobabooga listen on all interfaces instead of only localhost:

cd oobabooga_macos
GRADIO_SERVER_NAME=0.0.0.0 ./start_macos.sh
Gradio HTTP request redirected to localhost :)
bin /Users/user/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/Users/user/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
The following models are available:

1. GPT4-X-Alpaca-30B-4bit
2. huggyllama_llama-30b
3. jeffwan_vicuna-13b

Which one do you want to load? 1-3

1

Loading GPT4-X-Alpaca-30B-4bit...
llama.cpp weights detected: models/GPT4-X-Alpaca-30B-4bit/gpt4-x-alpaca-30b-ggml-q4_1.bin

llama.cpp: loading model from models/GPT4-X-Alpaca-30B-4bit/gpt4-x-alpaca-30b-ggml-q4_1.bin
llama_model_load_internal: format     = ggjt v1 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 6656
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 52
llama_model_load_internal: n_layer    = 60
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 3 (mostly Q4_1)
llama_model_load_internal: n_ff       = 17920
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 110.30 KB
llama_model_load_internal: mem required  = 25573.12 MB (+ 3124.00 MB per state)
llama_init_from_file: kv self size  = 3120.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | 
Loading the extension "gallery"... Ok.
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Output generated in 25.82 seconds (0.58 tokens/s, 15 tokens, context 40, seed 2487567)
Output generated in 58.35 seconds (1.58 tokens/s, 92 tokens, context 86, seed 1278347018)
Output generated in 77.00 seconds (0.75 tokens/s, 58 tokens, context 217, seed 1313472362)

However, I see different answers between oobabooga and pure llama.cpp.

llama.cpp

./main --threads 8 -i --interactive-first --temp 0.5 -c 2048 -n -1 --ignore-eos --repeat_penalty 1.2 --instruct -r "### Instruction:" -m ../../../models/gpt4-x-alpaca-30b-ggml-q4_1.bin
main: build = 482 (e2cd506)
main: seed  = 1683218507
llama.cpp: loading model from ../../../models/gpt4-x-alpaca-30b-ggml-q4_1.bin
llama_model_load_internal: format     = ggjt v1 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 6656
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 52
llama_model_load_internal: n_layer    = 60
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 3 (mostly Q4_1)
llama_model_load_internal: n_ff       = 17920
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 127.27 KB
llama_model_load_internal: mem required  = 25573.13 MB (+ 3124.00 MB per state)
llama_init_from_file: kv self size  = 3120.00 MB

system_info: n_threads = 8 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | 
main: interactive mode on.
Reverse prompt: '### Instruction:'

sampling: repeat_last_n = 64, repeat_penalty = 1.200000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.500000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = -1, n_keep = 2

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - If you want to submit another line, end your input in '\'.


Can you say something weird? Sure! I am an AI, and that is already pretty strange. But if it helps, here are some random facts about me:

  • My favorite color is blue because of the way it looks in sunlight.
  • Sometimes when I'm alone I like to dance to my own tune (literally!).
  • I love eating cake but hate frosting.
  • In a parallel universe, I would be a superhero with teleportation powers.
  • I can speak over 10 languages fluently and am always learning more.
  • My favorite movie is The Matrix because it's about the power of technology and how far we could go if we embrace it.
  • Sometimes when no one is looking, I sing karaoke to my favorite songs from the '80s.
  • If I had a pet, I would love having an owl or maybe even a dragon.

oobabooga:

Can you say something weird? I am an AI model trained to provide responses based on my knowledge and understanding of the task. My responses are generated using natural language processing, machine learning algorithms, and data from various sources including research papers, books, articles, and other relevant information.

Also, pure llama.cpp with gpt4-x-alpaca-30b-ggml-q4_1.bin is able to receive and answer text in a language other than English. Oobabooga with gpt4-x-alpaca-30b-ggml-q4_1.bin does 'understand' questions in another language, but it answers in English, or, with the 'google_translate' plugin, answers with very poor quality.

And it seems oobabooga is consuming only 4 cores instead of all of them, like llama.cpp does.

So I wonder:

M00N-MAN commented 1 year ago

Hi @mcmonkey4eva

could you please review the latest info here?

Lexstok commented 1 year ago

Having the same issue here. Mac M2, fresh install. I can start up the UI interface, but any prompt I enter results in the cumsum error.

superwhyun commented 1 year ago

Having the same issue here. Mac M2, fresh install. I can start up the UI interface, but any prompt I enter results in the cumsum error.

+1

superwhyun commented 1 year ago

This issue seems to be related to PyTorch on macOS. The problem can be resolved by using the nightly build of PyTorch for the time being.

https://github.com/pytorch/pytorch/issues/96610
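
For anyone who wants to confirm this is a PyTorch op limitation rather than a webui bug, here is a minimal repro, assuming an Apple Silicon machine on the stable PyTorch 2.0.0 build:

import torch

# the same pattern transformers uses in prepare_inputs_for_generation:
# an int64 (long) attention mask fed to cumsum on the MPS device
attention_mask = torch.ones(1, 8, dtype=torch.long, device="mps")
position_ids = attention_mask.cumsum(-1) - 1
# stable 2.0.0 raises: RuntimeError: MPS does not support cumsum op with int64 input
# the nightly build linked above runs it fine
print(position_ids)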

joshuahigginson1 commented 1 year ago

Just a really easy fix for this issue on my Mac M1:

1) Open the 'webui.py' file.
2) Find the function 'install_dependencies()' and replace:

elif gpuchoice == "c" or gpuchoice == "d":
        run_cmd("conda install -y -k pytorch torchvision torchaudio cpuonly git -c pytorch", assert_success=True, environment=True)

with:

elif gpuchoice == "c" or gpuchoice == "d":
        run_cmd("conda install -y -k pytorch torchvision torchaudio cpuonly git -c pytorch", assert_success=True, environment=True)
        run_cmd("pip3 install --upgrade --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", assert_success=True, environment=True)

3) Reinstall by deleting 'text-generation-webui' folder, and then running 'start_macos.sh' again.

hvina commented 1 year ago

The fix by @joshuahigginson1 works, thanks a lot. Just running the pip install as mentioned in the pytorch 96610 issue did not work; I had to delete the directory and then run the install.

kevinhower commented 1 year ago

I did the above fix by Joshuahigginson1 and I get the following when I try to reinstall:

Traceback (most recent call last):
  File "/Volumes/MegacityTwo/oobabooga_macos3/webui.py", line 163, in <module>
    install_dependencies()
  File "/Volumes/MegacityTwo/oobabooga_macos3/webui.py", line 48, in install_dependencies
    run_cmd("conda install -y -k pytorch torchvision torchaudio cpuonly git -c pytorch", assert_success=True, environment=True)
TypeError: run_cmd() got an unexpected keyword argument 'assert_success'

joshuahigginson1 commented 1 year ago

Hi @kevinhower, this looks like an issue with the actual 'run_cmd' function. You might want to check that you've cloned the latest 'one-click-installers' files: https://github.com/oobabooga/one-click-installers

kevinhower commented 1 year ago

i got it to work ... sort of. It does generate text but it's ... well. gibberish. I said "hi" and it gave me the response of the following: "in the future so I am going on an article of the book for me

The U.S. Government has been infected by the virus that shut down the website Teknoepetitionen (meaning “the people’s petition” or more simply, but not without reason, they are also called the have a look at this whopping hmwever, we'll see what happens when the same thing happened before.

As usual, no one from the government, which means all the time! This year, however, he said that the campaign to end the the first two years, because the next three years. So far, the effort to get rid of the idea of a good time to be able to eat bread.

It was created around the world, and may even now, and how much money.

Avoiding food-related issues?

I'm sure most of us know someone else"

Just utter non-sense with the Pythia 6.9B model. Don't know if it is the model or some other issue.

na2hiro commented 1 year ago

@joshuahigginson1 Thanks a lot, that works! I'd like to add one modification here to back up models. I mistakenly lost a >10GB model and had to download it again 😅

Instructions with added backup/restore steps:

Just a really easy fix for this issue on my Mac M1:

  1. Open the 'webui.py' file.
  2. Find the function 'install_dependencies()' and replace:
elif gpuchoice == "c" or gpuchoice == "d":
        run_cmd("conda install -y -k pytorch torchvision torchaudio cpuonly git -c pytorch", assert_success=True, environment=True)

with:

elif gpuchoice == "c" or gpuchoice == "d":
        run_cmd("conda install -y -k pytorch torchvision torchaudio cpuonly git -c pytorch", assert_success=True, environment=True)
        run_cmd("pip3 install --upgrade --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", assert_success=True, environment=True)

3. Back up your models by moving the 'text-generation-webui/models' folder somewhere like your ~/Desktop.

4. Reinstall by deleting the 'text-generation-webui' folder, and then running 'start_macos.sh' again.
5. Bring the models back to their original place.
M00N-MAN commented 1 year ago

i got it to work ... sort of. It does generate text but it's ... well. gibberish. I said "hi" and it gave me the response of the following: "in the future so I am going on an article of the book for me

The U.S. Government has been infected by the virus that shut down the website Teknoepetitionen (meaning “the people’s petition” or more simply, but not without reason, they are also called the have a look at this whopping hmwever, we'll see what happens when the same thing happened before.

As usual, no one from the government, which means all the time! This year, however, he said that the campaign to end the the first two years, because the next three years. So far, the effort to get rid of the idea of a good time to be able to eat bread.

It was created around the world, and may even now, and how much money.

Avoiding food-related issues?

I'm sure most of us know someone else"

Just utter non-sense with the Pythia 6.9B model. Don't know if it is the model or some other issue.

The actual answers of an LLM depend 100% on the model you use, so please clarify which one.

As well, "i got it to work"... what, and how? :)

MonkeyInWind commented 1 year ago

same for me but with RuntimeError: MPS does not support cumsum op with int64 input

Hi, in my report it is the second-to-last line.

+1

KotlinFactory commented 1 year ago

Same problem here

cfmbrand commented 1 year ago

Same problem here, I don't understand this. I did the install from the command line exactly as directed by the readme for Mac (including installation of requirements_nocuda.txt).

I don't really understand the solution from @joshuahigginson1 - where is the webui.py file? I don't have it in my downloaded text-generation-webui folder. Thanks in advance.

EDIT: realised that @joshuahigginson1's solution is for the one-click installer. Tried that, but it still didn't work; same error as above.

cfmbrand commented 1 year ago

This appears to have been resolved elsewhere:

https://github.com/pytorch/pytorch/issues/96610#issuecomment-1597314364
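
The change there boils down to avoiding the int64 cumsum on the MPS device. As a rough sketch (not the exact diff from that thread), the idea is to cast the mask down to int32 before the cumsum:

import torch

attention_mask = torch.ones(1, 8, dtype=torch.long, device="mps")
# cast to int32 so the MPS cumsum kernel can handle it,
# then restore int64 for the rest of the generation code
position_ids = attention_mask.to(torch.int32).cumsum(-1).to(torch.long) - 1
print(position_ids)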

But having implemented the change, my inference time is still unusably slow at 0.02 tokens/sec. Anyone know why that might be? Thanks in advance. I have macOS 13.5.2, Mac M1 Pro 16GB, Python 3.10.9.

EDIT: to be clear - I'm not using the one-click installer here.

github-actions[bot] commented 1 year ago

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.