SociallyIneptWeeb / LanguageLeapAI

Your Personal Multilingual AI Translator

whisper error #102

Open · ThunderSilver opened this issue 1 year ago

ThunderSilver commented 1 year ago

I get this when I try to start whisper:

```
2023-05-18 16:30:08 [2023-05-18 14:30:08 +0000] [8] [INFO] Booting worker with pid: 8
2023-05-18 16:30:14 [2023-05-18 14:30:14 +0000] [8] [ERROR] Exception in worker process
2023-05-18 16:30:14 Traceback (most recent call last):
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
2023-05-18 16:30:14     worker.init_process()
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/uvicorn/workers.py", line 66, in init_process
2023-05-18 16:30:14     super(UvicornWorker, self).init_process()
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/gunicorn/workers/base.py", line 134, in init_process
2023-05-18 16:30:14     self.load_wsgi()
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
2023-05-18 16:30:14     self.wsgi = self.app.wsgi()
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/gunicorn/app/base.py", line 67, in wsgi
2023-05-18 16:30:14     self.callable = self.load()
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
2023-05-18 16:30:14     return self.load_wsgiapp()
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
2023-05-18 16:30:14     return util.import_app(self.app_uri)
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/gunicorn/util.py", line 359, in import_app
2023-05-18 16:30:14     mod = importlib.import_module(module)
2023-05-18 16:30:14   File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
2023-05-18 16:30:14     return _bootstrap._gcd_import(name[level:], package, level)
2023-05-18 16:30:14   File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
2023-05-18 16:30:14   File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
2023-05-18 16:30:14   File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
2023-05-18 16:30:14   File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
2023-05-18 16:30:14   File "<frozen importlib._bootstrap_external>", line 883, in exec_module
2023-05-18 16:30:14   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
2023-05-18 16:30:14   File "/app/app/webservice.py", line 66, in <module>
2023-05-18 16:30:14     faster_whisper_model = WhisperModel(faster_whisper_model_path, device="cuda", compute_type="float16")
2023-05-18 16:30:14   File "/app/.venv/lib/python3.10/site-packages/faster_whisper/transcribe.py", line 101, in __init__
2023-05-18 16:30:14     self.model = ctranslate2.models.Whisper(
2023-05-18 16:30:14 ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
2023-05-18 16:30:14 [2023-05-18 14:30:14 +0000] [8] [INFO] Worker exiting (pid: 8)
2023-05-18 16:30:14 output directory /root/.cache/faster_whisper/small already exists, use --force to override
2023-05-18 16:30:15 [2023-05-18 14:30:15 +0000] [7] [INFO] Shutting down: Master
2023-05-18 16:30:15 [2023-05-18 14:30:15 +0000] [7] [INFO] Reason: Worker failed to boot.
```

This breaks the whole pipeline, since the translator never receives any input from whisper.
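For context, the line that fails is the `WhisperModel(...)` call in `webservice.py` (line 66 of the traceback). Below is a minimal sketch of a possible workaround, not the project's actual fix, assuming the container's GPU or CUDA backend simply lacks efficient FP16 support: fall back to a compute type the device can handle. `MODEL_PATH` is a hypothetical stand-in for `faster_whisper_model_path`.

```python
# Minimal sketch: retry with a compute type the device supports when the
# ValueError above ("does not support efficient float16") is raised.
from faster_whisper import WhisperModel

MODEL_PATH = "small"  # hypothetical stand-in for faster_whisper_model_path

try:
    # The call from webservice.py that currently fails on this machine.
    model = WhisperModel(MODEL_PATH, device="cuda", compute_type="float16")
except ValueError:
    # float32 works on any CUDA device; alternatively use device="cpu" with
    # compute_type="int8" if no usable GPU is visible inside the container.
    model = WhisperModel(MODEL_PATH, device="cuda", compute_type="float32")
```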