pluja / whishper

Transcribe any audio to text, translate and edit subtitles 100% locally with a web UI. Powered by whisper models!
https://whishper.net
GNU Affero General Public License v3.0

[BUG] Error transcribing [...] dial tcp 127.0.0.1:8000: connect: connection refused" #116

Closed: Norgus closed this issue 1 week ago

Norgus commented 2 months ago

Description

Cannot transcribe with a completely vanilla install, all default settings (127.0.0.1 / localhost, port 8082, etc.)

To Reproduce

Steps to reproduce the behavior:

  1. Install as per instructions on the https://whishper.net/guides/install/ and GPU support pages
  2. Open http://localhost:8082/ (using 127.0.0.1, localhost, or my LAN IP makes no difference)
  3. Add a new transcription job; choosing CPU or GPU doesn't change the result.
  4. Get back the error: "id-66bbc0503b93a6926ab262d2 Failed: Could not transcribe"

Expected behavior

The audio should be fed through a whisper model and transcribed.

Environment

Logs and Configuration

Docker Compose Logs

Run the following command in the project folder, force the error, and paste the logs below: docker compose logs -f --tail 50

whisper-libretranslate  | Updating language models
whisper-libretranslate  | Updating language models
whishper                | 2024-08-13 19:32:41,075 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
whishper                | 2024-08-13 19:32:41,075 INFO supervisord started with pid 1
whishper                | 2024-08-13 19:32:42,078 INFO spawned: 'backend' with pid 7
whishper                | 2024-08-13 19:32:42,079 INFO spawned: 'frontend' with pid 8
whishper                | 2024-08-13 19:32:42,080 INFO spawned: 'nginx' with pid 9
whishper                | 2024-08-13 19:32:42,081 INFO spawned: 'transcription' with pid 10
whishper                | 2024-08-13 19:32:43,128 INFO success: backend entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
whishper                | 2024-08-13 19:32:43,128 INFO success: frontend entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
whishper                | 2024-08-13 19:32:43,128 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
whishper                | 2024-08-13 19:32:43,128 INFO success: transcription entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
whishper                | 2024-08-13 20:01:49,427 WARN received SIGTERM indicating exit request
whishper                | 2024-08-13 20:01:49,427 INFO waiting for backend, frontend, nginx, transcription to die
whishper                | 2024-08-13 20:01:50,442 INFO stopped: transcription (terminated by SIGTERM)
whishper                | 2024-08-13 20:01:50,444 INFO stopped: nginx (exit status 0)
whishper                | 2024-08-13 20:01:50,448 INFO stopped: frontend (terminated by SIGTERM)
whishper                | 2024-08-13 20:01:50,458 INFO stopped: backend (terminated by SIGTERM)
whishper                | 2024-08-13 20:02:00,703 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
whishper                | 2024-08-13 20:02:00,704 INFO supervisord started with pid 1
whishper                | 2024-08-13 20:02:01,706 INFO spawned: 'backend' with pid 7
whishper                | 2024-08-13 20:02:01,707 INFO spawned: 'frontend' with pid 8
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.451+00:00"},"s":"I",  "c":"CONTROL",  "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.451+00:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.451+00:00"},"s":"I",  "c":"CONTROL",  "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"REPL",     "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"REPL",     "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"-",        "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"-",        "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":3}}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"TENANT_M", "id":5093807, "ctx":"SignalHandler","msg":"Shutting down all TenantMigrationAccessBlockers on global shutdown"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"TenantMigrationBlockerNet","msg":"Killing all outstanding egress activity."}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"ASIO",     "id":6529201, "ctx":"SignalHandler","msg":"Network interface redundant shutdown","attr":{"state":"Stopped"}}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"SignalHandler","msg":"Killing all outstanding egress activity."}
whishper                | 2024-08-13 20:02:01,708 INFO spawned: 'nginx' with pid 9
whishper                | 2024-08-13 20:02:01,709 INFO spawned: 'transcription' with pid 10
whishper                | 2024-08-13 20:02:02,758 INFO success: backend entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
whishper                | 2024-08-13 20:02:02,758 INFO success: frontend entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
whishper                | 2024-08-13 20:02:02,758 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
whishper                | 2024-08-13 20:02:02,758 INFO success: transcription entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"COMMAND",  "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"REPL",     "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"INDEX",    "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"COMMAND",  "id":4784923, "ctx":"SignalHandler","msg":"Shutting down the ServiceEntryPoint"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"CONTROL",  "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"CONTROL",  "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"INDEX",    "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"INDEX",    "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"CONTROL",  "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"CONTROL",  "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"CONTROL",  "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":22320,   "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":22321,   "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":22322,   "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":22323,   "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":22261,   "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":20282,   "ctx":"SignalHandler","msg":"Deregistering all the collections"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":22317,   "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":22318,   "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.452+00:00"},"s":"I",  "c":"STORAGE",  "id":22319,   "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.458+00:00"},"s":"I",  "c":"STORAGE",  "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,use_timestamp=false,"}}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.459+00:00"},"s":"I",  "c":"WTCHKPT",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1723577561,"ts_usec":459371,"thread":"29:0x7f2f37e00640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 46, snapshot max: 46 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.569+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1723577561,"ts_usec":569783,"thread":"29:0x7f2f37e00640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 111 milliseconds"}}}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.569+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1723577561,"ts_usec":569867,"thread":"29:0x7f2f37e00640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 111ms, including 0ms for the rollback to stable, and 111ms for the checkpoint."}}}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.601+00:00"},"s":"I",  "c":"STORAGE",  "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":143}}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.601+00:00"},"s":"I",  "c":"STORAGE",  "id":22279,   "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.601+00:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.601+00:00"},"s":"I",  "c":"FTDC",     "id":20626,   "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.602+00:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"SignalHandler","msg":"Now exiting"}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.602+00:00"},"s":"I",  "c":"CONTROL",  "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":{"Summary of time elapsed":{"Statistics":{"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"0 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"0 ms","Shut down WaitForMajorityService":"0 ms","Shut down the logical session cache":"0 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"149 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"151 ms"}}}}
mongo-1                 | {"t":{"$date":"2024-08-13T19:32:41.602+00:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
mongo-1                 | 
mongo-1                 | MongoDB init process complete; ready for start up.
mongo-1                 | 
mongo-1                 | {"t":{"$date":"2024-08-13T20:02:00.443Z"},"s":"I",  "c":"CONTROL",  "id":20697,   "ctx":"main","msg":"Renamed existing log file","attr":{"oldLogPath":"/var/log/mongodb/mongod.log","newLogPath":"/var/log/mongodb/mongod.log.2024-08-13T20-02-00"}}

Docker Compose File

version: "3.9"

services:
  mongo:
    image: mongo
    env_file:
      - .env
    restart: unless-stopped
    volumes:
      - ./whishper_data/db_data:/data/db
      - ./whishper_data/db_data/logs/:/var/log/mongodb/
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DB_USER:-whishper}
      MONGO_INITDB_ROOT_PASSWORD: ${DB_PASS:-whishper}
    expose:
      - 27017
    command: ['--logpath', '/var/log/mongodb/mongod.log']

  translate:
    container_name: whisper-libretranslate
    image: libretranslate/libretranslate:latest-cuda
    restart: unless-stopped
    volumes:
      - ./whishper_data/libretranslate/data:/home/libretranslate/.local/share
      - ./whishper_data/libretranslate/cache:/home/libretranslate/.local/cache
    env_file:
      - .env
    user: root
    tty: true
    environment:
      LT_DISABLE_WEB_UI: True
      LT_LOAD_ONLY: ${LT_LOAD_ONLY:-en,fr,es}
      LT_UPDATE_MODELS: True
    expose:
      - 5000
    networks:
      default:
        aliases:
          - translate
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]

  whishper:
    pull_policy: always
    image: pluja/whishper:${WHISHPER_VERSION:-latest-gpu}
    env_file:
      - .env
    volumes:
      - ./whishper_data/uploads:/app/uploads
      - ./whishper_data/logs:/var/log/whishper
    container_name: whishper
    restart: unless-stopped
    networks:
      default:
        aliases:
          - whishper
    ports:
      - 8082:80
    depends_on:
      - mongo
      - translate
    environment:
      PUBLIC_INTERNAL_API_HOST: "http://127.0.0.1:80"
      PUBLIC_TRANSLATION_API_HOST: ""
      PUBLIC_API_HOST: ${WHISHPER_HOST:-}
      PUBLIC_WHISHPER_PROFILE: gpu
      WHISPER_MODELS_DIR: /app/models
      UPLOAD_DIR: /app/uploads
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]

whishper_data/logs/backend.err.log

ends in:

8:13PM ERR Error transcribing error="Post \"http://127.0.0.1:8000/transcribe?model_size=small&task=transcribe&language=auto&device=cuda\": dial tcp 127.0.0.1:8000: connect: connection refused"

FWIW, my computer doesn't report any process listening on TCP port 8000.
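For anyone hitting the same symptom: the error URL points at port 8000 *inside* the whishper container (where supervisord runs the transcription service), not at a port published on the host, so checking the host alone is not conclusive. A rough way to look in both places (container name taken from the compose file above; the port number is inferred from the error message):

```shell
# On the host: port 8000 is not published in the compose file,
# so nothing is expected to show up here anyway.
ss -ltn | grep ':8000' || echo "no listener on host port 8000"

# Inside the whishper container: list listening TCP sockets
# (falls back to netstat in case ss is not present in the image).
docker exec whishper sh -c 'ss -ltn 2>/dev/null || netstat -ltn'
```

If the second command shows 8080 (backend) and 3000 (frontend) but nothing on 8000, the transcription process itself has likely failed to start, which is the thing to dig into.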

Norgus commented 2 months ago

hi, do you still want to see the env file? I got an email notification of your message, but can't see it on github, so maybe you deleted the response?

On Wed, 28 Aug 2024, 03:00 Yu Sheng, @.***> wrote:

Can you share your .env file? Its strange that you are trying to transcribe in port 8000


NotYuSheng commented 2 months ago

> hi, do you still want to see the env file? I got an email notification of your message, but can't see it on github, so maybe you deleted the response?

My bad, my comment was wrong so I deleted it.

NotYuSheng commented 2 months ago

Potential fix for your issue:

Stop your containers, delete the whishper_data folder, and run this command:

sudo mkdir -p ./whishper_data/libretranslate/{data,cache}
sudo chown -R 1032:1032 whishper_data/libretranslate
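To confirm the fix above actually took effect before restarting the containers, you can check the numeric ownership (UID/GID 1032 is the user the libretranslate image is assumed to run as, per the commands above):

```shell
# Every entry should report numeric owner and group 1032.
stat -c '%u %g %n' whishper_data/libretranslate/*

# Equivalent view with long listing in numeric form:
ls -ln whishper_data/libretranslate
```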
Norgus commented 2 months ago

I gave it a go and restarted the PC afterwards to be sure, but I still get a transcription error when I try to test it. Here are all the new logs.

$ cat whishper_data/logs/*
File: whishper_data/logs/backend.err.log
7:26AM INF Starting monitor!
7:26AM ERR Error sending transcription request error="Post \"http://127.0.0.1:8000/transcribe?model_size=small&task=transcribe&language=auto&device=cuda\": dial tcp 127.0.0.1:8000: connect: connection refused"
7:26AM ERR Error transcribing error="Post \"http://127.0.0.1:8000/transcribe?model_size=small&task=transcribe&language=auto&device=cuda\": dial tcp 127.0.0.1:8000: connect: connection refused"
7:28AM ERR Error sending transcription request error="Post \"http://127.0.0.1:8000/transcribe?model_size=large-v2&task=transcribe&language=auto&device=cuda\": dial tcp 127.0.0.1:8000: connect: connection refused"
7:28AM ERR Error transcribing error="Post \"http://127.0.0.1:8000/transcribe?model_size=large-v2&task=transcribe&language=auto&device=cuda\": dial tcp 127.0.0.1:8000: connect: connection refused"

File: whishper_data/logs/backend.out.log

┌───────────────────────────────────────────────────┐
│                   Fiber v2.50.0                   │
│               http://127.0.0.1:8080               │
│       (bound on host 0.0.0.0 and port 8080)       │
│                                                   │
│ Handlers ............ 13  Processes ........... 1 │
│ Prefork ....... Disabled  PID ................. 7 │
└───────────────────────────────────────────────────┘

File: whishper_data/logs/frontend.err.log   <EMPTY>

File: whishper_data/logs/frontend.out.log
Listening on 0.0.0.0:3000

File: whishper_data/logs/nginx.err.log   <EMPTY>

File: whishper_data/logs/nginx.out.log   <EMPTY>

File: whishper_data/logs/transcription.err.log
An error occured while synchronizing the model Systran/faster-whisper-tiny from the Hugging Face Hub:
Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and outgoing traffic has been disabled. To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.
Trying to load the model directly from the local cache, if it exists.

File: whishper_data/logs/transcription.out.log   <EMPTY>
Norgus commented 2 months ago

FWIW, my browser console repeatedly shows an error about parsing JSON from http://localhost:8082/languages - but when I test this URL directly, I get a 502 Bad Gateway nginx error page.
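A 502 from nginx on /languages suggests the upstream behind that route is not answering. Per the compose file above, LibreTranslate exposes port 5000 and the transcription service is being dialed on 8000, so a rough way to bypass the proxy and probe each service directly might be (container names from the compose file; whether curl/wget exist in these images, and the exact endpoints, are assumptions):

```shell
# Ask LibreTranslate directly for its language list on its exposed port 5000.
docker exec whisper-libretranslate curl -sf http://localhost:5000/languages \
  || echo "libretranslate is not answering on :5000"

# Check whether anything answers on the port the backend is dialing
# (8000, per the connection-refused error in backend.err.log).
docker exec whishper sh -c 'wget -qO- http://127.0.0.1:8000/ || echo "nothing on :8000"'
```

Whichever probe fails first narrows the 502 down to that service rather than the nginx config.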

MarslMarcello commented 1 week ago

I'm afraid the yt-dl stuff is not working. I can transcribe downloaded videos via Choose File, but whenever I paste a YT link I end up with the error "Error transcribing [...] dial tcp 127.0.0.1:8000: connect: connection refused".

1977Mario commented 1 week ago

I guess you need to download the LibreTranslate language packs; that solved my issue. Also, I see you are using port 8000. Unless it is customized in the .env file, it should be port 8082. If that doesn't help, you can try giving the configuration the same IP as your PC; that helps sometimes. http://127.0.0.1:8082 should, however, do the trick.

Norgus commented 1 week ago

I don't understand why my browser console log has the error about port 8000, as I did not customise the config and, indeed, I access the page itself successfully on port 8082.

I think I'll close the issue, since it doesn't seem to be helping get to the bottom of anything and is attracting unrelated responses.

MarslMarcello commented 1 week ago

@1977Mario I did not change anything in the .env file or the ports. It worked before. I have the feeling that since Docker pulled the new images, the YT download no longer works. The LibreTranslate language packs are downloaded. Uploading files works without any issues. I will take a detailed look and see what I can find.