Open VuBrian22 opened 1 year ago
Actually, I'm experimenting with WhisperX right now because my experience with whisper.cpp hasn't been great. It might have multi-GPU support; I'll check. Even if it doesn't, it should be fine, since I've heard it's incredibly fast.
Running the Whisper STT model locally would remove the need for the first step of calling OpenAI's API. That said, this particular model (whisper-large-v2) is computationally expensive, so multi-GPU support would help alleviate the cost.
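For reference, here's a minimal sketch of loading large-v2 locally with WhisperX, based on its README (the audio filename and batch size are placeholders; this assumes a CUDA-capable GPU and the `whisperx` package installed):

```python
import whisperx

device = "cuda"  # assumes a single CUDA GPU; multi-GPU would need extra handling

# Load the large-v2 model locally -- no OpenAI API call involved
model = whisperx.load_model("large-v2", device, compute_type="float16")

# "audio.mp3" is a placeholder path
audio = whisperx.load_audio("audio.mp3")

# batch_size is tunable; larger values trade VRAM for throughput
result = model.transcribe(audio, batch_size=16)
print(result["segments"])
```

Note this only covers single-GPU inference; if WhisperX lacks native multi-GPU support, one workaround is sharding audio files across processes pinned to different devices.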