Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.
https://llamafile.ai

Bug: Whisperfile: can't turn off translation #599

Open hheexx opened 4 weeks ago

hheexx commented 4 weeks ago

What happened?

I use whisperfile without the -tr flag, but it translates anyway. How do I turn it off?

./whisper-large-v3.llamafile -f ../whisper/2570523.wav

Version

whisperfile v0.8.13

What operating system are you seeing the problem on?

WSL2

Relevant log output

➜  whisperfile ./whisper-large-v3.llamafile -f ../whisper/2570523.wav
whisper_init_from_file_with_params_no_state: loading model from '/zip/ggml-large-v3.bin'
whisper_init_with_params_no_state: cuda gpu   = 0
whisper_init_with_params_no_state: metal gpu  = 0
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw        = 0
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51866
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head  = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 1280
whisper_model_load: n_text_head   = 20
whisper_model_load: n_text_layer  = 32
whisper_model_load: n_mels        = 128
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs       = 100
whisper_model_load:      CPU total size =  3094.36 MB
whisper_model_load: model size    = 3094.36 MB
whisper_init_state: kv self size  =  251.66 MB
whisper_init_state: kv cross size =  251.66 MB
whisper_init_state: kv pad  size  =    7.86 MB
whisper_init_state: compute buffer (conv)   =   36.40 MB
whisper_init_state: compute buffer (encode) =  926.80 MB
whisper_init_state: compute buffer (cross)  =    9.52 MB
whisper_init_state: compute buffer (decode) =  213.32 MB

system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0

main: processing '../whisper/2570523.wav' (3478400 samples, 217.4 sec), 8 threads, 1 processors, 5 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...

[00:00:00.000 --> 00:00:17.960]   Hello, good afternoon. I got a potential buyer. However, when I click on the link, he tells me that the link you followed has expired. I ask you to start the process again. You sent me an email a while ago.

JunkMeal commented 2 weeks ago

I have the same problem with whisperfile v0.8.13 from Hugging Face.

JunkMeal commented 2 weeks ago

So basically whisper.cpp supports additional options beyond those listed in --help, so this works for me:

./whisper-large-v3.llamafile --output-srt -l de --gpu auto -f output2.mp3

https://github.com/Mozilla-Ocho/llamafile/blob/9b965020c5707e0ab236dca27e4a5a8cafd1d509/whisper.cpp/main.cpp#L160-L209
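For anyone else hitting this: the log above shows `lang = en, task = transcribe`, so when no language is given, whisperfile appears to default the source language to English, which effectively makes Whisper render non-English audio as English. A sketch of the workaround, using the `-l`/`--language` and `-tr`/`--translate` options from whisper.cpp's main.cpp linked above (the file path and the `es` language code are just placeholders for your own input):

```shell
# Let Whisper detect the source language, then transcribe in that language:
./whisper-large-v3.llamafile -l auto -f ../whisper/2570523.wav

# Or name the source language explicitly (e.g. Spanish):
./whisper-large-v3.llamafile -l es -f ../whisper/2570523.wav

# Translation to English then only happens when asked for explicitly:
./whisper-large-v3.llamafile -tr -l es -f ../whisper/2570523.wav
```

In other words, the issue may be less that translation is "on" and more that the default `-l en` makes the model treat English as the target, so pinning the real source language keeps the transcript in that language.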