Closed techpros4125 closed 1 month ago
Hey @techpros4125,
Thanks for reporting!
I believe the engine isn't utilizing the GPU on macOS because a crucial file is missing. Could you please try downloading the file ggml-medium-encoder.mlmodelc.zip (https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium-encoder.mlmodelc.zip?download=true)?
Once downloaded, unzip it by clicking on it in Finder. Then, navigate to the settings in Vibe, select 'Open Models Folder', and drag and drop the unzipped file there (the filename should be ggml-medium-encoder.mlmodelc). After that, try transcribing again. The first attempt might take 2-5 minutes, as it sets up something related to the encoder. Subsequent uses should be faster. Let me know if it starts using the GPU or if it's faster. Thanks!
Hi,
Thanks for prompt reply.
I have downloaded the zip file you sent. Just to confirm: after unzipping, I end up with a folder ending in .mlmodelc, not a file (I have tried several unzip apps). I also tried several different .mlmodelc.zip files on Hugging Face; they did not give me a .mlmodelc file, only a folder. And it is still not using the GPU even after I drag and drop the folder into the models folder.
@techpros4125
Sorry, I meant that you need to drag the .mlmodelc file to Vibe's models folder (not the whole unzipped folder).
Also, if you want to get nice logs while trying it, you can open Vibe from the terminal by executing:

```
RUST_LOG=vibe /Applications/vibe.app/Contents/MacOS/vibe
```
I mean, after unzipping the file I did not see any file ending in .mlmodelc, only a folder ggml-medium-encoder.mlmodelc, like this:
When you unzip the zip file, you should get a new folder (or a new item named ggml-medium-encoder.mlmodelc). The ggml-medium-encoder.mlmodelc bundle should be inside this unzipped folder, and you can then drag it directly into Vibe's models folder (which you can open through Vibe's settings).
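In shell terms, the move described above looks roughly like this. Note that a .mlmodelc is a compiled Core ML bundle, which is a directory on disk, so seeing a folder rather than a single file is expected. The helper name and paths below are hypothetical; the real destination is whatever folder "Open Models Folder" opens in Vibe's settings.

```shell
#!/bin/sh
# Hypothetical helper: move an unzipped .mlmodelc bundle into a models folder.
# Usage: move_mlmodelc <unzipped-bundle> <models-dir>
move_mlmodelc() {
  src="$1"; dest="$2"
  # A .mlmodelc bundle is a directory, so `-d` is the right existence check.
  [ -d "$src" ] || { echo "not a .mlmodelc bundle: $src" >&2; return 1; }
  mkdir -p "$dest"
  mv "$src" "$dest/"
}
```

Example usage (paths illustrative): `move_mlmodelc "$HOME/Downloads/ggml-medium-encoder.mlmodelc" "<Vibe models folder>"`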
Hi, I don't know if it helps, but the same thing happens for me here.
The GPU is not used at all.
Here is a screenshot showing Activity Monitor, the folder into which I moved the unzipped file, and the logs I get in the console.
@florianchevallier
Thanks it helps.
I'm going to release a new version soon that uses the latest version of whisper.cpp.
Meanwhile, I can publish a pre-release if you would like to try it.
Yeah, no problem. Tag me when it's released and I'll try it ;-)
I added vibe_1.0.9_coreml_metal_use_gpu_whisper_1.6.2.dmg to vibe/releases/v1.0.9.
It uses the latest whisper.cpp with the CoreML and Metal frameworks enabled for GPU optimization.
I also have a Mac with an M1, and it looks like it uses only 5% of the GPU, but transcription is pretty fast: 1 minute of audio in 10 seconds.
Hey!
On my side, no difference with build 20240527.130900 :(
Thanks for checking!
I found why it didn't use the GPU, fixed it, and released a new version.
You can update through the main window or from https://thewh1teagle.github.io/vibe
In addition, if you would like to improve it even more, check out INSTALL.md#macos-with-coreml
Thank you, it's much, much better! Transcribed 1:30 hours in a few minutes :-)
What happened?
The transcribing is extremely slow on my M2 Macbook Pro.
Steps to reproduce
I am using an M2 chip MacBook Pro. Transcribing is quite slow, and when I check the GPU usage it shows 0%. Is this normal?
I did not find any setting related to the GPU.
What OS are you seeing the problem on?
macOS
Relevant log output