dazzng opened 11 months ago
It does run on macOS. However, you have to use Python 3, so install python3 via brew, and instead of pip, use pip3. And yes, you need to run it in the terminal.
Thank you, I will try this and come back to you if problems come up.
I couldn't figure out: what are the supported --compute_type values on an M1 Mac (ARM64)?
Just use auto, the same value I use on the x86-64 platform. It will auto-select the best value for the specific platform.
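For what it's worth, the flag just goes on the normal invocation. A minimal sketch (the file name here is only an example; auto is accepted on every platform, so there is no M1-specific list to consult):

```shell
# Build the command line; "auto" lets CTranslate2 pick the best
# compute type for the CPU it is running on (ARM64 or x86-64 alike).
cmd="whisper-ctranslate2 recording.m4a --model large --compute_type auto"
echo "$cmd"
```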
Does this only work for mp3? I have an m4a file recorded on my phone.
Also, where do I need to execute the command whisper-ctranslate2 (file name) --model large? When I run it in the folder where the recording is located, it says the whisper-ctranslate2 command is not found.
If you installed python3 via brew install python3, it should work; otherwise I cannot help you, since you did something different from what I did on my machine.
You have to install brew from www.brew.sh, then install python3 with the command brew install python3, and then you can install whisper-ctranslate2 with pip3 install -U whisper-ctranslate2. Then exit the terminal and launch it again, and you'll have access to the whisper-ctranslate2 command. You can verify the version with whisper-ctranslate2 --version, which should show whisper-ctranslate2 0.3.4 (version 0.3.4 as of today; it should be 0.3.5 if you used pip3 install git+https://github.com/jordimas/whisper-ctranslate2.git).
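The steps above, collected into one runnable sequence (assuming nothing is installed yet; the Homebrew bootstrap line is the one published on brew.sh):

```shell
# 1. Install Homebrew (official bootstrap command from https://brew.sh)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# 2. Install Python 3 (provides the pip3 command)
brew install python3

# 3. Install whisper-ctranslate2 with pip3, not pip
pip3 install -U whisper-ctranslate2

# 4. Close and reopen the terminal, then verify the install
whisper-ctranslate2 --version
```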
The audio format doesn't matter: whisper-ctranslate2 automatically converts the input to 16 kHz mono audio internally before the inference engine transcribes it.
You can also specify an audio file in another location by prepending the absolute path to the filename.
The folder where you launch the whisper-ctranslate2 command (the current folder) will contain the transcription or the subtitles of the video, saved as <name of the video>.vtt.
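A small sketch of that naming, in pure shell with a hypothetical path (no transcription involved):

```shell
# The transcript keeps the input's base name with a .vtt extension
# and is written to the current folder, not next to the source file.
input="/Users/me/Recordings/voice-memo.m4a"   # hypothetical example path
base=$(basename "$input")                     # voice-memo.m4a
out="${base%.*}.vtt"                          # voice-memo.vtt
echo "$out"
```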
If the instructions are too much for you, you could look at the macOS app called WhisperScript: internally it uses the code from faster-whisper, which whisper-ctranslate2 is also based on.
Thank you so much for the explanation. I did as per your instructions, but I get this error when I install whisper-ctranslate2.
You'll have to do brew install pkg-config, because that's what is missing for building PyAV, per the error message.
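For context: pip is building PyAV from source here, and that build locates the FFmpeg libraries through pkg-config. A sketch of the full fix; adding the ffmpeg formula is my assumption, since the PyAV build also needs the FFmpeg headers themselves:

```shell
# pkg-config lets the PyAV build find FFmpeg; the ffmpeg formula
# supplies the libraries and headers it looks for.
brew install pkg-config ffmpeg

# Retry the install once the build dependencies are in place.
pip3 install -U whisper-ctranslate2
```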
Are you on an M1 or M2 machine?
I am on an M2 Pro machine.
Do you think it installed properly?
Yes, it is installed correctly. However, you have outdated brew formulae, which means you have to do brew update && brew upgrade.
Ok, thanks. I did both, but I still get this error when I run the command "pip3 install -U whisper-ctranslate2".
also this:
Sorry, that error is coming from trying to build av-10.0.0. I have no idea how to help you with it, since I don't have access to ARM-based Macs.
From what I see across the projects that run the Whisper AI models for speech-to-text transcription, like whisper.cpp, faster-whisper, and whisper-ctranslate2, the code primarily runs on Intel-based Macs.
Your best chance of getting it running is to try the app called WhisperScript; it uses faster-whisper, the same code whisper-ctranslate2 is based on. The link to the app is in one of my replies above. WhisperScript runs natively on ARM-based Macs.
Ok, thank you. So WhisperScript is more or less the same thing as what I am trying to do here?
Yes, it's a GUI-based app. You no longer need to use the terminal.
@dazzng Weren't you running faster-whisper from my repo? This is practically the same thing; it won't work on your GPU.
Yeah, I was just trying alternatives.
I have an M1 and use whisper.cpp with the large and medium Core ML models. The first time large is run, it takes 15 1/2 hours before it starts, and the first time medium takes 2 to 3 hours, if I remember correctly. I didn't want to compile them with Xcode and get a developer account, so I just downloaded precompiled models, but it still takes that extra "first time run".
I get around 1.7x realtime speed with large and about 3.5x realtime speed with medium. I use -ng (no GPU) for large, since I only have 8 GB of RAM; it goes a tad slower, but at least I can still use the laptop.
st=$SECONDS && for f in *.opus ; do
    ffmpeg -hide_banner -i "$f" -f wav -ar 16000 -ac 1 - |
        nice ~/whisper/whisper.cpp-1.5.4/main -m ~/whisper/whisper.cpp-1.5.4/models/ggml-$setmodel.bin - \
            -ovtt -of "$f" -t 8 -l "$setlanguage" $translate -ml $maxlength -sow $setprintcolors $setng
    for v in *.vtt ; do
        sed -r -i .bak -e "s|\ballah\b|Allah|g" -e 's|\[BLANK_AUDIO\]||g' "$v"
    done &&
    for i in *opus.vtt ; do
        mv -i -- "$i" "$(printf '%s' "$i" | sed '1s/.opus.vtt/.vtt/')"
        [ ! -d vttsubs ] && mkdir vttsubs/
        mv *.vtt vttsubs/
    done &&
    rm *.bak
done && secs=$((SECONDS-st)) ; printf '\nwhisper.cpp took %02dh:%02dm:%02ds\n' $(($secs/3600)) $(($secs%3600/60)) $(($secs%60))
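The timing wrapper at the start and end of that command is reusable on its own; a minimal sketch with a fixed value (3725 seconds is 1 h 2 m 5 s):

```shell
# Format an elapsed-seconds count as hh:mm:ss, as the script above does.
secs=3725
printf 'whisper.cpp took %02dh:%02dm:%02ds\n' \
    $((secs / 3600)) $((secs % 3600 / 60)) $((secs % 60))
```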
Can you run this on macOS?
If so, where do I type this command: pip install -U whisper-ctranslate2?
Can I open a terminal inside the folder where the files are located and run it, or do I have to install Python?