Mozer / talk-llama-fast

Port of OpenAI's Whisper model in C/C++ with xtts and wav2lip
MIT License
708 stars 64 forks

Build for mac (Сборка под мак) #6

Open freQuensy23-coder opened 3 months ago

freQuensy23-coder commented 3 months ago

Modern Arm-based MacBooks are very powerful and can run LLM inference at acceptable speed without a GPU. Can you create a build for macOS, without CUDA, or is that not possible?

d0rc commented 3 months ago

The readme says, "First, you need to compile everything." Whisper.cpp compiles perfectly, so the author may have added something unknown that doesn't compile. I was in the middle of figuring this out when I saw the issue.

d0rc commented 3 months ago

I believe this issue is the same as #1

Mozer commented 3 months ago

What errors do you get?

  1. You need to find and link the libcurl library for mac or linux to compile it.
  2. The SDL library for linux/mac should also be linked; I think there was a guide about SDL in the original repo.
  3. In talk-llama.cpp you need to replace the GetTempPath call with a linux/mac equivalent for finding the temp directory. I think GetTempPath is Windows-only.

vovw commented 3 months ago

anyone trying to get this working on mac ??

sokoloveai commented 2 months ago

any updates for mac?

scalar27 commented 1 month ago

same question here. very interested in following this development.