dnhkng / GlaDOS

This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.
MIT License
2.89k stars 273 forks

MacOS: No such file or directory: PosixPath('/Users/user/repos/GlaDOS/submodules/llama.cpp/llama-server') #80

Open elo-siema opened 1 month ago

elo-siema commented 1 month ago
➜  GlaDOS git:(main) ✗ ./start_mac.sh         
2024-08-10 16:35:58.106 | SUCCESS  | glados.llama:__init__:85 - Command to start the server: [PosixPath('/Users/user/repos/GlaDOS/submodules/llama.cpp/llama-server'), '--model', PosixPath('/Users/user/repos/GlaDOS/models/Meta-Llama-3-8B-Instruct-Q6_K.gguf'), '--ctx-size', '8192', '--port', '8080', '--n-gpu-layers', '1000']
2024-08-10 16:35:58.106 | INFO     | glados.llama:start:116 - Starting the server by executing command self.command=[PosixPath('/Users/user/repos/GlaDOS/submodules/llama.cpp/llama-server'), '--model', PosixPath('/Users/user/repos/GlaDOS/models/Meta-Llama-3-8B-Instruct-Q6_K.gguf'), '--ctx-size', '8192', '--port', '8080', '--n-gpu-layers', '1000']
Traceback (most recent call last):
  File "/Users/user/repos/GlaDOS/glados.py", line 594, in <module>
    start()
  File "/Users/user/repos/GlaDOS/glados.py", line 573, in start
    llama_server.start()
  File "/Users/user/repos/GlaDOS/glados/llama.py", line 117, in start
    self.process = subprocess.Popen(
                   ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: PosixPath('/Users/user/repos/GlaDOS/submodules/llama.cpp/llama-server')

Fresh install with install_mac.sh, start_mac.sh

ari-kohan commented 3 weeks ago

I ran into this problem too. It happens because the install_mac.sh script runs make server > /dev/null instead of make llama-server > /dev/null. You can manually go into llama.cpp and run make llama-server, as sketched below.
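For reference, something along these lines should rebuild the missing binary by hand (the submodule path is taken from the traceback above; adjust it if your checkout differs):

```sh
# From the GlaDOS repo root; submodules/llama.cpp matches the path in the error above.
cd submodules/llama.cpp

# The install script's old `make server` target no longer exists upstream; build the renamed target instead.
make llama-server
```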

Looks like there's a PR under review to update the script: https://github.com/dnhkng/GlaDOS/pull/77

Ghostboo-124 commented 3 weeks ago

For the best macOS support, you should use my branch: https://github.com/Ghostboo-124/GlaDOS/tree/macOS

ari-kohan commented 3 weeks ago

@Ghostboo-124 I was wondering if you've ever run into this error when running start_mac.sh?

Traceback (most recent call last):
  File "/Users/marissakohan/Development/GlaDOS/glados.py", line 21, in <module>
    from glados import asr, tts, vad
  File "/Users/marissakohan/Development/GlaDOS/glados/asr.py", line 6, in <module>
    from . import whisper_cpp_wrapper
  File "/Users/marissakohan/Development/GlaDOS/glados/whisper_cpp_wrapper.py", line 862, in <module>
    _libs["whisper"] = load_library("whisper")
                       ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/marissakohan/Development/GlaDOS/glados/whisper_cpp_wrapper.py", line 548, in __call__
    raise ImportError("Could not load %s." % libname)
ImportError: Could not load whisper.

It's searching the current paths for the model names. I've tried both make libwhisper.so -j and plain make, but no luck so far; it isn't producing a whisper build, as far as I can tell.

ari-kohan commented 3 weeks ago

Sorry for the ping, I ended up solving this. The .dylib and the other files were built in whisper.cpp/src. I copied libwhisper.dylib up to whisper.cpp and it worked.
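For anyone else hitting this, the workaround was roughly the following (assuming whisper.cpp is checked out under submodules/ like llama.cpp is; adjust the paths to your layout):

```sh
# The build drops the dylib in whisper.cpp/src, but the wrapper looks for it one level up.
cp submodules/whisper.cpp/src/libwhisper.dylib submodules/whisper.cpp/
```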