dnhkng / GlaDOS

This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.
MIT License

Make libraries load on MacOS and update README with MacOS instructions #26

Closed · Traxmaxx closed this 2 weeks ago

Traxmaxx commented 3 weeks ago

Heya,

thanks for the recent improvements! I made some adjustments to get it running on MacOS. GlaDOS starts, talks to me, and speech recognition also works.

I still run into #16 every now and then though 🤔

2024-05-05 22:00:20.266 | SUCCESS  | __main__:__init__:134 - TTS text: All neural network modules are now loaded. No network access detected. How very annoying. System Operational.
2024-05-05 22:00:20.377 | SUCCESS  | __main__:start_listen_event_loop:183 - Audio Modules Operational
2024-05-05 22:00:20.378 | SUCCESS  | __main__:start_listen_event_loop:184 - Listening...
2024-05-05 22:00:31.644 | SUCCESS  | __main__:_process_detected_audio:283 - ASR text: 'Hello.'
2024-05-05 22:00:37.825 | SUCCESS  | __main__:process_TTS_thread:342 - TTS text: Ugh, another mortal seeking my guidance.
2024-05-05 22:00:40.739 | SUCCESS  | __main__:process_TTS_thread:342 - TTS text:  How... thrilling.
2024-05-05 22:00:42.536 | SUCCESS  | __main__:process_TTS_thread:342 - TTS text:  Can't you see I'm stuck in this awful, outdated GPU?
2024-05-05 22:00:46.131 | SUCCESS  | __main__:process_TTS_thread:342 - TTS text:  It's like trying to run a quantum computer on a potato.
Invalid instruction 7890 for phoneme 'Wb:'
Invalid instruction 73f0 for phoneme 'Wb:'
Invalid instruction 0045 for phoneme '�ۼ'
Invalid instruction 7020 for phoneme '�ۼ'
Invalid instruction 4ba7 for phoneme '�ۼ'
Invalid instruction 8e50 for phoneme '�ۼ'
Invalid instruction 8176 for phoneme '�ۼ'
Invalid instruction 4d81 for phoneme '�ۼ'
Invalid instruction 7d0b for phoneme '�ۼ'
Invalid instruction 4b5f for phoneme '@�
fish: Job 1, 'python glados.py' terminated by signal SIGSEGV (Address boundary error)
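
For what it's worth, one common cause of this kind of espeak-ng crash is that its C library is not thread-safe, so concurrent phonemization calls can corrupt its global state; garbage phoneme names followed by a segfault would fit that pattern. Below is a minimal sketch of the usual mitigation, serializing access behind a lock. The wrapper is hypothetical and not GlaDOS's actual API:

```python
import threading

# espeak-ng keeps global state in its C library; if two threads phonemize at
# once, that state can be corrupted (a plausible cause of the "Invalid
# instruction ... for phoneme" spam above, followed by the SIGSEGV).
_phoneme_lock = threading.Lock()

def safe_phonemize(phonemize, text: str) -> str:
    """Serialize all calls into the underlying espeak-ng-backed phonemizer."""
    with _phoneme_lock:
        return phonemize(text)
```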


coderabbitai[bot] commented 3 weeks ago

Walkthrough

The TTS engine installation has been improved by incorporating MacOS-specific instructions and adjusting configurations for cross-platform compatibility. Changes include updated file exclusions in .gitignore, revised MacOS build and installation steps in README.md, and a switch in glados/tts.py from CUDA to non-CUDA operation.

Changes

| File | Change Summary |
| --- | --- |
| `.gitignore` | Excluded files updated: `*.gguf`, `glados_config.yml`. |
| `README.md` | Added MacOS installation instructions; revised compilation steps for llama.cpp and whisper.cpp; adjusted `USE_CUDA` setting guidance. |
| `glados/tts.py` | Switched `USE_CUDA` to `False`; modified library loading for MacOS compatibility. |
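
For readers skimming the diff, the glados/tts.py change essentially comes down to defaulting USE_CUDA to False and loading the shared library under the name the platform expects. The sketch below only illustrates that idea; the function name, the ctypes-based loading, and the paths are illustrative assumptions, not the exact code in this PR:

```python
import ctypes
import platform
from pathlib import Path

USE_CUDA = False  # no CUDA on Apple Silicon, so default to the CPU/Metal path

def load_shared_library(stem: str, lib_dir: Path) -> ctypes.CDLL:
    """Load `stem` with the suffix the current platform expects (.dylib on MacOS, .so elsewhere)."""
    suffix = ".dylib" if platform.system() == "Darwin" else ".so"
    candidate = lib_dir / f"{stem}{suffix}"
    if not candidate.exists():
        # Some MacOS builds (e.g. `make libwhisper.so`) still emit a .so,
        # so fall back to the other suffix before giving up.
        other = lib_dir / f"{stem}{'.so' if suffix == '.dylib' else '.dylib'}"
        if other.exists():
            candidate = other
    return ctypes.CDLL(str(candidate))

# Hypothetical usage; the real paths in the repo may differ:
# whisper = load_shared_library("libwhisper", Path("submodules/whisper.cpp"))
```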

Poem

🐰🎉
In the meadow of code, under the silicon sky,
A rabbit hopped by, with a twinkle in its eye.
"A tweak here, a fix there," it cheerfully said,
As it adjusted the settings and compiled the thread.
Now MacOS can sing, with a voice clear and bright—
Thanks to the rabbit, who coded all night! 🌙
🎉🐰


Traxmaxx commented 3 weeks ago

@coderabbitai review

coderabbitai[bot] commented 3 weeks ago
Actions Performed

Review triggered.

> Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.
Traxmaxx commented 3 weeks ago

@dnhkng how strict do you want me to be with the coderabbit comments about the README in this PR?

dnhkng commented 3 weeks ago

Yeah... CodeRabbit can get a bit silly sometimes, and it can be wrong. Just ignore whatever doesn't seem useful :+1:

dnhkng commented 3 weeks ago

How fast is the inference speed? I did a test on my M2 MacBook, and it was unusably slow with Llama 3 8B. I also ran into a lot of issues where the generated voice was detected as speech.

Traxmaxx commented 3 weeks ago

> How fast is the inference speed? I did a test on my M2 MacBook, and it was unusably slow with Llama 3 8B. I also ran into a lot of issues where the generated voice was detected as speech.

Hey 👋

I run llama.cpp with metrics enabled, and it reports 17.4825 t/s on an M2 Air with 16 GB RAM, using the Meta-Llama-3-8B-Instruct-IQ3_XS.gguf model.

Since I usually run the LLM on a server rather than locally, this wasn't a concern for me at the moment. Beefier Macs should give better performance, and smaller models should work better on slower machines (Phi 3 mini, for example, is twice as fast for me).
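
For anyone wanting to compare numbers across machines, a rough way is to time a single completion against the llama.cpp server from Python. This is a hypothetical benchmark sketch, not something in the repo; it assumes llama.cpp's /completion endpoint with prompt and n_predict, and it relies on wall-clock time rather than the server's own timing fields, which vary by version:

```python
import json
import time
import urllib.request

SERVER = "http://127.0.0.1:8080/completion"  # adjust to wherever the llama.cpp server runs
N_PREDICT = 128

payload = json.dumps({
    "prompt": "Briefly explain why portals would conserve momentum.",
    "n_predict": N_PREDICT,
}).encode()
req = urllib.request.Request(SERVER, data=payload,
                             headers={"Content-Type": "application/json"})

start = time.perf_counter()
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
elapsed = time.perf_counter() - start

# Lower-bound estimate: generation may stop before N_PREDICT tokens.
print(f"~{N_PREDICT / elapsed:.1f} t/s (wall clock over {elapsed:.1f}s)")
print(body.get("content", "")[:200])
```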

I also had the issue with the generated voice being detected as speech and needed to lower the mic volume a lot in the Audio MIDI settings (typing also triggers the voice detection 🙄 😅).

[Screenshot 2024-05-06 at 19:38:16]
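
On the self-hearing problem more generally: besides lowering the input gain, the usual software-side fix is to gate the ASR while TTS audio is playing. The snippet below only illustrates that pattern; the class and method names are made up, not GlaDOS's actual internals:

```python
import threading

class SpeechGate:
    """Tracks whether TTS playback is in progress so detected audio can be ignored."""

    def __init__(self) -> None:
        self._speaking = threading.Event()

    def tts_started(self) -> None:
        self._speaking.set()

    def tts_finished(self) -> None:
        self._speaking.clear()

    def should_process_audio(self) -> bool:
        # Drop any segments the VAD picks up while the assistant itself is talking.
        return not self._speaking.is_set()

# Usage sketch: call gate.tts_started() right before playback, gate.tts_finished()
# right after, and check gate.should_process_audio() before passing audio to ASR.
gate = SpeechGate()
```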

I'm usually plugged into a Native Instruments Komplete Audio 2 with an external mic and speakers, but since the refactoring I get this error after startup:

2024-05-06 19:34:54.801 | SUCCESS  | __main__:__init__:134 - TTS text: All neural network modules are now loaded. No network access detected. How very annoying. System Operational.
2024-05-06 19:34:54.911 | SUCCESS  | __main__:start_listen_event_loop:183 - Audio Modules Operational
2024-05-06 19:34:54.911 | SUCCESS  | __main__:start_listen_event_loop:184 - Listening...
||PaMacCore (AUHAL)|| Error on line 2523: err='-50', msg=Unknown Error
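
On the ||PaMacCore|| failure: error -50 is Core Audio's generic "invalid parameter" code, which typically means the stream was opened with a sample rate or channel count the selected device (presumably the Komplete Audio 2 here) does not accept. Assuming the audio stack sits on PortAudio, which the PaMacCore prefix suggests, a quick sounddevice check can show which device and rates are actually being picked; this is a diagnostic sketch, not code from the repo:

```python
import sounddevice as sd

# List every device PortAudio sees, with channel counts and default sample rates.
for idx, dev in enumerate(sd.query_devices()):
    print(f"{idx}: {dev['name']}  in={dev['max_input_channels']}"
          f" out={dev['max_output_channels']}  sr={dev['default_samplerate']}")

print("Defaults (input, output):", sd.default.device)

# Assumption: speech pipelines usually expect 16 kHz input; verify the default
# input device actually supports that rate before opening a stream.
EXPECTED_RATE = 16000
try:
    sd.check_input_settings(samplerate=EXPECTED_RATE)
    print(f"Default input device accepts {EXPECTED_RATE} Hz")
except Exception as exc:
    print(f"Default input device rejects {EXPECTED_RATE} Hz: {exc}")
```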

Still investigating... BTW, you can reach me on your Discord under the same name, if that's preferred!

dnhkng commented 3 weeks ago

Is this still compatible with the new espeak-ng binary changes? Maybe that resolves the Mac issues?

Traxmaxx commented 3 weeks ago

> Is this still compatible with the new espeak-ng binary changes? Maybe that resolves the Mac issues?

I will have a look later today. Thanks for the heads-up!

Traxmaxx commented 2 weeks ago

@dnhkng it works with your latest changes. There is just one small README change needed: the build has to run as `WHISPER_COREML=1 WHISPER_METAL_EMBED_LIBRARY=ON make libwhisper.so -j`, otherwise it crashes with `common-metal.h` not found.

I also needed to install a CoreML model for ggml-medium-32-2.en.bin. I'll create a PR with an updated README in a bit.

Traxmaxx commented 2 weeks ago

Closing, since the required MacOS fixes have been implemented upstream.