dnhkng / GlaDOS

This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.
MIT License

Fix performance issue when using CUDA provider #51

Closed tn-17 closed 1 month ago

tn-17 commented 1 month ago

I encountered a major performance issue when running the TTS module on my RTX 3090 GPU instead of my i9-14900K CPU.

I used some simple code to run single executions of the TTS module for testing.

The elapsed time on the GPU was over 30 seconds, while the CPU took only 0.09–0.11 seconds.

Based on an article I found, the solution is to change the cudnn_conv_algo_search option from "EXHAUSTIVE" (the default) to "HEURISTIC" or "DEFAULT". I went with "HEURISTIC", as the article explains this leads to behavior similar to PyTorch's default; both "DEFAULT" and "HEURISTIC" produced the same results. (https://medium.com/neuml/debug-onnx-gpu-performance-c9290fe07459)

Now, my GPU elapsed time has decreased significantly, to 0.15–0.18 seconds. The GPU is expected to be slightly slower than the CPU in this case, since data is copied to the GPU at the start of inference and back to the CPU afterwards, which adds some overhead.

Another page I read explains that adding CPUExecutionProvider to the providers list (after CUDAExecutionProvider) grants onnxruntime explicit permission to fall back to the CPU for operations unsupported on CUDA (the source of all the yellow warning messages). This fallback happens by default anyway, but it is better to allow it explicitly.
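The two changes above can be sketched together as the providers list handed to onnxruntime's `InferenceSession` (a minimal sketch; the model path and the commented-out session line are illustrative, not the exact code in `_initialize_session`):

```python
# Provider list for onnxruntime. Each entry is either a provider name or a
# (name, options) tuple; the options here switch cuDNN's convolution
# algorithm search from the default "EXHAUSTIVE" to "HEURISTIC".
providers = [
    ("CUDAExecutionProvider", {"cudnn_conv_algo_search": "HEURISTIC"}),
    # Listing CPUExecutionProvider last explicitly permits fallback to the
    # CPU for any operators the CUDA provider cannot run.
    "CPUExecutionProvider",
]

# import onnxruntime as ort
# session = ort.InferenceSession("models/glados.onnx", providers=providers)
```

With this ordering, onnxruntime tries the CUDA provider first and only assigns unsupported nodes to the CPU, which is the behavior the yellow warnings describe.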

Code used for testing


import sounddevice as sd

from glados import tts  # run from the repo root so glados/tts.py is importable

model_path = "models/glados.onnx"
text = "All neural network modules are now loaded."
rate = 22050  # sample rate of the GLaDOS voice model, in Hz

_tts = tts.Synthesizer(model_path=model_path, use_cuda=True)

audio = _tts.generate_speech_audio(text)

sd.play(audio, samplerate=rate)
sd.wait()

print("done")

Inside glados/tts.py I set the start_time at the beginning of the generate_speech_audio function and printed the elapsed time before returning the concatenated audio data.

def generate_speech_audio(self, text: str) -> np.ndarray:
    start_time = time.time()  # requires `import time` at the top of tts.py
    phonemes = self._phonemizer(text)
    audio = []
    for sentence in phonemes:
        audio_chunk = self._say_phonemes(sentence)
        audio.append(audio_chunk)
    if audio:
        print("returning audio", time.time() - start_time)
        return np.concatenate(audio, axis=1).T
    return np.array([])
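As a side note, single-execution timings like this fold one-time costs (including the cuDNN algorithm search that caused the original 30-second runs) into the measurement. Repeating the call and looking at the spread separates warm-up from steady state. A minimal helper (a sketch; `time_call` is a hypothetical name, not part of this repo, and `time.perf_counter` is used instead of `time.time` for higher resolution):

```python
import time

def time_call(fn, *args, repeats=5):
    """Call fn(*args) `repeats` times and return the per-call durations in seconds.

    The first duration typically includes one-time initialization costs
    (e.g. CUDA context setup and the cuDNN convolution algorithm search).
    """
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()  # monotonic clock, preferred for benchmarking
        fn(*args)
        durations.append(time.perf_counter() - start)
    return durations
```

For example, `time_call(_tts.generate_speech_audio, text)` would show whether the slow run is only the first call or every call.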

I also added an instruction to the README for using Python 3.10 instead of Python 3.12.


coderabbitai[bot] commented 1 month ago


Walkthrough

The recent update centers on enhancing the Text-to-Speech (TTS) capabilities in glados/tts.py, focusing on optimizing CUDA execution when use_cuda is enabled. The README.md now guides users to install Python 3.12 from the Microsoft Store and adds instructions for using Python 3.10 with glados/llama.py.

Changes

| File | Change Summary |
| --- | --- |
| glados/tts.py | Modified provider settings in `_initialize_session` for improved CUDA execution |
| README.md | Updated Python installation instructions and added guidance for using Python 3.10 with `typing_extensions` in glados/llama.py |

🐇 In code we trust, our voices clear,
With CUDA's might, we persevere.
ONNX now swift, like a hare in flight,
Bringing speech to life, day and night.
🎶 Hopping through data, with joy and delight!


dnhkng commented 1 month ago

This didn't make a difference for me on my 2060, but it didn't make it worse either. So, if it's a net positive, I'm happy to accept the PR!

Thanks for finding and fixing the bug on 3090s!