Closed tn-17 closed 1 month ago
Commits

Files that changed from the base of the PR, between 43034b1cba1c8a5bf8c39fe2560b1ae419343915 and f81b9208bd3df03e4114b5c85a1dfd37d58441bb.
The recent update centers on enhancing the Text-to-Speech (TTS) capabilities in `glados/tts.py`, focusing on optimizing CUDA execution when `use_cuda` is enabled. The README.md also guides users to install Python 3.12 from the Microsoft Store and provides instructions for using Python 3.10 in `glados/llama.py`.
| File | Change Summary |
|---|---|
| `glados/tts.py` | Modified provider settings in `_initialize_session` for improved CUDA execution |
| `README.md` | Updated Python installation instructions and added guidance for using Python 3.10 with `typing_extensions` in `glados/llama.py` |
🐇 In code we trust, our voices clear,
With CUDA's might, we persevere.
ONNX now swift, like a hare in flight,
Bringing speech to life, day and night.
🎶 Hopping through data, with joy and delight!
This didn't make a difference for me on my 2060, but it didn't make it worse either. So, if it's a net positive, I'm happy to accept the PR!
Thanks for finding and fixing the bug on 3090s!
I encountered a huge performance issue when running the TTS module on my RTX 3090 GPU instead of my i9-14900K CPU.
I used some simple code to run single executions of the TTS module for testing.
The elapsed time on the GPU was over 30 seconds, while the CPU took only 0.09-0.11 seconds.

Based on an article I found, the solution is to change the `cudnn_conv_algo_search` option from "EXHAUSTIVE" (the default) to "HEURISTIC" or "DEFAULT". I went with "HEURISTIC" because the article explains that it leads to behavior similar to PyTorch's default; both "DEFAULT" and "HEURISTIC" produced the same results (https://medium.com/neuml/debug-onnx-gpu-performance-c9290fe07459).

Now my GPU elapsed time has decreased significantly, to 0.15-0.18 seconds. The GPU is expected to be slightly slower than the CPU in this case, since data is copied to the GPU at the start of inference and copied back to the CPU after inference, which adds some overhead.
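For reference, a minimal sketch of what that provider configuration can look like (the model path and commented-out session creation are illustrative; the actual change lives in `_initialize_session` in `glados/tts.py`):

```python
# Sketch: configure the CUDA execution provider to use heuristic cuDNN
# convolution algorithm search instead of the default exhaustive search.
providers = [
    ("CUDAExecutionProvider", {"cudnn_conv_algo_search": "HEURISTIC"}),
    "CPUExecutionProvider",  # explicit fallback for ops unsupported in CUDA
]

# The session would then be created with these providers, e.g.:
# import onnxruntime as ort
# session = ort.InferenceSession("glados.onnx", providers=providers)
```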
There was a page I read explaining that adding `CPUExecutionProvider` (appearing after `CUDAExecutionProvider`) to the providers list grants onnxruntime explicit permission to fall back to the CPU for operations unsupported in CUDA (all the yellow warning messages). This behavior occurs by default anyway, but it is better to allow it explicitly.

Code used for testing
Inside `glados/tts.py`, I set the start time at the beginning of the `generate_speech_audio` function and printed the elapsed time before returning the concatenated audio data.

I also added an instruction in the README for using Python 3.10 instead of Python 3.12.
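The timing described above can be sketched as a small helper (a hypothetical wrapper; in the actual test, the timing calls sat directly inside `generate_speech_audio`):

```python
import time

def time_call(fn, *args, **kwargs):
    """Run fn with the given arguments and return (result, seconds elapsed)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# usage sketch:
# audio, elapsed = time_call(tts.generate_speech_audio, "Hello there")
# print(f"elapsed: {elapsed:.2f}s")
```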
Summary by CodeRabbit

New Features

- Improved CUDA execution in the Text-to-Speech module when `use_cuda` is enabled.

Documentation