-
I have trained this model for over 23k steps, but when I run synthesis.py the result seems empty, even though the generated mag looks normal. Can anyone tell me how to solve this problem?
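Since the mag looks normal, one way to narrow this down is to invert the saved magnitude spectrogram directly with Griffin-Lim and listen. This is a generic sanity check, not this repo's actual pipeline; the STFT parameters below are placeholders and must match your preprocessing. If the Griffin-Lim audio sounds fine, the bug is in the waveform-generation step of synthesis.py rather than in the model.

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_fft=1024, hop=256, n_iter=50):
    """Reconstruct a waveform from a linear magnitude spectrogram.

    mag: array of shape (n_fft // 2 + 1, n_frames).
    """
    # start from random phase and iteratively refine it
    angles = np.exp(2j * np.pi * np.random.rand(*mag.shape))
    for _ in range(n_iter):
        # invert with the current phase estimate, then re-analyze
        _, wav = istft(mag * angles, nperseg=n_fft, noverlap=n_fft - hop)
        _, _, spec = stft(wav, nperseg=n_fft, noverlap=n_fft - hop)
        # match the frame count before reusing the new phase
        spec = spec[:, :mag.shape[1]]
        if spec.shape[1] < mag.shape[1]:
            spec = np.pad(spec, ((0, 0), (0, mag.shape[1] - spec.shape[1])))
        angles = np.exp(1j * np.angle(spec))
    _, wav = istft(mag * angles, nperseg=n_fft, noverlap=n_fft - hop)
    return wav
```

If this produces silence too, the saved mag is probably not what it appears to be (e.g., still in dB or log scale, or normalized differently than expected).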
-
I see the heatmaps in Figure 10 of this paper and in Figure 6 of [RMT](https://arxiv.org/pdf/2304.11062.pdf).
I have some questions:
1) Many LLMs are causal, but the attention heatmap is bidirectional attention in R…
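For reference, here is a minimal sketch of why a causal LLM's attention heatmap should be lower-triangular: each position may attend only to itself and earlier positions, so the mask sets the upper triangle to negative infinity before the softmax. The shapes and values here are made up for illustration.

```python
import numpy as np

def causal_attention_weights(scores):
    # scores: (seq_len, seq_len) raw query-key dot products
    n = scores.shape[0]
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)  # True above the diagonal
    scores = np.where(mask, -np.inf, scores)          # masked entries -> exp(.) = 0
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

w = causal_attention_weights(np.random.randn(6, 6))
# every row sums to 1 and the upper triangle is exactly zero
```

A heatmap of `w` would show the strictly lower-triangular pattern expected from a causal model; a bidirectional (encoder-style) model omits the mask and can fill the whole square.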
-
Thanks for sharing the repo. I have trained the model on LJ Speech using this repo and am performing inference using only GST. During inference I use an out-of-dataset file as the style file. The synthesiz…
-
Hi, I just compiled the newest master branch and tried to run `06-fused-attention.py` from the tutorials by uncommenting the last line
```
bench_flash_attention.run(save_path='.', print_data=True)
```
Ho…
-
I am currently reworking the flash-attention benchmarking script provided [here](https://github.com/Dao-AILab/flash-attention/blob/main/benchmarks/benchmark_flash_attention.py). (This script was used …
-
### Summary
Right now `rica` crashes when trying to make plots if no data is found, e.g., with carpet plots.
### Next Steps
Handle these exceptions and show a message on screen saying…
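A hedged sketch of what the fix could look like: wrap each plotting step so that a missing-data condition produces an on-screen message instead of a crash. `NoDataError` and `make_carpet_plot` are hypothetical names for illustration, not `rica`'s actual API.

```python
class NoDataError(Exception):
    """Hypothetical error raised when a plot has no data to draw."""

def safe_plot(plot_fn, *args, **kwargs):
    """Run a plotting function; on missing data, report instead of crashing."""
    try:
        return plot_fn(*args, **kwargs)
    except NoDataError as err:
        print(f"Skipping plot: {err}")  # stand-in for an on-screen message
        return None

def make_carpet_plot(data):
    # hypothetical plot function: fails when there is nothing to draw
    if not data:
        raise NoDataError("no data found for carpet plot")
    return f"carpet plot of {len(data)} rows"

safe_plot(make_carpet_plot, [])      # prints a message instead of crashing
safe_plot(make_carpet_plot, [1, 2])  # returns the plot as usual
```

The key design choice is catching only the narrow missing-data condition, so genuine bugs in the plotting code still surface as crashes.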
-
I tried to apply autoreject to a set of MEG data after epoching. One of my channels is very noisy (MEG 017), so I paid special attention to how autoreject handles this channel. While the majority of e…
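For readers unfamiliar with the mechanism in question: autoreject learns per-channel peak-to-peak thresholds, and a channel that is noisy in most epochs (like MEG 017 here) gets flagged for interpolation rather than causing every epoch to be dropped. The snippet below is a heavily simplified, hypothetical illustration of that per-channel flagging idea, not autoreject's actual implementation.

```python
import numpy as np

def mark_bad_segments(epochs, thresholds):
    # epochs: (n_epochs, n_channels, n_times); thresholds: (n_channels,)
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)  # peak-to-peak per segment
    return ptp > thresholds[None, :]                 # (n_epochs, n_channels) bool

rng = np.random.default_rng(0)
epochs = rng.normal(size=(10, 4, 100))
epochs[:, 2] *= 50  # channel 2 is very noisy in every epoch
bad = mark_bad_segments(epochs, thresholds=np.full(4, 8.0))
# channel 2 is flagged in every epoch; clean channels are rarely flagged
```

In the real library, a channel flagged in many epochs becomes a candidate for interpolation, while an epoch with too many flagged channels is rejected outright.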
-
Thanks for this promising application. Sadly I can't use it, as it simply doesn't open; it crashes on startup. When running it from the CLI, I get:
```
$ flatpak run com.github.alexhuntley.Plots
(process:2)…
-
https://github.com/hirofumi0810/neural_sp/blob/78fa843e7f9b27b93a57099104db49d481ff95bb/neural_sp/models/seq2seq/decoders/transformer.py#L190-L194
Hey, I noticed there will be `mocha_first_layer - 1` …
-
We should probably restructure our test cases to cover more of our codebase.
What I have in mind is to write a test case for each `drawXXX` function that covers that feature as completely a…
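The proposal above could be sketched as a table-driven test that runs one case per `drawXXX` function. `drawLine` and `drawCircle` are hypothetical stand-ins for the real drawing API; the point is the structure, where adding coverage for a new feature means adding one row to the case table.

```python
import unittest

def drawLine(canvas, a, b):          # hypothetical stand-in implementation
    canvas.append(("line", a, b))

def drawCircle(canvas, center, r):   # hypothetical stand-in implementation
    canvas.append(("circle", center, r))

class TestDrawFunctions(unittest.TestCase):
    # one (function, arguments, expected canvas entry) row per drawXXX feature
    CASES = [
        (drawLine, ((0, 0), (1, 1)), ("line", (0, 0), (1, 1))),
        (drawCircle, ((0, 0), 2), ("circle", (0, 0), 2)),
    ]

    def test_each_draw_function(self):
        for fn, args, expected in self.CASES:
            with self.subTest(fn=fn.__name__):
                canvas = []
                fn(canvas, *args)
                self.assertEqual(canvas, [expected])
```

Using `subTest` keeps the cases independent: a failure in one `drawXXX` function is reported individually without aborting the remaining cases.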