-
Hi, first of all, thanks for sharing your improved code!
While testing your Jupyter notebook on Colab, I ran into the following problems:
First, I would like to know if you are certain that …
-
I did a very rough comparison of https://github.com/guillaumekln/faster-whisper and whisper.cpp, and it turns out faster-whisper is faster than whisper.cpp on CPU.
For example, it takes faster-whisper 14 second…
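For anyone who wants to reproduce a comparison like this, here is a minimal sketch of a CPU timing run with faster-whisper. The model size ("small"), the audio file name, and the int8 compute type are assumptions for illustration, not necessarily the settings used in the comparison above:

```python
import time
from faster_whisper import WhisperModel

# Assumed settings; swap in whatever model/audio you are benchmarking.
model = WhisperModel("small", device="cpu", compute_type="int8")

start = time.perf_counter()
segments, info = model.transcribe("sample.wav")
# transcribe() decodes lazily; consume the generator to do the actual work.
text = " ".join(segment.text for segment in segments)
elapsed = time.perf_counter() - start

print(f"Detected language: {info.language}")
print(f"Transcribed in {elapsed:.1f}s")
```

To make the numbers comparable with whisper.cpp, run both tools on the same audio, the same model size, and the same number of CPU threads.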
-
Regarding the major contribution of the paper, the MQS loss: why does optimizing such an objective potentially encourage diverse generation?
That is, why would maximizing question similarity lead to more diverse results, …
-
Thank you for your hard work, Ecogen team.
I've been reading your paper and looking at your repo, and I think it's very cool!
However, I'm still having a little trouble understanding how the…
-
th rnn2rnn.lua
using CUDA on GPU 0...
one-time setup: preprocessing input text file book_corpus_small/input.txt...
creating vocabulary mapping...
putting data into tensor...
saving book_corpu…
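For context, the "creating vocabulary mapping" and "putting data into tensor" steps in a char-rnn-style preprocessor typically amount to the following. This is a rough Python re-creation for illustration only (the actual script, rnn2rnn.lua, is Lua/Torch, and the output file name here is a stand-in):

```python
import numpy as np

# Read the raw corpus; path taken from the log above.
with open("book_corpus_small/input.txt", "r", encoding="utf-8") as f:
    text = f.read()

# "creating vocabulary mapping...": one integer id per distinct character.
vocab = {ch: i for i, ch in enumerate(sorted(set(text)))}

# "putting data into tensor...": encode the whole corpus as an id sequence.
data = np.array([vocab[ch] for ch in text], dtype=np.int64)

# Stand-in for the saved .t7 artifacts the Lua script produces.
np.save("book_corpus_small/data.npy", data)
print(f"vocab size: {len(vocab)}, corpus length: {len(data)}")
```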
-
Greetings to Vigogne users one and all,
I'm very curious about the differences between the two finetuning approaches, with or without the seq2seq transformation? (train/train_sft.…
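In case it helps frame the question: a common difference between plain causal finetuning and an SFT-style setup is whether the loss covers the whole sequence or only the response tokens. Here is a generic sketch of response-only loss masking using the Hugging Face -100 ignore index; the tokenizer name and prompt template are placeholders, and this is not necessarily what the two training scripts in this repo actually do:

```python
from transformers import AutoTokenizer

# Placeholder tokenizer for illustration; Vigogne uses a LLaMA-family tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = "### Instruction:\nTranslate to English: bonjour\n\n### Response:\n"
response = "hello"

prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
response_ids = tokenizer(response, add_special_tokens=False)["input_ids"]

input_ids = prompt_ids + response_ids

# Plain causal finetuning: loss on every token of the sequence.
labels_full = list(input_ids)

# SFT-style masking: -100 makes the loss ignore the prompt tokens,
# so gradients come only from the response.
labels_sft = [-100] * len(prompt_ids) + list(response_ids)
```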
-
Thanks for your code. The code uses SWA (stochastic weight averaging), which is not mentioned in your paper, and I find there is a big performance drop without SWA; the performance is then the same as the baseline, i.e. training with BCE …
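For reference, this is roughly how SWA is wired into a PyTorch training loop via torch.optim.swa_utils. The model, data, learning rates, and the epoch at which averaging starts are all placeholders, not the values used in this repo:

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Placeholder model and data; the repo's actual training loop will differ.
model = torch.nn.Linear(10, 1)
loader = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(20)]
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()

swa_model = AveragedModel(model)           # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.005)
swa_start = 5                              # epoch at which averaging begins (assumed)

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)  # fold current weights into the average
        swa_scheduler.step()

update_bn(loader, swa_model)  # recompute BatchNorm statistics for the averaged model
```

Evaluating swa_model versus model at the end would make the size of the SWA contribution explicit.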
-
Hi,
I want to finetune the model on my own dataset. How should I prepare the stage1 and stage2 training data, and what is the difference between them? The description of caption_stage1_train.tsv and caption_stage2_t…
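If it helps to get started, here is a generic sketch of writing image-caption pairs into a TSV with base64-encoded images. The column order and file names here are pure assumptions for illustration; the real schema of the stage1/stage2 files must come from the repo's dataset documentation:

```python
import base64
import csv

# Hypothetical (uniq_id, image_path, caption) records; replace with your data.
records = [
    ("0001", "images/dog.jpg", "a dog running on the beach"),
]

with open("my_caption_train.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for uniq_id, image_path, caption in records:
        with open(image_path, "rb") as img:
            img_b64 = base64.b64encode(img.read()).decode("utf-8")
        # Assumed column order for illustration only.
        writer.writerow([uniq_id, caption, img_b64])
```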
-
## Description
I am currently working on a project built in Unity where I am modulating voices (e.g., source speech → voice modulator → target speech (elf)). I currently have an E2E flow with …