-
I am building an LSTM autoencoder in R keras with inputs of differing timestep lengths. Since ragged tensors are not implemented yet, I opted for masking the shorter-length inputs. The problem I'm facing is in the bottl…
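For context, the padding-plus-mask approach being described can be sketched framework-agnostically as follows (names are illustrative; in keras this role is played by a `Masking` layer with a chosen `mask_value`, which downstream layers use to skip padded timesteps):

```python
import numpy as np

def pad_and_mask(sequences, mask_value=0.0):
    """Pad variable-length sequences to a common length and return a
    boolean mask marking the real (non-padded) timesteps."""
    max_len = max(len(s) for s in sequences)
    batch = np.full((len(sequences), max_len), mask_value, dtype=float)
    mask = np.zeros((len(sequences), max_len), dtype=bool)
    for i, s in enumerate(sequences):
        batch[i, :len(s)] = s
        mask[i, :len(s)] = True
    return batch, mask

def masked_mean(batch, mask):
    """Mean over timesteps that ignores padded positions -- the kind of
    reduction a mask-aware pooling or loss computation performs."""
    return (batch * mask).sum(axis=1) / mask.sum(axis=1)

seqs = [[1.0, 2.0, 3.0], [4.0, 5.0]]
batch, mask = pad_and_mask(seqs)
print(masked_mean(batch, mask))  # [2.  4.5]
```

The key point is that the mask, not the padding value itself, is what keeps the padded timesteps from contaminating the statistics.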
-
After decoding with src/cudadecoderbin/batched-wav-nnet3-cuda2, I rescored with a larger n-gram LM, which showed some improvement.
Further rescoring with an RNNLM made the results worse.
In a similar issue in https…
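For reference, a sketch of the two rescoring passes being described, using the standard Kaldi scripts (all directory names are illustrative placeholders; check the option defaults in your Kaldi version):

```shell
# 1) Lattice rescoring with the larger n-gram LM (const-arpa format).
steps/lmrescore_const_arpa.sh \
  data/lang_test_tgsmall data/lang_test_fglarge \
  data/test exp/chain/tdnn/decode_test exp/chain/tdnn/decode_test_fglarge

# 2) Pruned RNNLM lattice rescoring on top of the n-gram-rescored lattices.
#    If RNNLM rescoring hurts, a smaller interpolation --weight is one
#    knob worth trying before anything else.
rnnlm/lmrescore_pruned.sh --weight 0.45 --max-ngram-order 4 \
  data/lang_test_fglarge exp/rnnlm \
  data/test exp/chain/tdnn/decode_test_fglarge \
  exp/chain/tdnn/decode_test_fglarge_rnnlm
```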
-
I'm running the TIMIT LSTM recipe on custom features, and I got the following error in my log.log file:
![image](https://user-images.githubusercontent.com/24235462/103419833-ee74d380-4b51-11eb-93c3-3…
-
I tried LibriSpeech with 4 GPUs and always hit CUDA OOM, even with a very small model and a very small batch size.
Is there a recipe I could follow?
Thanks
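One technique worth knowing about when even tiny batches run out of memory is gradient accumulation: compute gradients over micro-batches and average them, which for a mean loss reproduces the full-batch gradient exactly. A framework-agnostic sketch of that identity (linear model with MSE loss; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))   # 32 examples, 4 features
y = rng.normal(size=32)
w = rng.normal(size=4)

def grad_mse(Xb, yb, w):
    """Gradient of the mean squared error (1/n)||Xb w - yb||^2 w.r.t. w."""
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

# Full-batch gradient (what we'd compute if the batch fit in memory).
full = grad_mse(X, y, w)

# Accumulate over 4 micro-batches of 8 examples, then average.
micro = 8
acc = np.zeros_like(w)
for i in range(0, len(X), micro):
    acc += grad_mse(X[i:i+micro], y[i:i+micro], w)
acc /= len(X) // micro

print(np.allclose(full, acc))  # True
```

In a real training loop this means running several small forward/backward passes before each optimizer step, trading memory for wall-clock time.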
-
Hi,
How does one decode with the models trained using train_transformer_ce.py? Is it possible to provide a decoding recipe or point to resources that can be used to build the recipe?
-
The log is as follows:
CuDNN: True
GPU available: True
Status: train
Seg: True
Train file: data/ResumeNER/train.char.bmes
Dev file: data/ResumeNER/dev.char.bmes
Test file: data/ResumeNER/test.char.bme…
-
Hi, I ran the libr/run.sh demo, but the loss stays very large and the model doesn't converge. Can you help me? Is it possible to release the model configs or trained models?
My env: pytorch 1.5, cuda 10.1, py…
-
I have an idea that I'd like help implementing in nnet3. I'm not sure who to ask to work on this.
It's related to dropout, but it's not normal dropout. In normal dropout we'd normally have a schedu…
-
Hello! I have been following your work since last year, all the way to the NAACL19 lattice word segmenter. I have been studying your excellent mathematical theory and code, and I really admire your work! But one question has been bothering me: in the "南京市长江大桥" (Nanjing Yangtze River Bridge) example, the word "市长" ("mayor") also enters the Lattice-LSTM for training along with the character "长". How does the Lattice-LSTM handle spurious information like this? Also, in the paper, the matching against each Xc…