-
- [wikitext-2](https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip) and [wikitext-103](https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip) are multiline texts in the for…
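A minimal sketch of what "multiline texts" means here: the extracted WikiText files are plain text where each article spans many lines, with ` = Title = ` header lines separating articles and ` = = Section = = ` lines marking sub-sections. The sample string below is illustrative, not taken verbatim from the corpus.

```python
sample = """ = Valkyria Chronicles III =

 Senjou no Valkyria 3 is a tactical role @-@ playing video game .

 = = Gameplay = =

 The player takes control of a military unit .

 = Tower Building =

 The Tower Building of the Little Rock Arsenal was built in 1840 .
"""

def is_top_header(line):
    """Top-level article headers look like ' = Title = ' (single '=' each side)."""
    s = line.strip()
    return s.startswith("= ") and s.endswith(" =") and not s.startswith("= =")

def split_articles(text):
    """Group a WikiText-style multiline stream into per-article chunks."""
    articles, current = [], []
    for line in text.splitlines():
        if is_top_header(line):
            if current:
                articles.append("\n".join(current))
            current = [line]
        elif line.strip():  # sub-headers and body lines stay inside the article
            current.append(line)
    if current:
        articles.append("\n".join(current))
    return articles

articles = split_articles(sample)
print(len(articles))  # -> 2
```

Splitting on top-level headers like this is one way to recover article boundaries before feeding the text to a sentence or line tokenizer.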
lovit updated 4 years ago
-
Thanks for publishing the code and basic training instructions!
## Environment
**Datasets:** (9,063 speakers)
- LibriTTS (train-other-500)
- VoxCeleb1
- VoxCeleb2
- OpenSLR (42-44, 61-66, 69…
-
I'd like to ask something about the 'ja_ginza' model provided by this repo.
It currently contains a pretrained NER model, but I couldn't find any documentation mentioning how, and/or on what documents, it …
-
We would like to design a recipe that combines
1. fisher+swbd (2100 hours)
2. tedlium (120 hr or 200 hr)
3. librispeech (1000 hr)
4. AMI (100 hr \* 8 distant microphone + 100 hr close talk microphone…
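The hour counts above can be turned into per-corpus sampling proportions for a combined training recipe. This is a hypothetical sketch, not part of any existing recipe: the corpus names are placeholders, the TED-LIUM figure assumes the 200 hr release, and the AMI total assumes the 8 distant-microphone copies plus the close-talk channel are all kept.

```python
# Hypothetical sampling weights proportional to audio hours for the
# combined corpus described above.
hours = {
    "fisher_swbd": 2100,
    "tedlium": 200,        # assuming the 200 hr release
    "librispeech": 1000,
    "ami": 100 * 8 + 100,  # 8 distant mics + 1 close-talk channel
}

total = sum(hours.values())
weights = {name: h / total for name, h in hours.items()}

for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {w:.2%}")
```

With these assumptions, fisher+swbd alone contributes half of the 4200 total hours, which may argue for down-weighting it or up-sampling the smaller corpora.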
-
Not an issue, but maybe interesting for you:
I've set up a build for building binary Conan packages (GCC 6, 7, 8 and Clang 7, 8, 9, shared, for Linux) of the open. Here's the repo of the [recipe](https://…
-
TL;DR: We don't need service endpoints in the DID Document... it's an overly-complicated anti-pattern that has a lot of downsides when we already have patterns that are implemented today that would wo…
-
UD guidelines currently do not specify how to mark document and paragraph boundaries, and for many treebanks such information is not available (original text gone, sentences shuffled, etc.). But where it…
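One place such boundaries could live is in CoNLL-U sentence-level comments; the format already allows `# newdoc` and `# newpar` comment lines before a sentence. A minimal sketch of counting those markers in a CoNLL-U fragment (the sentences below are made up for illustration):

```python
# Illustrative CoNLL-U fragment using '# newdoc id = ...' and '# newpar'
# comment lines to mark document and paragraph boundaries.
conllu = """# newdoc id = doc1
# newpar
# sent_id = doc1-s1
# text = Hello there.
1\tHello\thello\tINTJ\t_\t_\t0\troot\t_\t_
2\tthere\tthere\tADV\t_\t_\t1\tadvmod\t_\t_
3\t.\t.\tPUNCT\t_\t_\t1\tpunct\t_\t_

# newpar
# sent_id = doc1-s2
# text = New paragraph.
1\tNew\tnew\tADJ\t_\t_\t2\tamod\t_\t_
2\tparagraph\tparagraph\tNOUN\t_\t_\t0\troot\t_\t_
3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_
"""

def count_boundaries(text):
    """Count document and paragraph boundary markers in a CoNLL-U stream."""
    lines = text.splitlines()
    docs = sum(1 for l in lines if l.startswith("# newdoc"))
    pars = sum(1 for l in lines if l.startswith("# newpar"))
    return docs, pars

print(count_boundaries(conllu))  # -> (1, 2)
```

Since these are comment lines, tools that ignore comments can skip them, while tools that care about document structure can recover it losslessly.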
-
Dear all,
Is there any specific preparation I can perform before starting to train and translate an Arabic corpus?
Thanks in advance.
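One preparation step that often helps with Arabic MT is orthographic normalization before tokenization. The sketch below shows a few common, deliberately lossy conventions (stripping diacritics, collapsing alef variants); it is a generic example, not an official recipe from this project, and which normalizations are appropriate depends on the corpus and evaluation setup.

```python
import re

# Common (lossy) Arabic normalizations used before MT training:
# strip diacritics (tashkeel) and collapse letter variants.
DIACRITICS = re.compile("[\u064B-\u0652\u0670]")  # tanween..sukun, dagger alef

def normalize_arabic(text):
    text = DIACRITICS.sub("", text)                         # remove diacritics
    text = re.sub("[\u0622\u0623\u0625]", "\u0627", text)   # alef variants -> bare alef
    text = text.replace("\u0649", "\u064A")                 # alef maqsura -> yaa
    text = text.replace("\u0629", "\u0647")                 # taa marbuta -> haa
    return text

print(normalize_arabic("\u0623\u064E\u0647\u0652\u0644\u0627\u064B"))
```

Normalizing both the training corpus and the input text the same way keeps the vocabulary smaller and more consistent.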
-
Anyone who took a dependency on v1-beta should have long ago upgraded to 1.0.0 of the spec. It's been more than 10 years. Converting any existing v1-beta style records to v1 conformance should be tr…
-
Hi guys, is the transformer config in 2019-lm-cross-sentence using a memory implementation somewhat similar to Transformer-XL, or something different? And what is the reason for not using positional encoding?
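For reference, the Transformer-XL-style recurrence the question alludes to works roughly as sketched below: the previous segment's hidden states are cached (with gradients stopped) and prepended to the keys/values the next segment attends over, and relative positional encodings replace absolute ones so cached states remain position-consistent. This is a toy illustration of that general mechanism, not the actual 2019-lm-cross-sentence config.

```python
import random

# Toy sketch of Transformer-XL-style segment-level memory.
random.seed(0)
d_model, seg_len = 4, 3

def new_segment():
    # Stand-in for one segment of hidden states: seg_len vectors of size d_model.
    return [[random.random() for _ in range(d_model)] for _ in range(seg_len)]

mem = []  # no memory before the first segment
for step in range(2):
    h = new_segment()             # current segment's hidden states
    kv = mem + h                  # attention keys/values: cached memory ++ current
    # ... self-attention over `kv` (with relative positions) would go here ...
    mem = [row[:] for row in h]   # detached copy cached for the next segment

print(len(mem), len(kv))  # -> 3 6
```

With absolute positional encodings, position 0 of the cached segment and position 0 of the current segment would collide, which is one reason relative encodings are used in this kind of memory scheme.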