-
Hey there,
A good example to have in the docs is one that replicates the original work introducing the concept. I often do this in DynamicalSystems.jl and I wanted to contribute this here as well. This…
-
[paper](https://arxiv.org/pdf/2103.14574.pdf):
Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
![image](https://user-images.githubusercontent.com…
-
https://github.com/facebookresearch/fairscale
> FairScale is a PyTorch extension library for high performance and large scale training.
It looks like the union of several modules.
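For context, a minimal sketch of one FairScale piece, optimizer state sharding via `OSS` (this assumes a `torch.distributed` process group is already initialized; model and hyperparameters are illustrative):

```python
import torch
from fairscale.optim.oss import OSS

# Assumes torch.distributed.init_process_group(...) has already been called;
# OSS shards optimizer state across the ranks of that process group.
model = torch.nn.Linear(512, 512)
optimizer = OSS(model.parameters(), optim=torch.optim.SGD, lr=0.01)

loss = model(torch.randn(8, 512)).sum()
loss.backward()
optimizer.step()  # each rank updates only its shard of the parameters
```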
-
Hello,
I've been going through the examples and trying to adapt TFTModel and a few of the other neural network models to my own dataset, which has multiple time series with past and future covariat…
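For reference, a minimal sketch of the fitting call for that setup in darts (the random data and hyperparameter values below are illustrative assumptions, not from the post):

```python
import numpy as np
import pandas as pd
from darts import TimeSeries
from darts.models import TFTModel

# Three toy series with past and future covariates; all values are random
# placeholders. Future covariates must extend beyond the forecast horizon.
idx = pd.date_range("2020-01-01", periods=100, freq="D")
ext = pd.date_range("2020-01-01", periods=130, freq="D")
series      = [TimeSeries.from_times_and_values(idx, np.random.rand(100)) for _ in range(3)]
past_covs   = [TimeSeries.from_times_and_values(idx, np.random.rand(100, 2)) for _ in range(3)]
future_covs = [TimeSeries.from_times_and_values(ext, np.random.rand(130)) for _ in range(3)]

model = TFTModel(input_chunk_length=24, output_chunk_length=12, n_epochs=1)
model.fit(series, past_covariates=past_covs, future_covariates=future_covs)
```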
-
I looked at the difference between autoregressive and non-autoregressive transformer architectures, but I am wondering: is the attention layer in TensorFlow actually autoregressive, or do I…
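As a point of reference, attention by itself is not autoregressive; it only becomes so when a causal mask hides future positions. A minimal sketch of that mask (PyTorch used here for brevity; the same idea applies in TensorFlow):

```python
import torch

seq_len = 4
scores = torch.randn(seq_len, seq_len)  # raw query-key attention logits

# Causal (autoregressive) masking: position i may not attend to j > i.
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
attn = scores.masked_fill(causal_mask, float("-inf")).softmax(dim=-1)

# Without the mask, attention is bidirectional / non-autoregressive:
attn_bidir = scores.softmax(dim=-1)
```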
-
The autoregressive nature makes inference difficult to parallelize and leads to high decoding latency. Have you noticed new research in the non-autoregressive decoding area? I found this: https://arxiv.…
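To make the latency point concrete, a rough sketch contrasting the two decoding styles (`step_fn` and `parallel_fn` are hypothetical model calls, not from any particular library):

```python
import torch

def autoregressive_decode(step_fn, bos_id, max_len):
    # One forward pass per emitted token: the loop is inherently
    # sequential, so decoding latency grows linearly with length.
    tokens = [bos_id]
    for _ in range(max_len):
        logits = step_fn(torch.tensor([tokens]))  # hypothetical model call
        tokens.append(logits[0, -1].argmax().item())
    return tokens

def non_autoregressive_decode(parallel_fn, max_len):
    # A single forward pass predicts all positions at once,
    # which is what makes NAR decoding easy to parallelize.
    logits = parallel_fn(max_len)  # hypothetical model call
    return logits.argmax(dim=-1).tolist()
```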
-
# 🐛 Bug
I get an error when the sequence lengths to the encoder and decoder are different, e.g. in the code snippet below:
## Command
```py
EMB = 384
SEQ_ENC = 128
SEQ_DEC = 64
BATCH = 16…
```
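For comparison, cross-attention itself supports mismatched encoder/decoder lengths: queries come from the decoder side and keys/values from the encoder side. A minimal sketch with plain `torch.nn.Transformer`, reusing the snippet's constants:

```python
import torch

EMB, SEQ_ENC, SEQ_DEC, BATCH = 384, 128, 64, 16

# Vanilla cross-attention accepts different source and target lengths.
transformer = torch.nn.Transformer(d_model=EMB, nhead=8, batch_first=True)
src = torch.randn(BATCH, SEQ_ENC, EMB)  # encoder input
tgt = torch.randn(BATCH, SEQ_DEC, EMB)  # decoder input, shorter sequence
print(transformer(src, tgt).shape)      # torch.Size([16, 64, 384])
```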
-
Hi! Thank you for your wonderful work and for releasing the code publicly! Could you please tell whether there is an easy way (from the implementation perspective) to generate videos longer than 16 fr…
-
Hello,
I'm trying to train the conformer_ctc_medium model from scratch. I use the config file from the pretrained model stt_en_conformer_ctc_medium_ls and only change the manifest_filepath and batch_si…
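For what it's worth, NeMo configs are OmegaConf YAML, and the two fields mentioned live under `model.train_ds`; a sketch with a hypothetical config path and values:

```python
from omegaconf import OmegaConf

# Hypothetical paths for illustration; only the two fields the post
# mentions are changed, everything else keeps the config defaults.
cfg = OmegaConf.load("conf/conformer_ctc_medium.yaml")
cfg.model.train_ds.manifest_filepath = "/data/train_manifest.json"
cfg.model.train_ds.batch_size = 16
```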
-
When using `x` and `mask` with a batch size larger than 1, the following error arises:
```python
import torch
from h_transformer_1d import HTransformer1D
model = HTransformer1D(
    num_toke…
```
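A self-contained repro sketch in the same spirit (the constructor values below are assumptions based on the library's README, since the original snippet is truncated):

```python
import torch
from h_transformer_1d import HTransformer1D

# Assumed illustrative hyperparameters; the point is only that the
# inputs carry batch size 2 rather than 1.
model = HTransformer1D(
    num_tokens = 256,
    dim = 512,
    depth = 2,
    max_seq_len = 1024,
    heads = 8,
    block_size = 64
)

x = torch.randint(0, 256, (2, 1024))  # batch size 2
mask = torch.ones(2, 1024).bool()     # boolean mask, same batch size
logits = model(x, mask = mask)        # this call triggers the reported error
```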