-
Transformers might be a good alternative to LSTMs for time series.
Mamba as well (not available in PyTorch yet: https://github.com/pytorch/pytorch/issues/120189)
Other ideas: LSTM with encod…
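A minimal sketch of the Transformer option for a univariate forecasting setup, using PyTorch's built-in `nn.TransformerEncoder`; the hyperparameters and shapes below are placeholders I'm assuming, not a recommendation:

```python
import torch
import torch.nn as nn

class TSTransformer(nn.Module):
    """Encoder-only Transformer for sequence-to-one forecasting (sketch).
    Positional encoding is omitted for brevity."""
    def __init__(self, n_features=1, d_model=64, nhead=4, num_layers=2, horizon=1):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)   # project raw features to model width
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, horizon)             # predict `horizon` future steps

    def forward(self, x):               # x: [batch, seq_len, n_features]
        h = self.encoder(self.input_proj(x))
        return self.head(h[:, -1])      # use the last time step's representation

# usage: batch of 8 windows, 96 past steps, 1 feature -> [8, 1] predictions
y_hat = TSTransformer()(torch.randn(8, 96, 1))
```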
-
1. Can this model run in real time? I mean in a streaming fashion, i.e., processing frame by frame.
2. Can this model be used as the backbone network for target-speaker speech extraction? Has the author done any research in this direction, i.e., TSE (Target Speaker Extraction)? Is the look2hear you mentioned used for TSE?
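Not the author, but on the streaming question: the usual pattern with a recurrent model is to carry the hidden state across chunks and feed one frame (or a small block of frames) at a time. A minimal sketch with a generic LSTM; the model, frame size, and shapes are my assumptions, not this project's API:

```python
import torch
import torch.nn as nn

# hypothetical stand-in for the extraction/separation model
model = nn.LSTM(input_size=257, hidden_size=256, batch_first=True)
model.eval()

state = None                              # hidden state carried across frames
with torch.no_grad():
    for _ in range(100):                  # pretend stream of 100 frames
        frame = torch.randn(1, 1, 257)    # [batch=1, time=1 frame, features]
        out, state = model(frame, state)  # process one frame, keep the state
        # `out` is the per-frame output; latency is one frame plus compute time
```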
-
Something I've been thinking about regarding expansion of the library: a decent amount of the work we've been using involves applying inductive biases and teacher-prompted training to the model architecture…
-
I get an error when I run `python train.py`:
Number of trainable variables: 241
Number of parameters (elements): 34191804
Storage space needed for all trainable variables: 130.43MB
[0427 14:42:35 @bas…
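For what it's worth, the sizes in that log are self-consistent, assuming float32 parameters:

```python
params = 34191804
print(params * 4 / 2**20)   # 4 bytes per float32 parameter -> ~130.43 MiB
```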
-
**Is your feature request related to a problem? Please describe.**
Your Seq2SeqSharp project already supports LSTMs. Please consider implementing the RWKV large language model "linear attention" idea into y…
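For context, the core of RWKV's "linear attention" is a per-channel recurrence that replaces the quadratic attention matrix with a running weighted average. A rough sketch of that idea as I understand it (simplified, ignoring RWKV's bonus term and time-mixing details, and written in Python rather than C# purely for brevity):

```python
import numpy as np

def wkv_recurrence(k, v, w):
    """Simplified WKV-style recurrence:
    out_t = sum_{i<=t} exp(-(t-i)*w + k_i) * v_i / (same sum without v_i)."""
    T, C = k.shape
    num = np.zeros(C)            # running weighted sum of values
    den = np.zeros(C)            # running sum of weights (normalizer)
    out = np.empty((T, C))
    decay = np.exp(-w)           # per-channel exponential decay
    for t in range(T):
        num = decay * num + np.exp(k[t]) * v[t]
        den = decay * den + np.exp(k[t])
        out[t] = num / den       # O(T*C) overall, no T x T attention matrix
    return out

out = wkv_recurrence(np.random.randn(16, 8), np.random.randn(16, 8), np.full(8, 0.5))
```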
-
**Describe the bug**
First of all, I love the library, and **thank you** for open-sourcing and maintaining it.
Issue:
I am optimizing a forecasting model with Optuna and the individual trials fin…
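The report is truncated here, but for anyone trying to reproduce it, an Optuna setup for a forecasting model typically looks like the sketch below; the objective, search space, and metric are placeholders I'm assuming, not the reporter's actual code:

```python
import optuna

def objective(trial):
    # hypothetical search space for a forecasting model
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    hidden = trial.suggest_int("hidden_size", 32, 256)
    # ... train the model with (lr, hidden) and compute a validation error ...
    val_error = (lr - 0.01) ** 2 + 1.0 / hidden   # dummy stand-in metric
    return val_error

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```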
-
# lstm_output : [batch_size, n_step, n_hidden * num_directions(=2)], F matrix
def attention_net(self, lstm_output, final_state):
    batch_size = len(lstm_output)
    hidden_forward = …
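The snippet is cut off above, but for reference, this kind of method is commonly completed roughly as follows for a bidirectional LSTM; this is a generic pattern I'm assuming, not necessarily the original author's code:

```python
import torch
import torch.nn.functional as F

def attention_net(self, lstm_output, final_state):
    # lstm_output: [batch, n_step, n_hidden*2], final_state: [2, batch, n_hidden]
    batch_size = lstm_output.size(0)
    # concatenate the forward and backward final hidden states -> [batch, n_hidden*2, 1]
    hidden = final_state.transpose(0, 1).reshape(batch_size, -1, 1)
    # dot-product scores between every time step and the final hidden state
    attn_weights = torch.bmm(lstm_output, hidden).squeeze(2)        # [batch, n_step]
    soft_attn = F.softmax(attn_weights, dim=1)
    # weighted sum of the LSTM outputs -> context vector [batch, n_hidden*2]
    context = torch.bmm(lstm_output.transpose(1, 2), soft_attn.unsqueeze(2)).squeeze(2)
    return context, soft_attn

# usage: ctx, w = attention_net(None, torch.randn(4, 10, 16), torch.randn(2, 4, 8))
```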
-
Hello, I'd like to ask about the place where the attention mechanism is added, `lstm_feats, _ = self.attention(lstm_feats, idcnn_out)`: what is `idcnn_out` used for here? Could both arguments just be `lstm_feats`?
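Not the author, but the distinction being asked about is essentially cross-attention versus self-attention: with `idcnn_out` as the second argument, the LSTM features attend over the IDCNN features; if you pass `lstm_feats` twice, it degenerates into self-attention over the LSTM features alone and the IDCNN branch is ignored. A minimal sketch of that pattern using PyTorch's `nn.MultiheadAttention` (the module and shapes are my assumptions, not this repo's `self.attention`):

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)
lstm_feats = torch.randn(2, 50, 128)   # [batch, seq_len, dim]
idcnn_out  = torch.randn(2, 50, 128)

# cross-attention: queries come from the LSTM, keys/values from the IDCNN branch
fused, _ = attn(query=lstm_feats, key=idcnn_out, value=idcnn_out)

# self-attention: the same call with lstm_feats everywhere
self_only, _ = attn(query=lstm_feats, key=lstm_feats, value=lstm_feats)
```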
-
If I understand the logic correctly, then in the Luong decoder's forward function:
```
def forward(self, inputs, hidden, encoder_outputs):
    # Embed input words
    embedded = self.embedding(…
```
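For reference, a Luong-style decoder forward pass is commonly structured as below. This is my own sketch under the usual assumptions (single-step decoding, a "general" Luong score, and a `classifier` projection over decoder state plus context), not necessarily this repository's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LuongDecoder(nn.Module):
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.attn = nn.Linear(hidden_size, hidden_size)              # "general" Luong score
        self.classifier = nn.Linear(hidden_size * 2, vocab_size)

    def forward(self, inputs, hidden, encoder_outputs):
        # inputs: [batch, 1] token ids; encoder_outputs: [batch, src_len, hidden]
        embedded = self.embedding(inputs)                            # [batch, 1, hidden]
        lstm_out, hidden = self.lstm(embedded, hidden)               # [batch, 1, hidden]
        # score each encoder position against the current decoder state
        scores = torch.bmm(self.attn(lstm_out), encoder_outputs.transpose(1, 2))
        attn_weights = F.softmax(scores, dim=-1)                     # [batch, 1, src_len]
        context = torch.bmm(attn_weights, encoder_outputs)           # [batch, 1, hidden]
        # combine decoder state and context, then predict the next token
        output = self.classifier(torch.cat((lstm_out, context), dim=-1))
        return F.log_softmax(output, dim=-1), hidden, attn_weights
```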