-
What is the reason for not incorporating/benchmarking `BackwardWeights`, at least for NVIDIA? There is no call to `cudnnRNNBackwardWeights`.
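Not part of the original question, but as a hedged sketch of how the missing measurement could be covered at the PyTorch level: timing a full `loss.backward()` on an `nn.RNN` running on CUDA exercises cuDNN's weight-gradient path along with the data-gradient path. All sizes below are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative sizes, not taken from the original benchmark.
rnn = nn.RNN(input_size=512, hidden_size=512, num_layers=2).cuda()
x = torch.randn(100, 32, 512, device="cuda", requires_grad=True)

# Warm-up so allocation and cuDNN setup do not skew the timing.
for _ in range(3):
    rnn(x)[0].sum().backward()
    rnn.zero_grad()

torch.cuda.synchronize()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
out, _ = rnn(x)
out.sum().backward()  # includes the weight-gradient computation
end.record()
torch.cuda.synchronize()
print(f"forward+backward: {start.elapsed_time(end):.2f} ms")
```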
-
### 📚 The doc issue
The code snippet in the PyTorch docs for [`nn.RNN`](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html#torch.nn.RNN) seems to contain a mistake.
Inside the `forward` functio…
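For context (this is not part of the original report): the docs define the recurrence as h_t = tanh(x_t W_ih^T + b_ih + h_{t-1} W_hh^T + b_hh). A minimal single-layer sketch of that documented recurrence, with illustrative names:

```python
import torch

def rnn_step(x_t, h_prev, weight_ih, bias_ih, weight_hh, bias_hh):
    # h_t = tanh(x_t @ W_ih^T + b_ih + h_{t-1} @ W_hh^T + b_hh)
    return torch.tanh(x_t @ weight_ih.T + bias_ih + h_prev @ weight_hh.T + bias_hh)

def rnn_forward(x, h0, weight_ih, bias_ih, weight_hh, bias_hh):
    # x: (seq_len, batch, input_size); single layer only, for illustration.
    h_t = h0
    outputs = []
    for t in range(x.size(0)):
        h_t = rnn_step(x[t], h_t, weight_ih, bias_ih, weight_hh, bias_hh)
        outputs.append(h_t)
    return torch.stack(outputs), h_t
```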
-
### 🚀 The feature, motivation and pitch
I have a Python code segment related to a deep RL algorithm that performs second-order optimization, computing second derivatives with the Hessian matrix and f…
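The original snippet is truncated, so as a hedged illustration of the general technique only: second derivatives in PyTorch are obtained by building the first-order gradient graph with `create_graph=True` and differentiating again, e.g. a Hessian-vector product:

```python
import torch

def hessian_vector_product(loss, params, vec):
    # First-order gradients, keeping the graph so we can differentiate again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # <grad, vec>, then one more backward pass gives H @ vec.
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)

# Illustrative usage with a toy quadratic loss.
w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()            # Hessian is 2 * I
hvp = hessian_vector_product(loss, [w], [torch.ones(3)])
print(hvp[0])                    # tensor([2., 2., 2.])
```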
-
**tl;dr** The basic proposal here is to add a flag to RNN (and subclasses like GRU or LSTM) where instead of running the RNN kernel, it will run the linear, dropout, etc. calls that create an equivale…
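This is not the proposal's actual implementation, just a sketch of the idea: a single-layer unfused equivalent built from explicit `nn.Linear` calls, which tooling (quantization, export, custom autograd) can see through. Class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class DecomposedRNN(nn.Module):
    """Single-layer tanh RNN expressed as explicit Linear calls (illustrative)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.ih = nn.Linear(input_size, hidden_size)
        self.hh = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, h0=None):
        # x: (seq_len, batch, input_size)
        h_t = x.new_zeros(x.size(1), self.hh.out_features) if h0 is None else h0
        outputs = []
        for t in range(x.size(0)):
            h_t = torch.tanh(self.ih(x[t]) + self.hh(h_t))
            outputs.append(h_t)
        return torch.stack(outputs), h_t
```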
-
The model is running on cuda
```
Traceback (most recent call last):
  File "demo.py", line 206, in <module>
    sam_pipeline = load_sam(cfg, device)
  File "demo.py", line 166, in load_sam
    pipelines = …
```
-
## 🚀 Feature
Allow variable intermediate hidden dimensions for stacked RNN/GRU/LSTM layers.
## Motivation
As it is right now, even though I can specify `num_layers` to be greater than 1 to…
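The request text is truncated above; as a hedged sketch of the workaround (not a proposed API), per-layer hidden sizes can be emulated today by stacking single-layer modules:

```python
import torch
import torch.nn as nn

class StackedRNN(nn.Module):
    """Stack of single-layer RNNs with distinct hidden sizes (illustrative)."""

    def __init__(self, input_size, hidden_sizes):
        super().__init__()
        sizes = [input_size] + list(hidden_sizes)
        self.layers = nn.ModuleList(
            nn.RNN(sizes[i], sizes[i + 1], num_layers=1)
            for i in range(len(hidden_sizes))
        )

    def forward(self, x):
        hiddens = []
        for layer in self.layers:
            x, h = layer(x)
            hiddens.append(h)
        return x, hiddens

model = StackedRNN(input_size=64, hidden_sizes=[128, 32, 16])
out, _ = model(torch.randn(10, 8, 64))
print(out.shape)  # torch.Size([10, 8, 16])
```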
-
Hello, I would like to use RWKV for speech enhancement. How can I replace the RNN part of the model with RWKV?
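Not from the original question, and RWKV details vary by implementation; as a rough sketch, any module exposing the same sequence-in/sequence-out interface as the existing RNN can be swapped in. The `RWKVBlock` below is hypothetical and stands in for whatever implementation is used.

```python
import torch.nn as nn

class RWKVDropIn(nn.Module):
    """Wraps a hypothetical RWKVBlock to match nn.RNN's call convention."""

    def __init__(self, rwkv_block):
        super().__init__()
        # Assumed signature: (batch, seq, features) -> same shape.
        self.block = rwkv_block

    def forward(self, x, h0=None):
        # nn.RNN's default layout is (seq, batch, features); transpose around the block.
        y = self.block(x.transpose(0, 1)).transpose(0, 1)
        return y, None  # no recurrent state is returned; callers must not rely on it

# model.rnn = RWKVDropIn(rwkv_block)  # illustrative swap, names hypothetical
```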
-
In clang-tidy (from ROCm 4.1), there is a warning about functions in rewrite_rnn that exceed the cognitive-complexity threshold. There is a description of how cognitive complexity is calculated [here](htt…
-
### Introduction
This page provides information on implementing complete support for ONNX operators in the Shark/IREE front-end. This effort is part of the overall ONNX quality improvement tracked by [#8…
-
I am not sure whether to use LSTM, RNN, or TCN.
Perhaps we can run some performance comparison tests.
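Not from the original thread; a hedged sketch of one such comparison, timing forward passes of `nn.RNN`, `nn.LSTM`, and a simple dilated-`Conv1d` stack as a crude TCN stand-in (all sizes illustrative):

```python
import time
import torch
import torch.nn as nn

seq_len, batch, feat = 200, 32, 128

models = {
    "RNN": nn.RNN(feat, feat),
    "LSTM": nn.LSTM(feat, feat),
    # Crude TCN stand-in: two dilated convolutions with length-preserving padding.
    "TCN": nn.Sequential(
        nn.Conv1d(feat, feat, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
        nn.Conv1d(feat, feat, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
    ),
}

x_rnn = torch.randn(seq_len, batch, feat)   # (seq, batch, feat) for RNN/LSTM
x_cnn = torch.randn(batch, feat, seq_len)   # (batch, feat, seq) for Conv1d

with torch.no_grad():
    for name, m in models.items():
        inp = x_cnn if name == "TCN" else x_rnn
        m(inp)  # warm-up
        t0 = time.perf_counter()
        for _ in range(10):
            m(inp)
        print(f"{name}: {(time.perf_counter() - t0) / 10 * 1e3:.1f} ms/iter")
```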