-
I ran into an error when running the rnn-denoise example: "ValueError: Input 0 is incompatible with layer gru: expected shape=(2048, None, 40), found shape=[32, 1, 40]"
This error occurs in the function …
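For context, this kind of mismatch typically appears when a stateful GRU is built for one fixed batch size and then fed a batch of a different size. Below is a minimal sketch of that situation, assuming a stateful Keras GRU built for batch 2048 (layer sizes, names, and the rebuild-and-copy-weights workaround are illustrative, not taken from rnn-denoise itself):
```
import numpy as np
import tensorflow as tf

# Minimal sketch: a stateful GRU built for a fixed batch size of 2048 only
# accepts inputs whose batch dimension is exactly 2048.
inputs = tf.keras.Input(batch_shape=(2048, None, 40))
outputs = tf.keras.layers.GRU(64, stateful=True, return_sequences=True)(inputs)
train_model = tf.keras.Model(inputs, outputs)

x = np.zeros((32, 1, 40), dtype=np.float32)
try:
    train_model.predict(x)   # batch of 32 vs. the fixed batch of 2048
except ValueError as err:
    print(err)               # "Input 0 is incompatible with layer gru: expected shape=(2048, None, 40) ..."

# A common workaround: rebuild the same architecture with the batch size used
# at inference time and copy the trained weights across.
new_inputs = tf.keras.Input(batch_shape=(32, None, 40))
new_outputs = tf.keras.layers.GRU(64, stateful=True, return_sequences=True)(new_inputs)
infer_model = tf.keras.Model(new_inputs, new_outputs)
infer_model.set_weights(train_model.get_weights())
out = infer_model.predict(x)
```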
-
```
class MyModel(tf.keras.Model):
  def __init__(self, vocab_size, embedding_dim, rnn_units):
    super().__init__(self)
    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
    …
```
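This snippet looks like the `MyModel` class from the official TensorFlow text-generation tutorial, where the class continues with a GRU layer and a Dense output head. The following self-contained sketch fills in that shape as an assumption (TF 2.x era API, not the poster's exact code):
```
import tensorflow as tf

class MyModel(tf.keras.Model):
  def __init__(self, vocab_size, embedding_dim, rnn_units):
    super().__init__()
    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
    self.gru = tf.keras.layers.GRU(rnn_units,
                                   return_sequences=True,
                                   return_state=True)
    self.dense = tf.keras.layers.Dense(vocab_size)

  def call(self, inputs, states=None, return_state=False, training=False):
    x = self.embedding(inputs, training=training)
    if states is None:
      states = self.gru.get_initial_state(x)   # TF 2.x signature; Keras 3 changed this API
    x, states = self.gru(x, initial_state=states, training=training)
    x = self.dense(x, training=training)
    if return_state:
      return x, states
    return x
```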
-
### 🐛 Describe the bug
```
torch.onnx.errors.SymbolicValueError: ONNX symbolic expected the output of `%2212 : Tensor = onnx::Squeeze(%2186, %2211), scope: SimpleLSTMNet::/torch.ao.nn.quantized.modu…
```
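For readers hitting a similar trace: this class of error tends to appear when exporting dynamically quantized recurrent models to ONNX. A minimal sketch of that kind of setup, assuming a dynamically quantized LSTM (the `SimpleLSTMNet` below is a stand-in with made-up shapes, not the reporter's model):
```
import torch
import torch.nn as nn

class SimpleLSTMNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=40, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, 10)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])

model = SimpleLSTMNet().eval()
# Dynamic quantization swaps in torch.ao.nn.quantized.dynamic modules.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 16, 40)
try:
    torch.onnx.export(qmodel, (x,), "simple_lstm_quantized.onnx", opset_version=13)
except Exception as err:
    # Export of quantized RNN ops may fail with a symbolic error like the one above.
    print(type(err).__name__, err)
```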
-
Hi,
I have been following the examples in RNN-Denoise. However, when testing the quantized model, I noticed that the quantized output decays towards zero very quickly:
![image](https://github.com/ma…
-
### 🐛 Describe the bug
The `nn.RNNBase.flatten_parameters` function should be a no-op for export.
Otherwise, `export()`/`dynamo_export()` fail inside it with this error:
```
File "/usr/local/lib/python3.10…
```
-
I am following the example set in the README, and I am getting the following error. Is this familiar to anyone?
```
In [1]: from pase.models.frontend import wf_builder …
```
-
Hello. Can you please tell me how to describe a neural network with 2 hidden layers of 256 LSTM cells each and an output layer consisting of one LSTM unit?
FunctionPtr classifierRoot = features…
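The C++ snippet is cut off, so purely to illustrate the architecture being asked about (two stacked 256-cell LSTM layers followed by a single-unit LSTM output layer), here is a minimal Keras sketch; it is not the CNTK `FunctionPtr` API the question refers to, and the input feature size is assumed:
```
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 40)),                   # 40 features per timestep (assumed)
    tf.keras.layers.LSTM(256, return_sequences=True),   # hidden layer 1: 256 LSTM cells
    tf.keras.layers.LSTM(256, return_sequences=True),   # hidden layer 2: 256 LSTM cells
    tf.keras.layers.LSTM(1),                            # output layer: a single LSTM unit
])
model.summary()
```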
-
1. Hello, may I ask: are all the settings mentioned in the paper? In the preprocessing step, we adopt Dlib [14] to carry out face
and landmark detection (another detector, OpenFace [4], is adopted in the abla…
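For concreteness, the Dlib-based face and landmark detection step quoted above usually boils down to something like the following sketch; the 68-point shape predictor and the file names are assumptions, since the excerpt does not specify them:
```
import dlib

# Face detection followed by landmark detection, as described in the quoted
# preprocessing step. The predictor model file is an assumption.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("frame.jpg")      # hypothetical input frame
faces = detector(img, 1)                    # upsample once to catch smaller faces
for face in faces:
    shape = predictor(img, face)            # 68 facial landmarks
    landmarks = [(p.x, p.y) for p in shape.parts()]
```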
-
Hi!
Just wondering how an RNN could be mixed into the `ODEProblem`.
In Flux times, it seems a `Recur` layer needed to be created; however, there is already a `Recurrence` layer in Lux.jl.
[Training of UDEs w…