-
Noticing how fast the deep learning ecosystem is moving, with multiple NN implementations in Python, and much of the code in Lua, @ssamot and I are considering moving the project toward a multiple bac…
-
Hi!
I've used char-rnn as a base for a different learning task, which requires me to train the networks on much longer sequences. However, I quickly realized that the training is very unstable and r…
-
Currently, if one creates a network (say, a siamese network) in which some modules share weights with one another, once the network is converted to another type such as :cuda or :float, then …
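A minimal sketch of the pitfall, in plain Python rather than Torch (the `Module` class and `convert_*` helpers below are illustrative stand-ins, not the nn API): casting each module's storage independently allocates a fresh buffer per module, silently breaking sharing, whereas converting each unique storage once and reusing it preserves the sharing.

```python
class Module:
    def __init__(self, weight):
        self.weight = weight  # a plain list standing in for a tensor storage

def convert_naive(module):
    # Re-allocates a fresh buffer for every module, analogous to a type
    # cast that rebuilds each module's storage independently.
    module.weight = [float(w) for w in module.weight]

def convert_shared(modules):
    # Convert each unique storage exactly once and reuse the result,
    # so modules that shared a buffer before still share it after.
    converted = {}
    for m in modules:
        key = id(m.weight)
        if key not in converted:
            converted[key] = [float(w) for w in m.weight]
        m.weight = converted[key]

shared = [1, 2, 3]
a, b = Module(shared), Module(shared)
convert_naive(a)
convert_naive(b)
print(a.weight is b.weight)   # False: sharing silently broken

c, d = Module(shared), Module(shared)
convert_shared([c, d])
print(c.weight is d.weight)   # True: sharing preserved
```

The second helper is the general shape of a fix: deduplicate on storage identity before converting, rather than converting per module.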
-
## Issue A
Proposals:
1. Rename MonitorAccuracy to MonitorClassificationAccuracy. Additionally, later implement MonitorMeanSquaredError etc. We already have a separate MonitorHammingScore.
2. Implem…
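A rough sketch of how the first proposal could look, assuming a common `Monitor` base class (the base-class interface here is an assumption; only the subclass names come from the proposal):

```python
class Monitor:
    # Assumed base interface: accumulate over batches, then read a value.
    def update(self, predictions, targets):
        raise NotImplementedError

    def value(self):
        raise NotImplementedError

class MonitorClassificationAccuracy(Monitor):
    # The renamed MonitorAccuracy: fraction of exact label matches.
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, predictions, targets):
        self.correct += sum(p == t for p, t in zip(predictions, targets))
        self.total += len(targets)

    def value(self):
        return self.correct / self.total

class MonitorMeanSquaredError(Monitor):
    # A later addition per the proposal: mean squared error over targets.
    def __init__(self):
        self.sq_error = 0.0
        self.total = 0

    def update(self, predictions, targets):
        self.sq_error += sum((p - t) ** 2 for p, t in zip(predictions, targets))
        self.total += len(targets)

    def value(self):
        return self.sq_error / self.total
```

The more specific names make it unambiguous which metric a monitor reports, which is the point of the rename.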
-
Since reversePush is not tail-recursive, it results in a stack overflow even for simple networks once you push a lot of data through them or use a lot of memory. Here's the code I used to reproduce this (backprop …
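The general fix for this class of bug is to replace the recursion with an explicit stack, so traversal depth is bounded by the heap rather than the call stack. A language-agnostic sketch in Python (the node layout and function names are illustrative, not the actual reversePush code):

```python
def reverse_push_recursive(node, visit):
    # Mirrors the recursive shape of reversePush: one call-stack frame
    # per node, so a long chain of nodes overflows the stack.
    visit(node["name"])
    for child in node["children"]:
        reverse_push_recursive(child, visit)

def reverse_push_iterative(root, visit):
    # Same traversal with an explicit stack: depth is limited by memory,
    # not by the call stack, so long chains no longer overflow.
    stack = [root]
    while stack:
        node = stack.pop()
        visit(node["name"])
        # Push children reversed so they are visited in their original order.
        stack.extend(reversed(node["children"]))

# A chain far deeper than Python's default recursion limit (~1000 frames):
depth = 100_000
root = {"name": 0, "children": []}
node = root
for i in range(1, depth):
    child = {"name": i, "children": []}
    node["children"].append(child)
    node = child

visited = []
reverse_push_iterative(root, visited.append)
print(len(visited))  # 100000
```

Calling the recursive version on the same chain would raise a stack-depth error, which is the behavior the issue reports.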
-
As observed in #141 (but also in #110 and https://github.com/benanne/Lasagne/issues/136#issuecomment-75169847), it's not entirely clear what `Layer.get_params()` means and what it should mean.
The do…
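The ambiguity can be made concrete with a toy sketch of the two plausible readings (the class and method names below are hypothetical, not Lasagne's actual API): `get_params()` could mean only the parameters a layer itself owns, or it could mean those plus everything collected recursively from the layers feeding into it.

```python
class Layer:
    # Hypothetical stand-in for a Lasagne-style layer with one incoming layer.
    def __init__(self, incoming=None, params=()):
        self.incoming = incoming
        self.params = list(params)

    def get_own_params(self):
        # Reading 1: only the parameters this layer owns.
        return list(self.params)

    def get_all_params(self):
        # Reading 2: this layer's parameters plus those of every layer
        # feeding into it, collected recursively.
        inherited = [] if self.incoming is None else self.incoming.get_all_params()
        return inherited + self.params

inp = Layer()
hidden = Layer(inp, params=["W1", "b1"])
out = Layer(hidden, params=["W2", "b2"])
print(out.get_own_params())  # ['W2', 'b2']
print(out.get_all_params())  # ['W1', 'b1', 'W2', 'b2']
```

The second reading is what a training loop usually wants (all parameters to update), while the first is what a layer author naturally implements, which is one way the confusion arises.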
-
- [ ] dp.Recurrent() (@daydreamt)
-
Hello, I love that this project is building higher-level abstractions on top of Theano. That should be really useful. What I'm more interested in, though, is what Blocks provides (or aims to provi…
-
Lasagne `Layers` compute their outputs by calling `get_output()` recursively on their parents. When a network has a tree structure, which is usually the case, this works fine. But when a network has a…
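The problem with non-tree graphs can be shown with a toy stand-in (class and method names here are illustrative, not Lasagne's API): in a diamond-shaped graph, plain recursion evaluates the shared ancestor once per consumer, while memoizing each layer's result during the traversal evaluates every layer exactly once.

```python
class Layer:
    # Toy stand-in for a layer that computes its output from its parents.
    def __init__(self, *parents):
        self.parents = parents
        self.calls = 0  # how many times this layer's output was computed

    def get_output_naive(self):
        # Recurse into parents with no caching, as in a plain recursive
        # get_output(): shared ancestors are evaluated once per consumer.
        self.calls += 1
        for p in self.parents:
            p.get_output_naive()

    def get_output_memoized(self, memo=None):
        # Caching each visited layer in a per-call memo table makes the
        # traversal evaluate every layer exactly once, even in a DAG.
        memo = set() if memo is None else memo
        if id(self) in memo:
            return
        memo.add(id(self))
        self.calls += 1
        for p in self.parents:
            p.get_output_memoized(memo)

# Diamond-shaped graph: two branches share the same input layer.
inp = Layer()
left, right = Layer(inp), Layer(inp)
top = Layer(left, right)

top.get_output_naive()
print(inp.calls)  # 2: the shared input is recomputed once per branch

for layer in (inp, left, right, top):
    layer.calls = 0
top.get_output_memoized()
print(inp.calls)  # 1: computed once and reused
```

With symbolic Theano outputs the duplicate work may partly be merged at compile time, but the duplicated graph construction itself still grows with the number of shared consumers, which is what the memo table avoids.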