JackKelly / neuralnilm_prototype (MIT License)
1D convnet input #3 (Open)
JackKelly opened this issue 9 years ago
JackKelly commented 9 years ago
[x] try sigmoid activation for Conv1D layer
[x] Try ReLU (it appears I can't get ReLU to work anywhere in the network; I've tried it in the Conv1DLayer and the DenseLayers)
[x] convnet (stride 1), then LSTM
[ ] Try using input in the range -1 to +1
[x] convnet (stride 1), convnet (stride 1), then LSTM
[ ] Try convnet (stride 1), then fully connected, then LSTM.
[ ] layerwise pretraining for the ConvNet layers (autoencoder) #4
[ ] Try a larger input range
[ ] Read about initialisation for rectifier units
[ ] Try with no LSTM layer and boolean targets.
[ ] Convnets should be easy to visualise
[ ] Try a purely feed-forward network using 1D convolutions and pooling.
Zhang & LeCun, "Text Understanding from Scratch" (2015), is a fascinating example of this working well. Also see LeCun et al. (1998).
[ ] Use linearly spaced values to init the weights and bias for the bottom layer?
[ ] #17
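Two of the items above are easy to sketch in isolation: rescaling the network input to the range -1 to +1, and the initialisation scheme for rectifier (ReLU) units described by He et al. (2015), which sets the weight standard deviation to sqrt(2 / fan_in). This is a minimal NumPy sketch, not code from this repo; the helper names are hypothetical.

```python
import numpy as np


def rescale_to_unit_range(x):
    """Linearly rescale an array to the range [-1, 1].

    Hypothetical helper: min-max scaling followed by a shift,
    so the smallest value maps to -1 and the largest to +1.
    """
    x = np.asarray(x, dtype=np.float64)
    lo, hi = x.min(), x.max()
    if hi == lo:
        # Constant input: map everything to 0 to avoid division by zero.
        return np.zeros_like(x)
    return 2.0 * (x - lo) / (hi - lo) - 1.0


def he_init(fan_in, fan_out, seed=None):
    """He et al. (2015) normal initialisation for ReLU layers:
    W ~ N(0, sqrt(2 / fan_in)).  Hypothetical helper for illustration."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
```

For example, `rescale_to_unit_range([0, 5, 10])` gives `[-1., 0., 1.]`, and `he_init(128, 64)` returns a 128x64 weight matrix whose sample standard deviation is close to sqrt(2/128) ≈ 0.125.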