Closed: jeammimi closed this issue 7 years ago
Take a look at #1456. The TimeDistributed
wrapper converts a layer to its time distributed version.
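For illustration, something along these lines (assuming the TimeDistributed wrapper from keras.layers.wrappers and "th" dim ordering; just a sketch of the idea, not code from this thread):

from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.wrappers import TimeDistributed

# apply the same 2D convolution independently to each of the 10 frames
# of a clip shaped (time, channels, rows, cols)
model = Sequential()
model.add(TimeDistributed(Convolution2D(8, 3, 3, border_mode="same"),
                          input_shape=(10, 1, 40, 40)))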
Yes, I already implemented one following this TimeDistributed layer, but I have two options: add this time distribution as an option to the regular Convolution2D, or create a new layer. My question was more whether it is ok to ask for a pull request with two new layers: the TimeDistributedConv2D and the LSTMConv2D (recurrent convolutional).
Ah sorry, I just read carefully what you said, and I will have a look at the wrapper you pointed me to.
Hello @jeammimi, I'm also interested in implementing that model in Keras, but I have no experience with ConvNets + LSTM. Can you give me some steps on how you built the network? That would help me a lot.
Hello, I am trying to clean up the code and do a pull request. But the basic step was to replace the dot product by a convolution product.
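Schematically, the change looks something like this, written with the Keras backend (W_i and U_i are hypothetical input-gate weights, just to sketch the idea, not the actual code in the branch; biases omitted):

import keras.backend as K

def input_gate_dense(x_t, h_tm1, W_i, U_i):
    # classical LSTM input gate: dot products with the weight matrices
    return K.sigmoid(K.dot(x_t, W_i) + K.dot(h_tm1, U_i))

def input_gate_conv(x_t, h_tm1, W_i, U_i):
    # convolutional LSTM input gate: the dot products become convolutions
    return K.sigmoid(K.conv2d(x_t, W_i, border_mode='same') +
                     K.conv2d(h_tm1, U_i, border_mode='same'))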
@jeammimi thanks man. Can you give me a link to that repo so I can pull it?
Here is the code:
https://github.com/imodpasteur/keras.git
You must switch to the branch RecConv.
Here is one example using the layer:
from keras.models import Sequential, Graph
from keras.layers.convolutional import Convolution2D, Convolution3D
from keras.layers.recurrent_convolutional import LSTMConv2D

seq = Sequential()
seq.add(LSTMConv2D(nb_filter=5, nb_row=2, nb_col=2, input_shape=(10, 40, 40, 1),
                   border_mode="same", return_sequences=True))
seq.add(LSTMConv2D(nb_filter=5, nb_row=2, nb_col=2,
                   border_mode="same", return_sequences=True))
seq.add(Convolution3D(nb_filter=1, kernel_dim1=1, kernel_dim2=2, kernel_dim3=2,
                      activation='sigmoid', border_mode="same", dim_ordering="tf"))

seq.compile(loss="binary_crossentropy", optimizer="adadelta")
In this example the time dimension is 10 and the initial number of features is one.
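For reference, a rough sketch of the data shapes this model expects, with purely dummy data (assuming the (samples, time, rows, cols, channels) layout used above):

import numpy as np

# 8 dummy clips of 10 frames of 40x40 single-channel images
x = np.random.random((8, 10, 40, 40, 1))
y = (np.random.random((8, 10, 40, 40, 1)) > 0.5).astype("float32")
seq.fit(x, y, batch_size=2, nb_epoch=1)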
Thank you @jeammimi, this is so awesome. I hope this will be part of the next release :smile:
@jeammimi can you teach me how to do encoder-decoder with this just like the model in the paper?
I don't follow the paper's approach to use it.
What I do is train it with input sequence = [x1,x2,x3,x4] and output sequence = [x2,x3,x4,x5].
Then for the prediction (pseudo-code):
input = [x1,x2,x3,x4]
for j in range(10):  # to predict 10 new steps
    new_pos = seq.predict(input[newaxis,::,::,::,::])
    input = np.concatenate((input,new),axis=0)
Sorry there was a mistake in my pseudocode:
input = [x1,x2,x3,x4]
for j in range(10):  # to predict 10 new steps
    new_pos = seq.predict(input[newaxis,::,::,::,::])
    new = new_pos[::,-1,::,::,::]
    input = np.concatenate((input,new),axis=0)
Alright @jeammimi. Thanks, but I still don't get it. Will you include an example in your repo (in the examples folder)?
What I implemented is only the LSTM conv part of the paper. I didn't implement the encoder-decoder part. (I will try to see how to create a network that implements that.) But you can still use the network as a classical LSTM, just for sequences of images. Below is a full example. What the network does is predict the next frame. So you can initiate it with some frames, then predict the next frame, then the next, and so on...
import numpy as np

from keras.models import Sequential, Graph
from keras.layers.convolutional import Convolution2D, Convolution3D
from keras.layers.recurrent_convolutional import LSTMConv2D

n_videos = 1200
length_video = 15  # number of frames
height = 40
width = 40
channel = 1

my_videos = np.zeros((n_videos, length_video, height, width, channel))
train = my_videos[::,:-1,::,::,::]  # all frames but the last, as input
gt = my_videos[::,1:,::,::,::]      # all frames but the first, as target (next frame)

seq = Sequential()
seq.add(LSTMConv2D(nb_filter=15, nb_row=3, nb_col=3, input_shape=(length_video, height, width, channel), border_mode="same", return_sequences=True))
seq.add(LSTMConv2D(nb_filter=15, nb_row=3, nb_col=3, border_mode="same", return_sequences=True))
seq.add(LSTMConv2D(nb_filter=15, nb_row=3, nb_col=3, border_mode="same", return_sequences=True))
seq.add(Convolution3D(nb_filter=1, kernel_dim1=1, kernel_dim2=3, kernel_dim3=3, activation='sigmoid', border_mode="same", dim_ordering="tf"))

seq.compile(loss="binary_crossentropy", optimizer="adadelta")
seq.fit(train[:1000], gt[:1000], batch_size=10, nb_epoch=1, validation_split=0.05)

# start from the first 5 frames of 10 held-out videos,
# then repeatedly predict the next frame and feed it back in
track = train[1000:1010,:5,::,::,::]
for j in range(15):
    new_pos = seq.predict(track)
    new = new_pos[::,-1:,::,::,::]
    track = np.concatenate((track,new),axis=1)
Alright, thank you again for this excellent example @jeammimi. I got it. Anyway, my problem is spatio-temporal prediction of crime. My approach is to overlay an NxN grid on a city map and create a snapshot of weekly crime occurrence on the grid cells. So 1 week is 1 training example (treated as an "image" with NxN cells). My input is a series of weekly snapshots, and my output is a prediction for the next week as an NxN 1/0 matrix (indicating an occurrence of crime or not). Any advice on selecting my hyperparameters would help me a lot. Btw, I have 3 years of crime data.
Hi, you are welcome.
I am sorry, but I don't have real practical experience with these layers. In the original article there are some examples, so I would start from their parameters. But your project is very interesting, so good luck!
@jeammimi I've been looking at the docs about border_mode='same' but I still don't get it. Can you explain it a bit? Does it mean you don't have to add a ZeroPadding layer?
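Roughly yes: border_mode="same" pads internally so the spatial size is preserved, which is what you would otherwise do by hand with ZeroPadding2D before a "valid" convolution. A minimal sketch (assuming Keras 1.x and the default "th" dim ordering):

from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, ZeroPadding2D

# "same": a 3x3 convolution on a 40x40 input gives a 40x40 output, no extra padding layer needed
a = Sequential()
a.add(Convolution2D(8, 3, 3, border_mode="same", input_shape=(1, 40, 40)))

# "valid" + explicit zero padding of 1 pixel on each side gives the same 40x40 spatial size
b = Sequential()
b.add(ZeroPadding2D((1, 1), input_shape=(1, 40, 40)))
b.add(Convolution2D(8, 3, 3, border_mode="valid"))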
Hi jeammimi, I have checked your implementation as it might be very useful for a project I am working on; however, I would like to know if you have updates, or are planning any, to upgrade the code to work with Keras 1.0. As the implementation of custom layers has changed, some parts of the structure in your code do not work with the latest version of Keras. Thank you.
I wasn't really planning to do it, but if you are interested I could try. It shouldn't be that hard to modify. Have you looked into it?
Ok, I did the changes but didn't test them extensively. It is in the branch recconvV1 of the https://github.com/imodpasteur/keras repository.
Ok, thank you! Sorry I did not answer your previous message, I was away for some days. I will check it and let you know how it works. Thank you.
Hey jeammimi, I would be very interested in seeing the code you used to build the predictions of the moving squares at the top of the page. Any chance you could post it?
Jeff
Hello, I added a notebook in the examples folder: https://github.com/imodpasteur/keras/blob/recconvV1/examples/TestConv2DLSTM.ipynb
In this one I added some batch normalization, but a friend told me that when working with images, one should change the default parameter (I think it is the axis, to normalize along the feature dimension); but anyway, it also works like that. For the training, I think I trained it for 40 epochs, and maybe I reduced the learning rate at some point, but I don't remember well.
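For the record, something like this is what that suggestion would look like (assuming a BatchNormalization layer that takes an axis argument and the channels-last inputs used above; the exact default may differ between Keras versions):

from keras.layers.normalization import BatchNormalization

# normalize per feature map: with channels last, the feature axis is the last one
seq.add(BatchNormalization(axis=-1))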
JM
In your implementation, is there an argument that corresponds to the number of neurons in the LSTM model?
What is "equivalent" to the number of neuron is the number of filter.
Hi, I developed a recurrent convolutional layer for Keras, following "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting" (http://arxiv.org/pdf/1506.04214v1.pdf).
Before doing a pull request I would like some advice: for the convolution part, you can usually use either dim_ordering="tf" or "th". I can't make "th" work. I tried to debug it, but I don't know why it does not work.
To use this layer, it is also nice to have a time-distributed version of the 2D convolution layer. I implemented one, but maybe it is too much to add two layers in the same pull request?
I can't install the TensorFlow backend, so I don't know if it works with it.
What are your suggestions?