The important bit in all that output is the "IndexError: tuple index out of range". I think you don't have the latest iisignature (0.20, released 9 August 2017), which means you can't get derivatives with respect to the time lapse. Solutions: either set `train_time_lapse=False` when you create the `RecurrentSig` layer in `demo_rnn.py`, or upgrade to the latest version with `pip install --upgrade iisignature`.
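One way to make a script robust to either situation is to gate the option on the installed release. `can_train_time_lapse` below is a hypothetical helper, not part of iisignature; it just encodes the version cut-off mentioned above:

```python
def can_train_time_lapse(version):
    """True if this iisignature version supports derivatives w.r.t. the
    time lapse (added in 0.20, released 9 August 2017)."""
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= (0, 20)

# Hypothetical usage when building the layer in demo_rnn.py:
# layer = RecurrentSig(5, train_time_lapse=can_train_time_lapse(installed_version))

print(can_train_time_lapse("0.19"))  # False
print(can_train_time_lapse("0.20"))  # True
```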
Hi Jeremy,
sorry for not including the version number in this issue - my bad, I was on an older release. Upgrading to 0.20 with `pip install --upgrade iisignature` fixed it, and it's all good to go.
```
(py35_pytorch) ajay@ajay-h8-1170uk:~/PythonProjects/iisignature-master/examples$ python demo_rnn.py
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
recurrent_sig_1 (RecurrentSi (None, 5)                 196.0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 6
=================================================================
Total params: 202
Trainable params: 202
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
2017-08-20 14:27:10.723206: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-20 14:27:10.723262: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-20 14:27:10.723272: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2000/2000 [==============================] - 0s - loss: 0.0510
Epoch 2/10
2000/2000 [==============================] - 0s - loss: 0.0461
Epoch 3/10
2000/2000 [==============================] - 0s - loss: 0.0440
Epoch 4/10
2000/2000 [==============================] - 0s - loss: 0.0426
Epoch 5/10
2000/2000 [==============================] - 0s - loss: 0.0415
Epoch 6/10
2000/2000 [==============================] - 0s - loss: 0.0404
Epoch 7/10
2000/2000 [==============================] - 0s - loss: 0.0393
Epoch 8/10
2000/2000 [==============================] - 0s - loss: 0.0380
Epoch 9/10
2000/2000 [==============================] - 0s - loss: 0.0366
Epoch 10/10
2000/2000 [==============================] - 0s - loss: 0.0348
0.0333910922002
```
Thank you very much for the help!
I haven't seen much PyTorch code, so I don't know the elegant way to write things, but I have just checked in an approximate PyTorch equivalent of the Keras recurrent example.
I don't claim that the recurrent structure these examples show is a good-performing recurrent layer for any real problem - it was just a simple idea, the first one I coded up. I am interested in ideas for improving RNNs using signatures in general, have other ideas not yet released, and am happy to chat about this type of research.
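The basic idea in these examples - feed a low-depth path signature of recent inputs into a recurrent update - can be illustrated in plain NumPy. This is a hand-rolled depth-2 signature, not iisignature's API, and `recurrent_sig_step` with its weights `W_h`, `W_s`, `b` is a hypothetical toy, not the actual `RecurrentSig` layer:

```python
import numpy as np

def sig2(path):
    """Depth-2 signature of a piecewise-linear path of shape (T, d),
    built segment by segment with Chen's identity."""
    d = path.shape[1]
    s1 = np.zeros(d)        # level 1: total increment
    s2 = np.zeros((d, d))   # level 2: iterated integrals
    for delta in np.diff(path, axis=0):
        # Appending a linear segment: cross term plus the segment's own term.
        s2 += np.outer(s1, delta) + 0.5 * np.outer(delta, delta)
        s1 += delta
    return np.concatenate([s1, s2.ravel()])

def recurrent_sig_step(h, window, W_h, W_s, b):
    """One recurrent update: mix the hidden state with the signature
    of a sliding window of recent inputs."""
    return np.tanh(h @ W_h + sig2(window) @ W_s + b)
```

For a straight-line path the level-2 terms are exactly half the outer product of the level-1 increment, which is a handy sanity check for the implementation.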
WOW - how did you do that so quickly !!!
```
(py35_pytorch) ajay@ajay-h8-1170uk:~/PythonProjects/iisignature-master/examples$ python demo_rnn_torch.py
0.3396
[torch.FloatTensor of size 1]
```
Looks good to me :+1:
May I email you to chat about your interests regarding applications of RNNs with the signature method? What's your preferred email or method of communication? Please feel free to drop me a line at ajaytalati@googlemail.com if you don't want to post it here. Happy to meet up if you're still in London.
At the moment I'm working on a GP+RNN pipeline, so I'm looking at alternatives to GP modelling, or enhancements to it - in particular, data efficiency, unsupervised learning and adversarial training are the relevant things. I just wondered what your initial thoughts were on this?
Thanks a lot, best regards,
Ajay
Hi Jeremy,
thanks very much for sharing this repository - it's very exciting!
I just tried to run `demo_rnn.py` and got a big blob of an error (and a similar error when I try to run `demo_keras.py`). It definitely is something to do with the TF graph, as it makes it to line 57.
PS - I normally use PyTorch, and I was wondering how difficult it would be to write a signature RNN module for it?
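A minimal sketch of what such a module could look like: if the signature is written with differentiable tensor ops, autograd supplies the backward pass for free, so no custom `autograd.Function` is strictly needed. The depth-2 `sig2` and the `SigRNNCell` below are hypothetical toys, not iisignature's or this repo's API:

```python
import torch

def sig2(path):
    """Depth-2 signature of a path of shape (T, d), written with
    differentiable tensor ops so autograd provides gradients."""
    deltas = path[1:] - path[:-1]                   # segment increments
    s1 = deltas.sum(dim=0)                          # level 1
    prefix = torch.cumsum(deltas, dim=0) - deltas   # increments before each segment
    # Level 2 via Chen's identity: cross terms plus each segment's own term.
    s2 = (prefix.unsqueeze(2) * deltas.unsqueeze(1)).sum(dim=0) \
         + 0.5 * (deltas.unsqueeze(2) * deltas.unsqueeze(1)).sum(dim=0)
    return torch.cat([s1, s2.reshape(-1)])

class SigRNNCell(torch.nn.Module):
    """Toy recurrent cell: update the hidden state from the depth-2
    signature of a window of recent inputs plus the previous state."""
    def __init__(self, d, hidden):
        super().__init__()
        self.lin = torch.nn.Linear(d + d * d + hidden, hidden)

    def forward(self, h, window):
        return torch.tanh(self.lin(torch.cat([sig2(window), h])))
```

For arbitrary signature depth and efficiency one would wrap iisignature's C-backed routines in a custom autograd function instead, which is presumably what the checked-in PyTorch example does.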
Thanks for your help, Aj