Closed: vivanov879 closed this 7 years ago
```
/usr/bin/python3.5 /home/vivanov/PycharmProjects/PLSTM/simplePhasedLSTM.py
Compiling RNN... DONE!
Compiling cost functions... DONE!
/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Calculating gradients... DONE!
Initializing variables... DONE!
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1021, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1003, in _run_fn
    status, run_metadata)
  File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [320,1] vs. shape[1] = [32,100]
	 [[Node: 0/RNN/while/PhasedLSTMCell/concat = Concat[N=2, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](0/RNN/while/PhasedLSTMCell/concat/concat_dim, 0/RNN/while/PhasedLSTMCell/Slice_1, 0/RNN/while/Identity_3)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/vivanov/PycharmProjects/PLSTM/simplePhasedLSTM.py", line 276, in <module>

Caused by op '0/RNN/while/PhasedLSTMCell/concat', defined at:
  File "/home/vivanov/PycharmProjects/PLSTM/simplePhasedLSTM.py", line 276, in <module>

InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [320,1] vs. shape[1] = [32,100]
	 [[Node: 0/RNN/while/PhasedLSTMCell/concat = Concat[N=2, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](0/RNN/while/PhasedLSTMCell/concat/concat_dim, 0/RNN/while/PhasedLSTMCell/Slice_1, 0/RNN/while/Identity_3)]]
```
Now it remembers states between runs.
I tracked down the bug in my code -- the shapes are wrong during testing -- will fix now.
The test run used a different batch size -- fixed that -- now testing.
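For anyone hitting the same thing, here is a minimal sketch (not the PR's code; the shapes and the TF <= 0.12 `tf.concat(concat_dim, values)` signature are assumptions) of how a leading-dimension mismatch between the fed inputs and a saved state reproduces exactly this ConcatOp error:

```python
import numpy as np
import tensorflow as tf

# Inside an LSTM-style cell, the current input and the previous hidden
# state are concatenated along axis 1, so their leading (batch) dims
# must agree. With the batch dimension unknown at graph-build time, the
# mismatch only shows up at run time, as in the log above.
x = tf.placeholder(tf.float32, [None, 1])    # current input slice
h = tf.placeholder(tf.float32, [None, 100])  # previous hidden state
xh = tf.concat(1, [x, h])                    # TF <= 0.12: tf.concat(concat_dim, values)

with tf.Session() as sess:
    # A state saved from a batch of 32, fed alongside inputs with leading
    # dimension 320, raises InvalidArgumentError: [320,1] vs. [32,100].
    sess.run(xh, {x: np.zeros([320, 1], np.float32),
                  h: np.zeros([32, 100], np.float32)})
```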
I want it to remember states like an ordinary multilayer LSTM would. I keep each layer's state in a list, and the PLSTM now uses the previous run's final state as the initial state of the current run.
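Roughly what I mean, as a standalone toy sketch (not the PR's code; the names `c_in`/`h_in` and the single-layer setup are just for illustration): fetch the final state of one `session.run` and feed it back as the initial state of the next.

```python
import numpy as np
import tensorflow as tf

batch_size, n_steps, n_input, n_hidden = 32, 10, 1, 100

x = tf.placeholder(tf.float32, [batch_size, n_steps, n_input])
cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, state_is_tuple=True)

# Placeholders for the carried-over state (one pair per layer if stacked).
c_in = tf.placeholder(tf.float32, [batch_size, n_hidden])
h_in = tf.placeholder(tf.float32, [batch_size, n_hidden])
initial_state = tf.nn.rnn_cell.LSTMStateTuple(c_in, h_in)

outputs, final_state = tf.nn.dynamic_rnn(cell, x, initial_state=initial_state)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Start from zeros, then thread the final state through each run.
    state = (np.zeros([batch_size, n_hidden], np.float32),
             np.zeros([batch_size, n_hidden], np.float32))
    for batch in np.random.randn(5, batch_size, n_steps, n_input).astype(np.float32):
        state = sess.run(final_state, {x: batch, c_in: state[0], h_in: state[1]})
```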
```
/usr/bin/python3.5 /home/vivanov/PycharmProjects/PLSTM/simplePhasedLSTM.py
Compiling RNN... DONE!
Compiling cost functions... DONE!
/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Calculating gradients... DONE!
Initializing variables... DONE!
+-----------+----------+------------+
| Epoch=0   | Cost     | Accuracy   |
+===========+==========+============+
| Train     | 0.354964 | 0.832031   |
+-----------+----------+------------+
| Test      | 0.137772 | 0.96875    |
+-----------+----------+------------+
```
Here's a full run -- it works.
I wonder how to keep track of the initial_states in a TensorFlow summary -- can you please help me with that? That would prove the state carry-over works correctly.
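One way that should work (a sketch assuming TF >= 0.12, where the `tf.summary` module exists; older versions spell it `tf.histogram_summary`): attach histogram summaries to the fed-in state tensors and watch them evolve in TensorBoard.

```python
import tensorflow as tf

# Histogram summaries of the carried-over state; if state really is
# preserved between runs, these distributions should drift away from
# the all-zeros initialization as training proceeds.
c_in = tf.placeholder(tf.float32, [32, 100], name='c_in')
h_in = tf.placeholder(tf.float32, [32, 100], name='h_in')
tf.summary.histogram('initial_state/c', c_in)
tf.summary.histogram('initial_state/h', h_in)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('/tmp/plstm_logs')

# In the training loop:
#   summary = sess.run(merged, feed_dict={c_in: ..., h_in: ...})
#   writer.add_summary(summary, step)
```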
```
+-----------+----------+------------+
| Epoch=0   | Cost     | Accuracy   |
+===========+==========+============+
| Train     | 0.354964 | 0.832031   |
+-----------+----------+------------+
| Test      | 0.137772 | 0.96875    |
+-----------+----------+------------+
+-----------+-----------+------------+
| Epoch=1   | Cost      | Accuracy   |
+===========+===========+============+
| Train     | 0.137392  | 0.956641   |
+-----------+-----------+------------+
| Test      | 0.0635801 | 0.96875    |
+-----------+-----------+------------+
```
Hey! Very nice job. Can we make the initial state argument optional? I'd like to keep it so that people can also choose not to specify an initial state.
Sure, I'll do it in an hour -- just a flag that will reset it on every iteration.
Yeah actually I can do it now! I'll merge, update with this flag and add the summary for the initial state.
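Something like this sketch, presumably (a hypothetical helper, not the actual PR code): default the argument to `None` and fall back to the cell's zero state, so callers who don't pass anything get the usual stateless behaviour.

```python
import tensorflow as tf

def build_rnn(cell, inputs, batch_size, initial_state=None):
    # With no state supplied, start from zeros (ordinary stateless run);
    # otherwise resume from the state the caller carried over.
    if initial_state is None:
        initial_state = cell.zero_state(batch_size, tf.float32)
    return tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
```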
That's great -- looking forward to trying it out.
Trying out the change -- hit an error right at the end of training.