hanxiao opened this issue 6 years ago
Migrated from Disqus icemansina commented on 2017-08-17T09:08:42Z
Could you please kindly provide the whole ipython notebook for this tutorial? Thanks.
Migrated from Disqus Xitong YANG commented on 2017-08-22T23:41:30Z
Good post!
Migrated from Disqus Yunseong Hwang commented on 2017-09-07T03:41:29Z
It was really helpful!
Migrated from Disqus kaufmanuel commented on 2017-12-20T10:43:51Z
Thank you for posting this. One question: why do you use the loop_state variable explicitly? You could do exactly the same by just using output_ta, right? Or is there some benefit to using the loop state?
Migrated from Disqus Han Xiao commented on 2017-12-22T23:02:59Z
loop_state is stored in output_ta. You can store anything you want at each step, e.g. a sample or a loss. This information is collected in output_ta and returned as final_loop_state (the third return value) from raw_rnn. See: https://www.tensorflow.org/api_docs/python/tf/nn/raw_rnn. The output_ta can also be a tuple, storing multiple quantities of interest.
On the other hand, emit_ta is very restrictive: the size of the Tensor must be cell.output_size.
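The threading of loop_state described above can be illustrated with a minimal pure-Python sketch of raw_rnn's control flow (not actual TensorFlow code; the toy running-sum cell and all names here are illustrative assumptions). loop_fn is called once before the first step (with cell_output set to None) and once after every cell step; whatever it returns as its loop-state slot is passed back in on the next call and handed back at the end, mirroring final_loop_state:

```python
def raw_rnn_sketch(cell, loop_fn):
    """Mimic tf.nn.raw_rnn's control flow in plain Python:
    thread cell state and loop_state through loop_fn, collect emits,
    and return (emits, final_state, final_loop_state)."""
    time = 0
    # initialization call: cell_output is None signals time == 0
    finished, next_input, state, _, loop_state = loop_fn(time, None, None, None)
    emits = []
    while not finished:
        output, state = cell(next_input, state)  # one RNN step
        time += 1
        finished, next_input, state, emit, loop_state = loop_fn(
            time, output, state, loop_state)
        emits.append(emit)
    return emits, state, loop_state

def cell(x, state):
    """Toy 'cell': running sum; output equals the new state."""
    new_state = state + x
    return new_state, new_state

seq = [1.0, 2.0, 3.0]

def loop_fn(time, cell_output, cell_state, loop_state):
    if cell_output is None:               # initialization at time 0
        return (False, seq[0], 0.0, None, [])
    loop_state.append(cell_output ** 2)   # store a per-step "loss"
    finished = time >= len(seq)
    next_input = None if finished else seq[time]
    return (finished, next_input, cell_state, cell_output, loop_state)

emits, final_state, final_loop_state = raw_rnn_sketch(cell, loop_fn)
# emits == [1.0, 3.0, 6.0]; final_loop_state == [1.0, 9.0, 36.0]
```

Here the loop_state is just a Python list accumulating a per-step quantity (a squared output standing in for a loss); in real raw_rnn it would typically be a TensorArray, and it comes back untouched as the third return value.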
Migrated from Disqus kaufmanuel commented on 2017-12-23T16:11:28Z
I agree that loop_state is less restrictive than emit_ta. However, emit_ta also offers some flexibility: if you set emit_output to None in the initialization step of loop_fn, the expected size is indeed cell.output_size, as you say. But you can set it to a custom size different from cell.output_size in that initialization step. You can see how this works in the implementation of raw_rnn. I found this very useful for my case: I did not do any sampling, but just wanted to grab the output of the dense layer directly, so I set emit_output to the output dimension of the dense layer.
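The point about setting the emit structure at initialization can be sketched in plain Python (this is a mock, not TensorFlow; `dense_size`, the zero template, and the stand-in dense layer are illustrative assumptions). raw_rnn reads the emit value returned at time 0 and allocates emit_ta with that structure, so returning a template sized to the dense layer instead of None changes what emit_ta carries:

```python
def make_loop_fn(seq, dense_size):
    """Build a loop_fn whose init-step emit template has a custom size
    (dense_size) rather than defaulting to cell.output_size."""
    def loop_fn(time, cell_output, cell_state, loop_state):
        if cell_output is None:
            # initialization step: the emit template fixes the emit
            # structure for every subsequent step
            emit_template = [0.0] * dense_size
            return (False, seq[0], 0.0, emit_template, None)
        dense_out = [cell_output] * dense_size  # stand-in dense layer
        finished = time >= len(seq)
        next_input = None if finished else seq[time]
        return (finished, next_input, cell_state, dense_out, None)
    return loop_fn

loop_fn = make_loop_fn([1.0, 2.0], dense_size=3)
_, _, _, template, _ = loop_fn(0, None, None, None)
# emit structure fixed at init: 3 values per step, not cell.output_size
```

Every per-step emit then has the same shape as the template, which is what lets you grab the dense-layer output directly instead of the raw cell output.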
Migrated from Disqus Han Xiao commented on 2017-12-25T18:52:46Z
Good insight! Merry xmas and happy new year & happy hacking!
life saver
https://hanxiao.github.io/2017/08/16/Why-I-use-raw-rnn-Instead-of-dynamic-rnn-in-Tensorflow-So-Should-You-0/