In the Inputs section of the course there is a sentence I don't understand: "These models work a lot better if we feed the decoder our target sequence regardless of what its timesteps actually output in the training run."
(Why do we feed the decoder our target sequence? Is it for validation?)
"So unlike in the inference graph, we will not feed the output of the decoder to itself in the next timestep." Why, during training, do we not feed the decoder's output back to itself? The decoder is also an RNN cell, so there should be a recurrence, and it seems like it should feed its own output to itself at each step.
A similar sentence also appears in the project: "The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters)."
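For reference, the data-flow difference the quotes describe (teacher forcing during training vs. free-running decoding during inference) can be sketched as below. This is a minimal toy illustration, not the course's actual TensorFlow code; `decoder_step` is a hypothetical stand-in for an RNN cell, and the token arithmetic is made up purely so the loop structure is visible.

```python
def decoder_step(prev_token, state):
    # Hypothetical cell: a real RNN cell would compute logits from
    # prev_token and state; here we fake a deterministic "prediction".
    new_state = state + 1
    output_token = prev_token + 1
    return output_token, new_state

def train_decode(target_seq, start_token=0):
    """Training (teacher forcing): each step's INPUT is the ground-truth
    target token, shifted right, regardless of what the previous step
    actually output."""
    outputs, state = [], 0
    inputs = [start_token] + target_seq[:-1]  # shifted target sequence
    for tok in inputs:
        out, state = decoder_step(tok, state)
        outputs.append(out)
    return outputs

def infer_decode(length, start_token=0):
    """Inference: each step's input is the PREVIOUS step's own output,
    because no target sequence exists at test time."""
    outputs, state = [], 0
    tok = start_token
    for _ in range(length):
        tok, state = decoder_step(tok, state)
        outputs.append(tok)
    return outputs

print(train_decode([5, 6, 7]))  # inputs were [0, 5, 6] -> outputs [1, 6, 7]
print(infer_decode(3))          # inputs were [0, 1, 2] -> outputs [1, 2, 3]
```

Note that the recurrence over the hidden `state` exists in both loops; the only difference is where the next *input token* comes from. Feeding the true target during training keeps early mistakes from compounding across timesteps and makes training converge faster.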