hunkim opened this issue 8 years ago
In `def create_batches(self)` (util.py):

```python
ydata[:-1] = xdata[1:]
ydata[-1] = xdata[0]
```
The first line is fair enough. However, why do we need the second line? Say our data is "hello"; then

x = "hello", y = "elloh"

So when h is given we expect e (h->e), then e->l, and so on. But why o->h (`ydata[-1] = xdata[0]`)? Perhaps this hurts the trained model.

Did I miss something here? Or do you think that, since it is only one character, we can ignore it?
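To make the question concrete, here is a minimal sketch (assuming NumPy arrays, as the slicing syntax suggests) showing how the two lines produce the wraparound target:

```python
import numpy as np

# Hypothetical reproduction of the target construction in create_batches.
xdata = np.array(list("hello"))
ydata = np.copy(xdata)

ydata[:-1] = xdata[1:]  # shift left: each target is the next input character
ydata[-1] = xdata[0]    # wraparound: the last target is the FIRST input char

print("x =", "".join(xdata))  # x = hello
print("y =", "".join(ydata))  # y = elloh
```

The last assignment teaches the model the pair o->h, which only makes sense if the corpus is treated as circular; otherwise it is one spurious training pair per epoch.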