Closed xinsuinizhuan closed 4 years ago
I simply do that because, once the neural network has seen most of the image, it should be able to guess the value correctly.
Supposedly it helps the neural network train, but it is not a requirement, nor is it necessary.
I am puzzled:

1. In the mnist_test_recurrent example, you slice the input image into four parts and feed them to `forward` one quarter at a time, four times in total, and then call `network.back_propagation(outputs[j])` twice. Why is `back_propagation` called twice?

```cpp
int img_partitions = 4;
for (int i = 0; i < epochs; ++i) {
    std::cout << " current epoch: " << i << std::endl;
    for (int j = 0; j < samples/batch_size; j++) {
        for (int p = 0; p < img_partitions; ++p) {
            auto batch = inputs[j];
            auto index = BC::index(0, 784 * (p/(float)img_partitions));
            auto shape = BC::shape(784/4, batch_size);
```
2. When testing, in order to get the `hyps` mat, you first feed in the preceding 3/4 of the image, then feed in the remaining part to get `hyps`. This is different from the procedure in step 1.
3. I am very puzzled: when I use an LSTM for forecasting, should I slice the image into parts like this? Or can it be as simple as the mnist_test example?