HaeminJung closed this issue 3 years ago.
The screenshot suggests you are editing the code and recompiling it. Were you getting that error with the code untouched?
If you have been changing it, feel free to push it up to your fork as a new branch and I'll have a look.
The original code also produced the same error. I was just tweaking the iteration count and the printf's to make it easier to debug.
I am taking a look under the hood with gdb, but help would be much appreciated :)
I encountered the same issue. Here's the culprit:
```diff
--- a/neural_network.c
+++ b/neural_network.c
@@ -120,6 +120,7 @@ float neural_network_training_step(mnist_dataset_t * dataset, neural_network_t *
     memset(&gradient, 0, sizeof(neural_network_gradient_t));
     // Calculate the gradient and the loss by looping through the training set
     for (i = 0, total_loss = 0; i < dataset->size; i++) {
         total_loss += neural_network_gradient_update(&dataset->images[i], network, &gradient, dataset->labels[i]);
     }
@@ -128,7 +129,7 @@ float neural_network_training_step(mnist_dataset_t * dataset, neural_network_t *
     for (i = 0; i < MNIST_LABELS; i++) {
         network->b[i] -= learning_rate * gradient.b_grad[i] / ((float) dataset->size);
-        for (j = 0; j < MNIST_IMAGE_SIZE + 1; j++) {
+        for (j = 0; j < MNIST_IMAGE_SIZE; j++) {
             network->W[i][j] -= learning_rate * gradient.W_grad[i][j] / ((float) dataset->size);
         }
     }
```
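For anyone wondering why that `+ 1` causes the crash: assuming the weight matrix is declared with exactly `MNIST_IMAGE_SIZE` columns (a simplified sketch below, not the repository's exact header), the old loop bound writes one float past the end of each row of `W`:

```c
#define MNIST_LABELS     10
#define MNIST_IMAGE_SIZE (28 * 28)

/* Assumed layout of the network struct (sketch, not copied from the repo). */
typedef struct {
    float b[MNIST_LABELS];
    float W[MNIST_LABELS][MNIST_IMAGE_SIZE];
} neural_network_t;

/* With the old bound (j < MNIST_IMAGE_SIZE + 1), the final iteration writes
 * W[i][MNIST_IMAGE_SIZE]. For i < MNIST_LABELS - 1 that silently corrupts
 * W[i + 1][0]; for the last row it writes past the end of the struct. If the
 * struct is a stack variable (e.g. a local in main), that write can land on
 * the stack canary, and the stack protector then aborts with
 * "stack smashing detected" when the owning function returns. */
```

That would also explain the symptom in the original report: the canary check only runs when the owning function returns, so training appears to finish normally and the program aborts on exit instead of terminating cleanly.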
Thanks for the report and the patch!
Hi Andrew.
I am getting a "stack smashing detected" error at runtime from a gcc-compiled build.
The code does everything it should, but does not terminate as it should. I have figured out that the code is not exiting the training for loop correctly.
This issue may be specific to my environment (Ubuntu 18.04.2).
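For reference, that message comes from gcc's stack-protector runtime check (`__stack_chk_fail`) rather than from the compiler itself. A minimal, self-contained sketch of the same class of bug (a hypothetical standalone file, not the project's code):

```c
/* repro.c - off-by-one write past a stack array.
 * Build: gcc -fstack-protector-all -O0 repro.c -o repro
 * Running it may abort with "*** stack smashing detected ***"; whether the
 * canary is actually hit depends on the stack layout, so for a deterministic
 * report compile with -fsanitize=address -g instead. */
#include <stdio.h>

#define N 16

int main(void)
{
    float w[N];

    /* Bug: the condition allows i == N, so the last iteration writes w[N],
     * one element past the end of the array. */
    for (int i = 0; i < N + 1; i++) {
        w[i] = 0.0f;
    }

    printf("done\n");
    return 0;
}
```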