While completing the SVM task of assignment one, I noticed an irregularity that I hope was not intended (if it was, it is quite evil!).

Note that in the screenshot below, on the right side of the screen, I forgot to add the regularization term to the loss, and yet the difference is still zero (which made me think I had implemented the loss correctly):

And now, properly calculating the loss:

the difference is also zero, as a sanity check. I only begin to see a difference when the regularization strength is 0.005, but that is much larger than the current value of 0.000005, so increasing the regularization parameter by that much may well have unintended consequences (as yet untested).
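The behavior is easy to reproduce with a small numeric sketch. The point is that at reg = 0.000005, the L2 regularization term `reg * sum(W * W)` is many orders of magnitude smaller than the data loss, so forgetting it changes the total loss by less than the displayed precision; at reg = 0.005 the term becomes large enough to show up. The function below is a hedged sketch of a vectorized multiclass SVM loss (not the assignment's exact code), with toy CIFAR-10-shaped data; all names and shapes are my own assumptions.

```python
import numpy as np

def svm_loss(W, X, y, reg):
    """Multiclass SVM (hinge) loss, returned as (data_loss, reg_loss).

    A vectorized sketch for illustration, not the assignment's exact code.
    W: (D, C) weights, X: (N, D) data, y: (N,) labels, reg: L2 strength.
    """
    num_train = X.shape[0]
    scores = X.dot(W)                                   # (N, C) class scores
    correct = scores[np.arange(num_train), y][:, None]  # correct-class scores
    margins = np.maximum(0, scores - correct + 1.0)     # hinge with delta = 1
    margins[np.arange(num_train), y] = 0                # ignore the correct class
    data_loss = margins.sum() / num_train
    reg_loss = reg * np.sum(W * W)                      # the term I had forgotten
    return data_loss, reg_loss

rng = np.random.default_rng(0)
W = rng.standard_normal((3073, 10)) * 0.0001            # small random weights
X = rng.standard_normal((500, 3073))
y = rng.integers(0, 10, size=500)

for reg in (0.000005, 0.005):
    data_loss, reg_loss = svm_loss(W, X, y, reg)
    print(f"reg={reg}: data loss {data_loss:.6f}, reg loss {reg_loss:.2e}, "
          f"ratio {reg_loss / data_loss:.2e}")
```

With small initial weights, the regularization loss at reg = 0.000005 is roughly ten orders of magnitude below the data loss, which explains why the reported difference stays exactly zero at typical print precision whether or not the term is included.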