Closed DOH-WLD0303 closed 2 years ago
That's very interesting; for my implementation, I get the same results as expected in the print statements:
Cost : 2.534819
Expected cost: 2.534819
-----------------------
Gradients:
[0.146561, -0.548558, 0.724722, 1.398003]
Expected gradients:
[0.146561, -0.548558, 0.724722, 1.398003]
It could be that you have done the unregularised version (which is indeed what is asked of you first) and already tried to run the test case? Notice that section 1.3.3 asks you to go back and change it to the regularised version before running the test case.
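For reference, a minimal sketch of what the regularised version typically looks like. This is an illustrative implementation, not the official assignment solution; the function name and signature are taken from the thread, and the convention of excluding the intercept term theta[0] from the penalty is the standard one for this exercise:

```python
import numpy as np

def lrCostFunction(theta, X, y, lambda_):
    """Regularised logistic regression cost and gradient (illustrative sketch).

    theta : (n,) parameter vector, X : (m, n) design matrix,
    y : (m,) labels in {0, 1}, lambda_ : regularisation strength.
    """
    m = y.size
    h = 1.0 / (1.0 + np.exp(-X.dot(theta)))  # sigmoid hypothesis

    # Cross-entropy cost plus L2 penalty; theta[0] (intercept) is NOT regularised
    J = (-y.dot(np.log(h)) - (1 - y).dot(np.log(1 - h))) / m \
        + (lambda_ / (2 * m)) * np.sum(theta[1:] ** 2)

    # Gradient: unregularised part for all terms, penalty added for theta[1:]
    grad = X.T.dot(h - y) / m
    grad[1:] += (lambda_ / m) * theta[1:]
    return J, grad
```

With the course's standard check values (theta = [-2, -1, 1, 2], lambda = 3 on a small 5x4 design matrix), a sketch like this reproduces the cost 2.534819 and gradients [0.146561, -0.548558, 0.724722, 1.398003] quoted above.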
Can you show what values you get for the example values they use in the graded submission?
import numpy as np

X_test = np.stack([np.ones(20),
                   np.exp(1) * np.sin(np.arange(1, 21)),
                   np.exp(0.5) * np.cos(np.arange(1, 21))], axis=1)
y_test = (np.sin(X_test[:, 0] + X_test[:, 1]) > 0).astype(float)

J, grad = lrCostFunction(np.array([0.25, 0.5, -0.5]), X_test, y_test, 0.1)
print('Cost : {:.6f}'.format(J))
print('-----------------------')
print('Gradients:')
print(' [{:.6f}, {:.6f}, {:.6f}]'.format(*grad))
@ackl, thanks for commenting. Strangely enough, when I came back to it today my results came out as expected for the given test after lrCostFunction. Something in my notebook environment must have been changed by accident, and a restart corrected it. Notebooks are great, but times like these are always frustrating 😅
Thanks again for taking the time to comment!
Hey folks,
I've encountered an issue with the initial test of the regularized logistic regression cost function (lrCostFunction). The cell that defines the test below has the wrong values. My implementation of lrCostFunction appears to be correct and passes when sent to the grader.
The existing code is below:
This should be (based on my submission to the official grader):