Closed ivikash closed 8 years ago
@mstampfer ?
Hi Vikash,
Happy to know you're using and enjoying the python implementation!
Yes, this is a potential issue. The Python code doesn't follow an exact line-for-line implementation of the Octave code. gradientFunction.py returns the gradient and costFunction.py returns the cost, so they're split into two files instead of one (costFunction.m). This was done because costFunction.py is imported into other exercises, and it simplifies things to split them apart.
Yes, gradientFunction.py should return the full gradient (all partial-derivative terms), not a single partial derivative. You can use vector notation to calculate this in one line (using the dot() function in numpy).
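A minimal sketch of the split described above (the function names follow the repo's file names, but the exact signatures here are an assumption, not the repo's actual code):

```python
import numpy as np

def sigmoid(z):
    # Logistic function, applied element-wise.
    return 1.0 / (1.0 + np.exp(-z))

def costFunction(theta, X, y):
    # Cross-entropy cost for logistic regression. Unlike the Octave
    # costFunction.m, this returns only the cost, not the gradient.
    m = y.size
    h = sigmoid(X.dot(theta))
    return (-y.dot(np.log(h)) - (1 - y).dot(np.log(1 - h))) / m

def gradientFunction(theta, X, y):
    # The FULL gradient vector, one entry per parameter, computed in
    # one line with the dot product: (1/m) * X' * (h - y).
    m = y.size
    return X.T.dot(sigmoid(X.dot(theta)) - y) / m
```

An optimizer such as scipy's fmin/minimize can then take costFunction as the objective and gradientFunction as the gradient, which plays the role fminunc plays in the Octave version.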
Hope this helps.
P.S. can you please use the Gitter-chat link on the github https://github.com/mstampfer/Coursera-Stanford-ML-Python page in order to post questions? That way others can benefit from them and as I follow this code on Gitter, I'm alerted quicker about them.
Marcel Stampfer mstampfer@axonconsulting.com +44 (0)777 568-1806
On 31 August 2016 at 17:34, Vikash Agrawal notifications@github.com wrote:
Hi,
First of all, thank you for this amazing repo and hard work 👍
I am facing issues in submitting Exercise 2 (Week 3). The cost function looks fine and gives the right output. Also, in the gradientFunction, do we have to return the partial derivatives or the final gradient descent result? When I look at the Octave files, the cost function returns both J and grad (the partial derivatives), which fminunc uses to run gradient descent. Nothing is getting submitted. Can you please help? I am already overdue 😭
Added a comment to ex2.py to clarify the reason for the difference between the Octave and Python versions in ex2.
Thanks. Were you able to submit solutions with this?
Yes, it submits correctly. After setting the PYTHONPATH and making sure you have the correct token, you should see a message like the one shown below.
Use token from last successful submission (coursera@axonconsulting.com)?
== Part Name | Score | Feedback
== --------- | ----- | --------
== Sigmoid Function | 5 / 5 | Nice work!
== Logistic Regression Cost | 30 / 30 | Nice work!
== Logistic Regression Gradient | 30 / 30 | Nice work!
== Predict | 5 / 5 | Nice work!
== Regularized Logistic Regression Cost | 15 / 15 | Nice work!
== Regularized Logistic Regression Gradient | 15 / 15 | Nice work!
== --------------------------------
==                                  | 100 / 100 |
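Setting up the path might look like the following (the clone location and the submit script's name are assumptions; adjust them to your setup):

```shell
# Assumed clone location -- change this to wherever you cloned the repo.
export PYTHONPATH="$HOME/Coursera-Stanford-ML-Python:$PYTHONPATH"
# Then run the submit script for the exercise, e.g.:
#   python ex2/submit.py    # script name/path is an assumption
```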
Are you getting a traceback?
Woot! Finally 👍
==
== Submitting Solutions | Programming Exercise logistic-regression
==
Use token from last successful submission? (Y/n):
==
== Part Name | Score | Feedback
== --------- | ----- | --------
== Sigmoid Function | 5 / 5 | Nice work!
== Logistic Regression Cost | 30 / 30 | Nice work!
== Logistic Regression Gradient | 30 / 30 | Nice work!
== Predict | 5 / 5 | Nice work!
== Regularized Logistic Regression Cost | 15 / 15 | Nice work!
== Regularized Logistic Regression Gradient | 15 / 15 | Nice work!
== --------------------------------
== | 100 / 100 |
==
I am really counting on this repo for my entire course and submissions 👍
I can confirm that all of the exercises submit correctly (with the correct solution code in place!). I've also added some additional exception handling and logging which might be useful. You might want to pull these changes in order to take advantage of them.