ageron / handson-ml2

A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in Python using Scikit-Learn, Keras and TensorFlow 2.
Apache License 2.0

Chapter 4 Exercise 12 - Math Latex display #93

Closed vidyabhandary closed 4 years ago

vidyabhandary commented 4 years ago

In Chapter 4 - Exercise: Batch Gradient Descent with early stopping for Softmax Regression, the math equations are not getting displayed correctly. I checked in Chrome, Edge and Firefox.

$J(\mathbf{\Theta}) =
-\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$

And the equation for the gradients:

$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left( \hat{p}_k^{(i)} - y_k^{(i)} \right) \mathbf{x}^{(i)}}$


I think it can be corrected with the following change (putting each equation on a single line inside display-math `$$ ... $$` delimiters):


So the equations we will need are the cost function:

$$J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$$

And the equation for the gradients:

$$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left( \hat{p}_k^{(i)} - y_k^{(i)} \right) \mathbf{x}^{(i)}}$$
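
For readers following along, here is a minimal NumPy sketch of how these two equations translate into code. This is not the notebook's actual solution; the names `X`, `Y_one_hot`, `Theta`, and the `epsilon` smoothing term are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability before exponentiating.
    exps = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exps / exps.sum(axis=1, keepdims=True)

def cost_and_gradients(X, Y_one_hot, Theta, epsilon=1e-7):
    # X: (m, n) inputs, Y_one_hot: (m, K) targets, Theta: (n, K) parameters.
    m = len(X)
    P_hat = softmax(X @ Theta)  # p_hat_k^(i) for every instance i and class k
    # Cross-entropy cost J(Theta); epsilon avoids log(0).
    cost = -np.mean(np.sum(Y_one_hot * np.log(P_hat + epsilon), axis=1))
    # Gradients: column k is the gradient with respect to theta^(k).
    gradients = X.T @ (P_hat - Y_one_hot) / m
    return cost, gradients
```

With `Theta` of shape `(n_features, n_classes)` and one-hot targets, `gradients` has one column per class, matching the per-class gradient $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta})$ above.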


Thank you for your awesome book.

vidyabhandary commented 4 years ago

Realized the display is mangled only when browsing the notebook on GitHub; it is displayed correctly in the Jupyter environment.

ageron commented 4 years ago

Hi @vidyabhandary ,

Thanks for your feedback, and thanks as well for your very kind words, I'm glad you enjoy my book! :)

Indeed, GitHub's notebook viewer is not great when it comes to displaying equations. I don't know what they're doing under the hood, but it's not working. That's why I have this warning on the repo's home page:

[screenshot of the warning on the repo's home page]

Pretty much any viewer other than GitHub's should display things correctly, as you've noticed. You can use Colab, Jupyter.org's viewer, etc.

Hope this helps.