LxMLS / lxmls-guide

Lisbon Machine Learning Summer School Lab Guide

Day 0 #74

Closed · samiroid closed this issue 5 years ago

samiroid commented 9 years ago

Exercises 0.10 and 0.14 seem to be the same (Galton dataset). Also, exercises 0.12 and 0.13 could be merged. Is there any reason to have two slightly different implementations of gradient descent? (pages 23 and 25)

Last year there were some problems when using IPython to solve the exercises, because the modules have to be reloaded whenever a change is made. I found that we can avoid this by running these IPython magic commands: `%load_ext autoreload` and `%autoreload 2`. Maybe we could add this to the guide.
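
A minimal sketch of what the guide could show (these are the standard IPython autoreload magics; where exactly to put them in the guide is open):

```python
# Run these once at the start of an IPython session.
%load_ext autoreload
%autoreload 2

# From now on, edits to imported modules (e.g. the lxmls toolkit code)
# are picked up automatically, without restarting the interpreter.
```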

ChristopherBrix commented 5 years ago

> Exercises 0.10 and 0.14 seem to be the same (Galton dataset).

This has already been resolved.

> Also, exercises 0.12 and 0.13 could be merged. Is there any reason to have two slightly different implementations of gradient descent? (pages 23 and 25)

Yes, this is confusing. I will try to merge them.

> Last year there were some problems when using IPython to solve the exercises, because the modules have to be reloaded whenever a change is made. I found that we can avoid this by running these IPython magic commands: `%load_ext autoreload` and `%autoreload 2`. Maybe we could add this to the guide.

We should definitely do that. This was still an issue last year.

ChristopherBrix commented 5 years ago

On second thought, the first gradient descent algorithm deals with a function of a single scalar, while the second deals with a two-dimensional function that requires two inputs (x and y). One could define an abstract algorithm that handles functions with arbitrary inputs, but that is probably too confusing at this point.

So maybe it would be better to keep these two algorithms separate, and just make them more similar (and maybe highlight their difference).
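
Roughly, keeping them separate but visually parallel could look like this (a sketch with hypothetical function names, not the guide's actual code):

```python
def gradient_descent_1d(x, grad, lr=0.1, steps=100):
    """Gradient descent for a scalar function f(x)."""
    for _ in range(steps):
        x = x - lr * grad(x)              # one scalar update per step
    return x


def gradient_descent_2d(x, y, grad, lr=0.1, steps=100):
    """Gradient descent for a function f(x, y) of two variables."""
    for _ in range(steps):
        gx, gy = grad(x, y)               # two partial derivatives per step
        x, y = x - lr * gx, y - lr * gy
    return x, y


# Example: minimize f(x) = x**2 and g(x, y) = x**2 + y**2.
print(gradient_descent_1d(5.0, lambda x: 2 * x))                    # ~0.0
print(gradient_descent_2d(3.0, 4.0, lambda x, y: (2 * x, 2 * y)))   # ~(0.0, 0.0)
```

The loop body is then the only place the two versions differ, which the guide could highlight explicitly.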

ChristopherBrix commented 5 years ago

This was resolved as part of PR https://github.com/LxMLS/lxmls-toolkit/pull/146