ddbourgin / numpy-ml

Machine learning, in numpy
https://numpy-ml.readthedocs.io/
GNU General Public License v3.0

Feature add regularizer base class #21

Open WuZhuoran opened 5 years ago

WuZhuoran commented 5 years ago

This pull request closes #20.

- What I did

  1. Implemented the Regularizer class and basic documentation.

- How I did it

  1. Followed the Keras approach to regularizers.

- How to verify it

  1. No tests for the regularizer have been implemented yet.
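A Keras-style regularizer design, as described above, might be sketched roughly as follows. This is a minimal illustration, not the PR's actual code; the class and method names (`Regularizer`, `loss`, `grad`) are hypothetical:

```python
import numpy as np

class Regularizer:
    """Hypothetical base class for per-weight regularization penalties."""

    def loss(self, W):
        """Return the scalar penalty for a weight matrix W."""
        raise NotImplementedError

    def grad(self, W):
        """Return the gradient of the penalty with respect to W."""
        raise NotImplementedError


class L2Regularizer(Regularizer):
    """L2 (weight decay) penalty: (lam / 2) * ||W||_F^2."""

    def __init__(self, lam=0.01):
        self.lam = lam

    def loss(self, W):
        # scalar penalty added to the network loss
        return 0.5 * self.lam * np.sum(W ** 2)

    def grad(self, W):
        # contribution added to the weight gradient during backprop
        return self.lam * W
```

Subclassing a common base like this lets layers hold a regularizer object and query both the penalty and its gradient through one interface.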

This pull request adds a new feature to numpy-ml. @ddbourgin, please take a look.

ddbourgin commented 5 years ago

Sorry to let this sit - I need to think about the best way to include regularization on a per-layer basis. The loss objects will need access to the regularization penalties at each layer so they can add them to their formulation, and we'll also need to adjust the appropriate layer gradients during each stage of backprop. This isn't particularly bad; we just need to make sure the proper bookkeeping is in place. I'm hoping I'll have some time later in the week to work on this.
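The bookkeeping described here could look roughly like the following. This is a speculative sketch, not numpy-ml's design: the layer attribute names (`W`), the inline L2 penalty, and both helper functions are illustrative assumptions.

```python
import numpy as np
from types import SimpleNamespace

# Hypothetical per-layer bookkeeping: the loss sums each layer's
# penalty, and backprop adds the penalty gradient to each dW.

def l2_loss(lam, W):
    # scalar L2 penalty for one layer's weights
    return 0.5 * lam * np.sum(W ** 2)

def l2_grad(lam, W):
    # gradient of the L2 penalty with respect to W
    return lam * W

def regularized_loss(data_loss, layers, lam=0.01):
    # the loss object adds every layer's penalty to the data loss
    return data_loss + sum(l2_loss(lam, layer.W) for layer in layers)

def regularized_weight_grad(dW, layer, lam=0.01):
    # during backprop, each layer's weight gradient is adjusted
    return dW + l2_grad(lam, layer.W)
```

The key point is that the penalty touches two places: once in the forward loss computation and once in each layer's weight gradient, which is why both the loss objects and the layers need access to the regularizer.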

Also, a general comment: I'd prefer that we avoid directly copying Keras / Torch / tf code or documentation when possible (I realize that for very simple functions like these, there's really only a single way to write them, so obviously use your discretion). While it's fine (and in fact, encouraged) to compare our implementations against these gold standards, I think we should focus on doing our own work when it comes to implementing and documenting the behavior of the algorithms. A big goal of the project is to supplement packages like Keras by providing more explicit and transparent discussion and documentation of the algorithms.

WuZhuoran commented 5 years ago

That sounds good. Except for simple functions, if I want to add a major new feature, we should discuss it before coding. If you have any suggestions or instructions, please let me know.