Right now we only have an overview of the hyperparameters where the authors' reference implementation differs from the paper. I wonder if we should put a table of every hyperparameter together with its value at the top of the API reference. This way we could give a far better overview.
If we go for this, it might be worthwhile to implement a HyperParameters class for each paper, from which all functions and classes can pull their default values.
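A minimal sketch of what such a class could look like, assuming a frozen dataclass per paper (the class name, parameter names, and values below are hypothetical placeholders, not taken from any paper):

```python
from dataclasses import dataclass, fields

# Hypothetical example: names and default values are illustrative only.
@dataclass(frozen=True)
class ExamplePaperHyperParameters:
    content_weight: float = 1e0
    style_weight: float = 1e3
    num_steps: int = 500

    def as_rows(self):
        """Return (name, value) pairs, e.g. for rendering a docs table."""
        return [(field.name, getattr(self, field.name)) for field in fields(self)]

# Functions and classes could pull their defaults from a shared instance:
hyper_parameters = ExamplePaperHyperParameters()
print(hyper_parameters.as_rows())
```

The frozen dataclass keeps the defaults immutable and in one place, and `as_rows()` could feed the proposed table at the top of the API reference.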
That's a good idea. Would it also be good to document other paper-relevant attributes in this list? For example, the optimizer used, the dataset on which the training was performed, etc.
Cc @jbueltemeier