ylwu-amzn opened 2 years ago
The SGD algorithm names have a few typos: there's no algorithm called `ADA`, and `RMS_DROP` should be `RMS_PROP`.
Thanks @Craigacp, will fix the typos.
Is ADA not supported for SGD? Can you also help review this code https://github.com/opensearch-project/ml-commons/blob/main/ml-algorithms/src/main/java/org/opensearch/ml/engine/algorithms/regression/LinearRegression.java#L122-L144?
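Since the linked `LinearRegression.java` isn't reproduced in this thread, here is a hypothetical sketch (none of these identifiers are taken from the actual ml-commons code) of how optimizer names from user parameters are typically mapped to an enum, including tolerating the misspelled legacy names while the docs and code are being fixed:

```java
// Hypothetical sketch: normalising optimiser names to an enum.
// The enum constants and method names are illustrative, not ml-commons API.
public class OptimiserNames {
    enum OptimizerType { SIMPLE_SGD, SGD_MOMENTUM, ADA_GRAD, ADAM, RMS_PROP }

    static OptimizerType fromString(String name) {
        switch (name.toUpperCase()) {
            // Accept the misspelled names this issue is about, mapped to
            // the algorithms they were meant to denote.
            case "ADA":      return OptimizerType.ADA_GRAD; // "ADA" was a typo for AdaGrad
            case "RMS_DROP": return OptimizerType.RMS_PROP; // "RMS_DROP" was a typo for RMSProp
            default:         return OptimizerType.valueOf(name.toUpperCase());
        }
    }

    public static void main(String[] args) {
        System.out.println(fromString("RMS_DROP")); // normalised to RMS_PROP
    }
}
```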
In this sentence:

> The model supports the linear optimizer in training, including popular approaches like Linear Decay, SQRT_DECAY, [ADA](https://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf), [ADAM](https://tribuo.org/learn/4.1/javadoc/org/tribuo/math/optimisers/Adam.html), and [RMS_DROP](https://tribuo.org/learn/4.1/javadoc/org/tribuo/math/optimisers/RMSProp.html).
`ADA` should be `ADAGRAD`, and the link to the AdaGrad paper is broken. I think the sentence should probably be:
> The model supports standard gradient optimizers like SGD with Momentum, AdaGrad, Adam & RMSProp.
with appropriate links to the literature.
We basically mirrored the useful optimizers from PyTorch/TensorFlow into Tribuo to provide its gradient optimizers. We haven't added any for a few years, but there aren't really any new ones worth adding for the models we train with gradient descent.
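For reference, the update rule behind the correctly named RMSProp optimizer (the algorithm `RMS_DROP` was a typo for) is simple to state. This is a minimal self-contained sketch of that rule minimizing f(w) = w², not Tribuo's actual implementation; the learning rate and decay values are arbitrary:

```java
// Minimal sketch of the RMSProp update rule:
//   cache = rho * cache + (1 - rho) * grad^2
//   w    -= lr * grad / (sqrt(cache) + eps)
public class RMSPropSketch {
    public static void main(String[] args) {
        double lr = 0.1, rho = 0.9, eps = 1e-8; // illustrative hyperparameters
        double w = 5.0;     // parameter being optimised, f(w) = w^2
        double cache = 0.0; // running average of squared gradients
        for (int step = 0; step < 100; step++) {
            double grad = 2.0 * w;                        // df/dw
            cache = rho * cache + (1 - rho) * grad * grad;
            w -= lr * grad / (Math.sqrt(cache) + eps);
        }
        System.out.println(w); // moves toward the minimum at 0
    }
}
```

The per-coordinate scaling by the running gradient magnitude is what distinguishes RMSProp (and AdaGrad/Adam) from plain SGD.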
Thanks @Craigacp for your suggestion. Will update the doc.
We now have documentation for the APIs: https://opensearch.org/docs/latest/ml-commons-plugin/api/. We still need to add documentation explaining which algorithms we support and, for each algorithm, its parameters, its function, and how to use it.