jankrepl / deepdow

Portfolio optimization with deep learning.
https://deepdow.readthedocs.io
Apache License 2.0

Clarification: the lower the metric value, the better? #104

Closed: turmeric-blend closed this issue 3 years ago

turmeric-blend commented 3 years ago

Hi, deepdow's loss functions are implemented in such a way that

the lower the value of the loss the better

I just want to clarify: since the metrics used during evaluation are the same loss functions, do they follow the same convention, i.e. the lower the metric the better?

I want to confirm this because I had been reading CumulativeReturn and SharpeRatio as higher = better and StandardDeviation as lower = better. So in fact CumulativeReturn, SharpeRatio, and StandardDeviation are all lower = better?

jankrepl commented 3 years ago

Yes, I confirm it. It is designed this way so that one can take any Loss from deepdow.losses and directly use it for gradient descent (without worrying about the sign).

I would encourage you to check the API documentation (= docstrings of the classes) where it is mentioned what the quantity represents.

- https://deepdow.readthedocs.io/en/latest/source/api/deepdow.losses.html#deepdow.losses.SharpeRatio: "Negative Sharpe ratio."
- https://deepdow.readthedocs.io/en/latest/source/api/deepdow.losses.html#deepdow.losses.CumulativeReturn: "Negative cumulative returns."
- https://deepdow.readthedocs.io/en/latest/source/api/deepdow.losses.html#deepdow.losses.StandardDeviation: "Standard deviation."
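
A minimal sketch of what this looks like in practice, assuming the usual `loss(weights, y)` call pattern with weights of shape `(n_samples, n_assets)` and targets of shape `(n_samples, n_channels, horizon, n_assets)`; the shapes and default constructor arguments here are illustrative assumptions, not taken from the thread:

```python
import torch
from deepdow.losses import SharpeRatio

# Illustrative (assumed) dimensions.
n_samples, n_channels, horizon, n_assets = 32, 1, 10, 5

logits = torch.randn(n_samples, n_assets, requires_grad=True)
weights = torch.softmax(logits, dim=1)  # long-only allocations summing to 1
y = 0.01 * torch.randn(n_samples, n_channels, horizon, n_assets)  # future returns

loss_fn = SharpeRatio()            # per the docstring: negative Sharpe ratio
loss = loss_fn(weights, y).mean()  # lower = better, so it is minimized directly

loss.backward()  # usable as-is for gradient descent, no sign flipping needed

# To report a conventional (higher = better) Sharpe ratio, negate the value.
print(-loss.item())
```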

turmeric-blend commented 3 years ago

Okay, thanks. I was worried that when evaluating in Run there might have been a part I missed where the losses are inverted back to the original convention (e.g. maximize Sharpe ratio), such that the metric plot should be read as higher Sharpe ratio = better. Thanks for the clarification!