turmeric-blend closed this issue 3 years ago.
Yes, I confirm it. It is designed this way so that one can take any loss from deepdow.losses and use it directly for gradient descent, without worrying about the sign.
I would encourage you to check the API documentation (i.e. the docstrings of the classes), where it is stated what each quantity represents.
https://deepdow.readthedocs.io/en/latest/source/api/deepdow.losses.html#deepdow.losses.SharpeRatio
- SharpeRatio: negative Sharpe ratio.
- CumulativeReturn: negative cumulative returns.
- StandardDeviation: standard deviation.
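To illustrate the convention, here is a minimal sketch in plain Python (not deepdow's actual implementation; the function names are hypothetical): the "loss" version of the Sharpe ratio is simply its negation, so minimising the loss is equivalent to maximising the ratio.

```python
import statistics

def sharpe_ratio(returns, rf=0.0):
    """Classic Sharpe ratio: mean excess return divided by its standard deviation."""
    excess = [r - rf for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

def sharpe_ratio_loss(returns, rf=0.0):
    """Loss convention: negate the Sharpe ratio so that lower = better.
    A gradient-descent optimiser minimising this loss maximises the Sharpe ratio."""
    return -sharpe_ratio(returns, rf)

# A strategy with positive average returns has a positive Sharpe ratio,
# and therefore a NEGATIVE loss value.
returns = [0.01, 0.02, -0.005, 0.015]
print(sharpe_ratio_loss(returns))
```

This is why the metric plots should be read as "lower is better" for every loss, including SharpeRatio and CumulativeReturn.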
Okay, thanks! I was worried that when evaluating in Run there might have been a part I missed where you inverted the losses back to the original concept (e.g. maximise Sharpe ratio), such that the metric plot should be read as higher Sharpe ratio = better. Thanks for the clarification!
Hi, since deepdow's loss functions are implemented so that lower is better, and the metrics use the same loss functions during evaluation, I just want to clarify: do the metrics follow the same convention, i.e. the lower the metric the better?
I want to confirm this because I kept reading CumulativeReturn and SharpeRatio as higher = better, and StandardDeviation as lower = better. So instead, CumulativeReturn, SharpeRatio and StandardDeviation are all actually lower = better?