Evizero opened this issue 8 years ago
Would that make softmax `Multivariate{LogitMarginLoss}`? Implementing that seems like a reasonable place to start. If I'm not mistaken it would look a bit like this?
```julia
function value{T<:Number}(::Softmax, target::Int, output::AbstractVector{T})
    return logsumexp(output) - output[target]
end
```
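If that definition is right, the gradient should follow from the usual softmax identity. A minimal sketch, assuming the same hypothetical `Softmax` type and the `logsumexp` below; the `deriv` name just mirrors `value`:

```julia
# d/do_k [logsumexp(o) - o_y] = softmax(o)_k - 1(k == y)
function deriv{T<:Number}(::Softmax, target::Int, output::AbstractVector{T})
    lse = logsumexp(output)
    g = [exp(o - lse) for o in output]  # softmax, computed stably
    g[target] -= one(T)
    return g
end
```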
We should have MLDataUtils or somewhere define log-sum-exp with the standard trick:
"""
logsumexp(x)
Computes `log(sum(exp(x)))` of a vector `x` in a numerically stable manner
"""
function logsumexp{T<:Number}(x::AbstractVector{T})
m = maximum(x) # subtracting m prevents overflow
sumexp = zero(T)
for i in eachindex(x)
sumexp += exp(x[i]-m)
end
return log(sumexp) + m
end
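A quick REPL sanity check of why subtracting the maximum matters (illustrative values):

```julia
julia> x = fill(1000.0, 3);

julia> log(sum(exp.(x)))  # naive version overflows
Inf

julia> logsumexp(x)       # stable version: 1000 + log(3)
1001.0986122886681
```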
All of this seems a bit different from (incompatible with?) what you had in mind for the multivariate hinge loss. Could you expand on your thoughts there?
Actually, that sounds like a good idea. The multinomial loss doesn't require the outputs to be produced with `sigmoid(..)` as far as I know, so it should work with a linear or affine prediction function.
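To illustrate, one could feed raw affine scores straight into the hypothetical `value` sketch from above; the normalization happens inside via `logsumexp`:

```julia
W, b = randn(3, 5), randn(3)  # 3 classes, 5 features (illustrative)
x = randn(5)
scores = W*x + b              # raw affine scores, no sigmoid applied
value(Softmax(), 2, scores)   # loss for true class 2
```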
> We should have MLDataUtils or somewhere define log-sum-exp with the standard trick
If it is not in StatsBase yet, then we should define it in LearnBase in my opinion
Found it!
We could also consider a shorter `Mv{L1HingeLoss}`, but maybe 2 letters is cutting it a bit short.
Partially done. For distance-based losses this can now be achieved with the average modes. No support for multinomial classification losses yet, though.
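If I remember the current API correctly, that looks something like this for distance-based losses (values illustrative):

```julia
using LossFunctions
targets = [1.0, 0.5, -0.5]
outputs = [0.8, 0.4, -0.3]
value(L2DistLoss(), targets, outputs, AvgMode.Mean())  # mean of the elementwise losses
value(L2DistLoss(), targets, outputs, AvgMode.Sum())   # total instead of mean
```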
https://github.com/madeleineudell/LowRankModels.jl has implementations of several multivariate loss functions for ordinal and categorical variables (OvALoss, BvSLoss, etc.). Those implementations should probably be moved over here since they're more versatile than only being used in one package.
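If I understand the LowRankModels formulation correctly, OvALoss roughly amounts to evaluating a binary margin loss once per class, treating the target class as +1 and every other class as -1. A rough sketch in this package's conventions (type and signature hypothetical):

```julia
immutable OvALoss{L<:MarginLoss} <: SupervisedLoss
    bin::L  # the underlying binary margin loss
end

function value{T<:Number}(loss::OvALoss, target::Int, output::AbstractVector{T})
    s = zero(T)
    for j in eachindex(output)
        agreement = j == target ? one(T) : -one(T)
        s += value(loss.bin, agreement, output[j])
    end
    return s
end
```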
I'd like to start implementing the multi-category loss functions like MultinomialLoss, Multiclass SVM, and one-vs-all, but I'm not sure what the convention should be or how they play with LabelEncodings (i.e. the output is some kind of vector, but is the target encoded in a one-hot scheme or as an index?). LowRankModels uses something akin to the Indices encoding scheme for targets, but I think this would be a productive discussion to have.
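To make the encoding question concrete, the two candidate conventions would dispatch differently (names hypothetical, reusing `logsumexp` from above):

```julia
# (a) index-encoded target, as LowRankModels does it
function value{T<:Number}(::MultinomialLoss, target::Int, output::AbstractVector{T})
    return logsumexp(output) - output[target]
end

# (b) one-hot encoded target
function value{T<:Number}(::MultinomialLoss, target::AbstractVector{Bool}, output::AbstractVector{T})
    return logsumexp(output) - dot(target, output)
end
```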
cc: @madeleineudell
@mihirparadkar @kvmanohar22 any progress regarding the multiclass losses? I currently need them for a research paper. I can work on it right away if you guys allow.
One bridge we have to cross sooner or later is multiclass problems. There are multiclass extensions or formulations for a couple of the losses that we have.

A particularly interesting example is the hinge loss. In a multinomial setting the targets could be indices, such as `[1,2,3,1]`, which would violate our current idea of a target domain being `[-1,1]`.

One solution could be to think of the multivariate version as separate from the binary one. For example, we could lift it like this: `Multivariate{L1HingeLoss}`. This could then have its own target domain and other properties. It would also avoid potential ambiguities when it comes to dispatching on the types of `targets` and `output`. I am not sure we could be certain of dealing with a multivariate vs. binary case just based on the parameter types.
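For concreteness, the lifted version might look something like this. A sketch only; I am using the Crammer-Singer formulation as the multiclass extension of `L1HingeLoss`:

```julia
immutable Multivariate{L<:SupervisedLoss} <: SupervisedLoss end

# Crammer-Singer multiclass hinge: max(0, 1 + max_{j != y} o_j - o_y)
function value{T<:Number}(::Multivariate{L1HingeLoss}, target::Int, output::AbstractVector{T})
    m = typemin(T)
    for j in eachindex(output)
        j == target && continue
        m = max(m, output[j])
    end
    return max(zero(T), one(T) + m - output[target])
end
```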