Closed ahwillia closed 9 years ago
Just a quick update on this. This seems to be a more general bug -- not just specific to k-means. If you try to fit a quadratically regularized PCA with `k = 3`, for example, you will get something like the following for `X`:
```
-1.41771   0.141045  0.371837  …  1.43168   0.188093  -0.0554744
-0.284602 -0.363358  0.436215  …  0.969497 -0.551486   0.0556582
 1.0       1.0       1.0       …  1.0       1.0        1.0
```
Obviously, the last row is not supposed to be all ones...
The reason for this is that the default arguments to GLRM were (incorrectly) changed in an earlier PR. (Right now it's automatically adding an offset and scaling the losses.) I'm changing them back now and will push a patch shortly.
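In the meantime, one workaround is to bypass the defaults entirely by passing the regularizers and keyword arguments explicitly when constructing the GLRM. This is a minimal sketch, not the patched code -- it assumes the constructor accepts `offset` and `scale` keywords, so check the signature in your version:

```julia
using LowRankModels

A = randn(100, 50)
k = 3

# Quadratically regularized PCA. Passing the regularizers and the
# `offset`/`scale` keywords explicitly (an assumption about the
# constructor here) means the fit does not depend on whatever the
# current defaults happen to be.
glrm = GLRM(A, QuadLoss(), QuadReg(0.1), QuadReg(0.1), k;
            offset = false, scale = false)
X, Y, ch = fit!(glrm)
```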
Ok! This should be closed by 3c7cfd91e351ca52a3a9c4ef5125f591dc21d8a1
Edit: Reverting back to this commit fixes things: https://github.com/madeleineudell/LowRankModels.jl/commit/ed9e68064a0c32b4686a5dd5d45fc578ef39a4c4
I'm not sure when this happened, but the `unitonesparse()` regularizer doesn't seem to be working correctly. All the columns of `X` have two nonzero elements (both equal to `1.0`). Interestingly, the objective function is not infinite:
Yet the `evaluate` function correctly returns `Inf` when a column of `X` is passed with the appropriate regularizer:
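For reference, this is the behavior I'd expect from the regularizer evaluated on its own -- a sketch, assuming `evaluate` is exported and `unitonesparse()` returns the unit-one-sparse constraint described above:

```julia
using LowRankModels

r = unitonesparse()   # columns are constrained to be unit one-sparse vectors

# A valid column (exactly one nonzero entry, equal to 1.0) should incur
# no penalty, while a column with two nonzero entries -- like the bad
# columns of X above -- should be penalized with Inf.
evaluate(r, [0.0, 1.0, 0.0])   # expect 0.0
evaluate(r, [1.0, 1.0, 0.0])   # expect Inf
```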