ibayer opened this issue 8 years ago
I'm seeing some discrepancies between libfm and fastfm with movielens. Before diving into my observations can you confirm the following are equivalent:
```shell
./libFM -train ml1m_train.svml -test ml1m_test.svml -task r -dim '1,1,10' -iter 1000 -method mcmc
```
```python
import numpy as np
from sklearn.datasets import load_svmlight_file
from sklearn.metrics import mean_squared_error
from fastFM import mcmc

X_train, y_train = load_svmlight_file("ml1m_train.svml")
X_test, y_test = load_svmlight_file("ml1m_test.svml")

# n_iter=0 only initializes the sampler; iterations are added below.
fm = mcmc.FMRegression(n_iter=0, rank=10)
fm.fit_predict(X_train, y_train, X_test)

for i in range(1000):
    y_pred = fm.fit_predict(X_train, y_train, X_test, n_more_iter=1)
    # Clip predictions to the 1-5 rating range.
    y_pred[y_pred > 5] = 5
    y_pred[y_pred < 1] = 1
    print(i, np.sqrt(mean_squared_error(y_test, y_pred)))
```
I don't see a difference, but please check that the init_stdev parameter is the same in both runs.
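For context, init_stdev controls the standard deviation of the random normal draw that initializes the factor matrix (libFM's -init_stdev flag and fastFM's init_stdev parameter play the same role). A minimal numpy sketch of that initialization; the function and array names here are illustrative, not fastFM internals:

```python
import numpy as np

def init_factors(n_features, rank, init_stdev=0.1, seed=123):
    """Draw a factor matrix V with entries ~ N(0, init_stdev^2),
    analogous to what both libraries do before sampling starts."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=init_stdev, size=(n_features, rank))

V = init_factors(n_features=100, rank=10, init_stdev=0.1)
print(V.shape)              # (100, 10)
print(round(float(V.std()), 2))  # empirical std is close to init_stdev
```

If the two runs start from initializations with very different scales, their early iterations (and short-run RMSE curves) will differ even when everything else matches.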
Please have a look at my second comment above; it could explain the small differences you observed.
Hi, Immanuel! I was comparing different LibFM implementations (I was testing MCMC for LibFM and FastFM in particular).
Unfortunately, the results for fastFM are not very encouraging: http://arogozhnikov.github.io/2016/02/15/TestingLibFM.html
Then I found this thread; honestly, I wasn't thinking about clipping values. That trick may give some improvement in regression, but libFM also easily wins in classification.
Maybe you know the reason? Does fastFM use different priors, or something else?
@arogozhnikov BTW, you need standardization especially for pyFM https://github.com/coreylynch/pyFM/issues/3#issuecomment-99513662
@chezou all the features are dummy (0-1) and the table should be sparse. No, for the tests I am running, this step is neither needed nor possible.
@arogozhnikov Great comparison, I have a few suggestions that could make the evaluation even more useful for other people.
As it stands, I'm not convinced that libFM is faster and performs better than fastFM for MCMC regression. I have done less comprehensive comparisons for MCMC classification, but the algorithm / priors should be the same in both libraries. I would be interested to look into it if you can clearly show that libFM systematically dominates fastFM for MCMC classification.
@ibayer Thanks for comments.
@arogozhnikov It's possible to use a random seed with libFM.
"seed", "integer value, default=None"
https://github.com/srendle/libfm/blob/master/src/libfm/libfm.cpp#L93
Why do I get better results with libfm?
Be careful if you use a regression model with a categorical target, such as the 1-5 star ratings of the movielens dataset.
libFM automatically clips the prediction values to the highest / lowest target value seen in the training data. This makes sense if you predict ratings with a regression model and evaluate with RMSE: for example, it's certainly better to predict a 5-star rating than the raw regression score whenever that score is > 5. With fastFM you have to do the clipping yourself, because clipping is not always a good idea.
But it's easy to do if you need it.
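The clipping step is a one-liner with numpy. The 1 and 5 bounds below assume the movielens rating scale; in general you would use the min/max of y_train, which is what libFM does internally:

```python
import numpy as np

# Hypothetical raw predictions from a regression model on 1-5 star ratings.
y_pred = np.array([0.3, 2.7, 4.1, 5.8])

# Clip to the rating range, mirroring libFM's automatic behavior.
y_pred_clipped = np.clip(y_pred, 1.0, 5.0)
print(y_pred_clipped)  # [1.  2.7 4.1 5. ]
```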
Why do I not get exactly the same results with fastFM as with libFM?
FMs are non-linear models that use random initialization, so the solver can end up in a different local optimum when the initialization changes. You can set a random seed in fastFM to make individual runs reproducible, but that doesn't help when comparing results across different implementations. You should therefore always expect small differences between fastFM and libFM predictions.
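A minimal numpy illustration of the point: two runs started from the same seed get identical initializations and are therefore comparable, while a different seed gives a different starting point (and, since libFM and fastFM use different random number streams anyway, fixing a seed cannot align the two libraries). The helper below is a sketch, not fastFM code:

```python
import numpy as np

def init_V(seed, n_features=50, rank=10, init_stdev=0.1):
    """Seeded random initialization of a factor matrix, as a stand-in
    for what a seeded FM solver would start from."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, init_stdev, size=(n_features, rank))

run_a = init_V(seed=42)
run_b = init_V(seed=42)
other = init_V(seed=7)

print(np.array_equal(run_a, run_b))  # True: same seed, identical start
print(np.array_equal(run_a, other))  # False: different seed, different start
```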