fastFM: A Library for Factorization Machines
http://ibayer.github.io/fastFM

Why do I get better results with libfm? #28

Open ibayer opened 8 years ago

ibayer commented 8 years ago

Why do I get better results with libfm?

Be careful if you use a regression model with a categorical target, such as the 1-5 star ratings of the MovieLens dataset.

libFM automatically clips the prediction values to the highest / lowest value in the training data. This makes sense if you predict ratings with a regression model and evaluate with RMSE.

For example, if the regression score is > 5 it's certainly better to predict a 5-star rating than to report the raw regression value. With fastFM you have to do the clipping yourself, because clipping is not always a good idea.

But it's easy to do if you need it.

    # clip predictions to the target range seen in training
    y_pred[y_pred > y_true.max()] = y_true.max()
    y_pred[y_pred < y_true.min()] = y_true.min()
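
If y_pred is a numpy array, np.clip does the same in one line:

    import numpy as np

    # clip to the observed rating range, e.g. [1, 5] for MovieLens
    y_pred = np.clip(y_pred, y_true.min(), y_true.max())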

Why do I not get exactly the same results with fastFM as with libFM?

FMs are non-linear models that use random initialization. This means that the solver can end up in a different local optimum whenever the initialization changes. We can use a random seed in fastFM to make individual runs comparable, but that doesn't help when comparing results between different implementations. You should therefore always expect small differences between fastFM and libFM predictions.
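
For example, here is a minimal sketch of a reproducible fastFM run (the file names are placeholders; random_state is the parameter that fixes the initialization):

    from fastFM import mcmc
    from sklearn.datasets import load_svmlight_file

    X_train, y_train = load_svmlight_file("train.svml")
    X_test, y_test = load_svmlight_file("test.svml")

    # same random_state -> same initialization -> reproducible predictions
    fm = mcmc.FMRegression(n_iter=100, rank=8, init_stdev=0.1, random_state=123)
    y_pred = fm.fit_predict(X_train, y_train, X_test)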

merrellb commented 8 years ago

I'm seeing some discrepancies between libFM and fastFM with MovieLens. Before diving into my observations, can you confirm that the following are equivalent:

    /libFM -train ml1m_train.svml -test ml1m_test.svml -task r -dim '1,1,10' -iter 1000 -method mcmc

and on the fastFM side:

    import numpy as np
    from sklearn.datasets import load_svmlight_file
    from sklearn.metrics import mean_squared_error
    from fastFM import mcmc

    X_train, y_train = load_svmlight_file("ml1m_train.svml")
    X_test, y_test = load_svmlight_file("ml1m_test.svml")

    fm = mcmc.FMRegression(n_iter=0, rank=10)
    fm.fit_predict(X_train, y_train, X_test)  # initialize the MCMC chain
    for i in range(1000):
        y_pred = fm.fit_predict(X_train, y_train, X_test, n_more_iter=1)
        y_pred[y_pred > 5] = 5
        y_pred[y_pred < 1] = 1
        print(i, np.sqrt(mean_squared_error(y_pred, y_test)))

ibayer commented 8 years ago

I don't see a difference, but please check that the init_stdev parameter is the same in both. Also have a look at my second comment above; it could explain the small differences you observe.
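
For reference, a sketch of how to line the two up (if I remember correctly, 0.1 is libFM's default for -init_stdev):

    # libFM: add "-init_stdev 0.1" to the command line (0.1 should be the default)
    # fastFM: pass the same value explicitly
    fm = mcmc.FMRegression(n_iter=0, rank=10, init_stdev=0.1)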

arogozhnikov commented 8 years ago

Hi, Immanuel! I was comparing different libFM implementations (I was testing MCMC for libFM and fastFM in particular).

Unfortunately, the results for fastFM are not super optimistic: http://arogozhnikov.github.io/2016/02/15/TestingLibFM.html

I only found this topic afterwards, so honestly I wasn't thinking about clipping values. That trick may give some improvement in regression, but libFM also easily wins in classification.

Maybe you know the reason? Does fastFM use different priors, or is it something else?

chezou commented 8 years ago

@arogozhnikov BTW, you need standardization, especially for pyFM: https://github.com/coreylynch/pyFM/issues/3#issuecomment-99513662

arogozhnikov commented 8 years ago

@chezou All the features are dummy (0/1) and the table should stay sparse. No, for the tests I am running this step is neither needed nor possible.

ibayer commented 8 years ago

@arogozhnikov Great comparison! I have a few suggestions that could make the evaluation even more useful for other people.

  1. Provide the exact version of the software that you are testing.
  2. You find that libFM is faster than fastFM; I fixed a runtime regression bug in https://github.com/ibayer/fastFM-core/commit/d57a86600ad3e6acf22c69436967f04b7f19ee17 , is this still true for the most recent release?
  3. Use clipping to make the performance comparison more meaningful (it makes quite a difference in some cases).
  4. You state for fastFM "supports linux, mac os (though some issues with mac os)"; is this still true with the binaries that we now have?
  5. Make multiple runs with different seeds to give the reader an idea of the randomness in the results (see the sketch below).
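
For point 5, something along these lines would do, reusing the data from the snippet above (the iteration count and seeds are arbitrary):

    import numpy as np
    from sklearn.metrics import mean_squared_error
    from fastFM import mcmc

    rmses = []
    for seed in range(5):
        fm = mcmc.FMRegression(n_iter=200, rank=10, random_state=seed)
        y_pred = fm.fit_predict(X_train, y_train, X_test)
        y_pred = np.clip(y_pred, 1, 5)  # clip to the 1-5 star range
        rmses.append(np.sqrt(mean_squared_error(y_test, y_pred)))

    # report the mean and spread across seeds
    print(np.mean(rmses), np.std(rmses))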

As is, I'm not convinced that libFM is faster or performs better than fastFM for MCMC regression. I have done less comprehensive comparisons for MCMC classification, but the algorithm / priors should be the same in both libraries. I would be interested in looking into it if you can clearly show that libFM systematically dominates fastFM for MCMC classification.

arogozhnikov commented 8 years ago

@ibayer Thanks for the comments.

  1. Yup, you're right.
  2-3. OK, I'll give it a try.
  4. I don't have a clean Mac OS (it was not trivial to install, unfortunately), but I asked a friend to try and it seems pip install works fine on Mac OS. (Also, I see now that Travis tests Mac OS, so I'll remove this remark.)
  5. This is the hard part; it will take forever, and I don't see a random seed in libFM. For smaller tests I can just train on different random subsets of the data. Would that be convincing enough?

ibayer commented 8 years ago

@arogozhnikov It's possible to use a random seed with libFM: see the "seed" parameter ("integer value, default=None") at https://github.com/srendle/libfm/blob/master/src/libfm/libfm.cpp#L93
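
For example, the MCMC command from above should become reproducible with something like (the seed value is arbitrary):

    /libFM -train ml1m_train.svml -test ml1m_test.svml -task r -dim '1,1,10' -iter 1000 -method mcmc -seed 123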