benfred / implicit

Fast Python Collaborative Filtering for Implicit Feedback Datasets
https://benfred.github.io/implicit/
MIT License

ALS without using cython not identical with `implicit` result #721

Open bohyunshin opened 2 months ago

bohyunshin commented 2 months ago

Hello! I'm a newbie to recommender systems and Cython, so I was trying to understand what is going on by reviewing the non-Cython ALS code written here.

After understanding the mathematical derivation, I now want to compare the ALS results from the implicit library against the plain least squares implementation written here.
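
For reference, here is the closed-form user update I derived (my own transcription of the standard implicit-feedback ALS update, so the notation is mine; it is what the `Xu = (YtCuY + regularization * I)^-1 (YtCuPu)` comments in the code below refer to). With $C_u$ the diagonal confidence matrix for user $u$ and $p_u$ the binary preference vector:

$$x_u = \left(Y^\top C_u Y + \lambda I\right)^{-1} Y^\top C_u p_u, \qquad Y^\top C_u Y = Y^\top Y + Y^\top (C_u - I)\, Y$$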

The reproducible code is given below. Please note that, to make the results reproducible, I fixed user_factors and item_factors using np.random.seed(1).

import numpy as np
from numpy.testing import assert_almost_equal

import implicit
from implicit.datasets.movielens import get_movielens
from implicit.nearest_neighbours import bm25_weight
from implicit.utils import nonzeros

def least_squares(Cui, X, Y, regularization, num_threads=0):
    """For each user in Cui, calculate factors Xu for them
    using least squares on Y.

    Note: this is at least 10 times slower than the cython version included
    here.
    """
    users, n_factors = X.shape
    YtY = Y.T.dot(Y)

    for u in range(users):
        X[u] = user_factor(Y, YtY, Cui, u, regularization, n_factors)

    return X

def user_linear_equation(Y, YtY, Cui, u, regularization, n_factors):
    # Xu = (YtCuY + regularization * I)^-1 (YtCuPu)
    # YtCuY + regularization * I = YtY + regularization * I + Yt(Cu-I)Y

    # accumulate YtCuY + regularization*I in A
    A = YtY + regularization * np.eye(n_factors)

    # accumulate YtCuPu in b
    b = np.zeros(n_factors)

    for i, confidence in nonzeros(Cui, u):
        factor = Y[i]

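        # implicit's convention: a negative value stored in Cui encodes
        # confidence for a zero preference, so b gets no contribution
        # and only A is updated below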
        if confidence > 0:
            b += confidence * factor
        else:
            confidence *= -1

        A += (confidence - 1) * np.outer(factor, factor)
    return A, b

def user_factor(Y, YtY, Cui, u, regularization, n_factors):
    # Xu = (YtCuY + regularization * I)^-1 (YtCuPu)
    A, b = user_linear_equation(Y, YtY, Cui, u, regularization, n_factors)
    return np.linalg.solve(A, b)

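# note: item_factor is not actually called below; the item-side update
# reuses least_squares/user_factor on the transposed Cui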
def item_factor(X, XtX, Cui, u, regularization, n_factors):
    # Yu = (XtCuX + regularization * I)^-1 (XtCuPu)
    A, b = user_linear_equation(X, XtX, Cui, u, regularization, n_factors)
    return np.linalg.solve(A, b)

params = {
    "factors": 100,
    "iterations": 1,
    "regularization": 0.01,
    "random_state": 42,
}
imp_als = implicit.als.AlternatingLeastSquares(**params)

titles, ratings = get_movielens("1m")
min_rating = 4

# remove things < min_rating, and convert to implicit dataset
# by considering ratings as a binary preference only
ratings.data[ratings.data < min_rating] = 0
ratings.eliminate_zeros()
ratings.data = np.ones(len(ratings.data))
ratings = (bm25_weight(ratings, B=0.9) * 5).tocsr()
user_ratings = ratings.T.tocsr()

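# give both implementations the same seeded float32 starting factors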
M, N = user_ratings.shape
np.random.seed(1)
user_factors = np.random.rand(M, params["factors"]).astype(np.float32) * 0.01
item_factors = np.random.rand(N, params["factors"]).astype(np.float32) * 0.01

imp_als.user_factors = user_factors.copy()
imp_als.item_factors = item_factors.copy()
imp_als.fit(user_ratings)

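# one ALS sweep in plain numpy: user factors first, then item factors
# on the transposed matrix, mirroring implicit's sweep order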
for _ in range(params["iterations"]):
    user_factors = least_squares(user_ratings, user_factors, item_factors, params["regularization"], num_threads=0)
    item_factors = least_squares(user_ratings.T.tocsr(), item_factors, user_factors, params["regularization"], num_threads=0)

assert_almost_equal(imp_als.user_factors, user_factors)

Because both implicit and the least_squares function start optimization from the same user_factors and item_factors, I expected identical parameters after one iteration. However, they are slightly different.
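
For a rough sense of the scale of the mismatch, a quick check like the following could be appended to the script (the tolerance here is just an arbitrary value I picked, not anything from the library):

# quantify the elementwise gap between the two sets of user factors
diff = np.abs(imp_als.user_factors - user_factors)
print("max abs diff: ", diff.max())
print("mean abs diff:", diff.mean())
print("allclose at atol=0.1?", np.allclose(imp_als.user_factors, user_factors, atol=0.1))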

user_factors[:10,:10] from least_squares is given below

0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
6.495985,1.4662981,-1.8916339,-5.988758,-3.5102134,-5.311575,0.5843167,2.909493,5.33782,0.03934978
9.518679,-0.9138667,10.540882,-5.8474555,-0.65762514,2.369369,1.3994246,4.401218,-4.4654136,0.77081233
3.9342382,-1.6668738,6.2562375,-2.157711,-5.271799,-5.8932986,0.07674352,3.6055293,-0.9393872,1.518592
-0.7043858,-1.9429125,1.4211193,1.9970868,2.2399387,-3.479127,-0.56343424,-0.41102177,0.7964555,-0.8715794
4.6698475,5.967709,3.6279092,-0.98851514,-3.6496701,-0.29852667,-9.018221,-7.6457853,-10.822715,-3.4580176
-0.53815067,-4.7590837,0.26157802,8.832465,-1.9944521,1.3366369,-2.082549,-1.1665554,3.4138582,1.6522735
0.06012591,-0.0659963,-1.3846772,3.2358499,2.7000408,0.32751244,-0.8692929,3.219261,2.274268,2.8823972
-2.1491199,7.009856,4.531096,-10.110956,-3.4792747,-4.5679045,1.0267107,2.4903963,1.8104258,-12.1208
-0.15007187,3.2462509,1.547862,1.5669563,0.2593078,0.48956275,1.95473,2.8914256,0.41950047,-2.8517656

imp_als.user_factors[:10,:10] from implicit is given below

0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
6.40629,1.6689732,-1.9243582,-6.2280293,-3.3796072,-5.4606185,0.86383134,2.9743493,5.3227863,0.0021724757
8.827038,-1.2745501,9.737436,-5.4415026,-0.5468073,2.4743779,1.5968052,4.712061,-4.13798,0.7708695
3.7629926,-1.4328924,6.298655,-2.0645764,-5.0478854,-6.074642,0.14667448,3.7019956,-0.7340085,1.4660487
-0.6895539,-1.9431428,1.447885,1.9779587,2.2503517,-3.4943352,-0.56691366,-0.42262903,0.79547334,-0.84743243
3.8908231,4.478633,4.1031737,-0.21032758,-5.0459957,0.56163824,-9.462846,-7.9513216,-9.629442,-2.9782817
-0.8160355,-4.57127,0.24540702,8.834236,-1.1053919,0.89957446,-1.6664205,-1.4379164,3.5699248,1.6173166
0.06093114,-0.0288332,-1.4310935,3.25625,2.730325,0.27591914,-0.85742223,3.285444,2.275085,2.928403
-2.59787,7.311882,4.697602,-10.138477,-3.3613787,-4.4193087,0.6605658,2.589953,2.0927846,-11.935833
-0.16109583,3.1281085,1.5805378,1.7462423,0.22377923,0.47957367,1.9997323,2.9074786,0.36266166,-2.7396963

We can see that the numbers are slightly different, generally diverging from around the second decimal place, and by more in some entries.

Is there anything that I'm missing? Or are these differences within tolerance? Any comments would be much appreciated.
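
One thing I still plan to try, in case it matters: if I'm reading the constructor correctly (an assumption on my part, not something I've verified), implicit.als.AlternatingLeastSquares accepts use_cg and use_native flags, and as far as I understand the conjugate-gradient solver is approximate while np.linalg.solve is exact. So I want to rerun the comparison against the exact solver path:

# use_cg / use_native flags are my assumption from skimming the source;
# user_factors0 / item_factors0 are hypothetical names for the same seeded
# initial arrays created above, saved before either fit overwrites them
exact_als = implicit.als.AlternatingLeastSquares(use_cg=False, use_native=False, **params)
exact_als.user_factors = user_factors0.copy()
exact_als.item_factors = item_factors0.copy()
exact_als.fit(user_ratings)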