dselivanov / rsparse

Fast and accurate machine learning on sparse matrices - matrix factorizations, regression, classification, top-N recommendations.
https://www.slideshare.net/DmitriySelivanov/matrix-factorizations-for-recommender-systems

user and item biases in WRMF and explicit feedback #44

Closed dselivanov closed 3 years ago

dselivanov commented 3 years ago

As of https://github.com/rexyai/rsparse/commit/35247b4545fa44c1b88f9940d65641a86a5bd4b8

without biases

library(rsparse)
library(lgr)
lg = get_logger('rsparse')
lg$set_threshold('debug')
data('movielens100k')
options("rsparse_omp_threads" = 1)

train = movielens100k

set.seed(1)
model = WRMF$new(rank = 10,  lambda = 1, feedback  = 'explicit', solver = 'cholesky', with_bias = FALSE)
user_emb = model$fit_transform(train, n_iter = 10, convergence_tol = -1)

INFO  [23:09:40.158] starting factorization with 1 threads 
INFO  [23:09:40.268] iter 1 loss = 4.4257 
INFO  [23:09:40.302] iter 2 loss = 1.2200 
INFO  [23:09:40.332] iter 3 loss = 0.8617 
INFO  [23:09:40.361] iter 4 loss = 0.7752 
INFO  [23:09:40.391] iter 5 loss = 0.7398 
INFO  [23:09:40.420] iter 6 loss = 0.7191 
INFO  [23:09:40.456] iter 7 loss = 0.7046 
INFO  [23:09:40.488] iter 8 loss = 0.6935 
INFO  [23:09:40.522] iter 9 loss = 0.6845 
INFO  [23:09:40.555] iter 10 loss = 0.6769

with biases

set.seed(1)
model = WRMF$new(rank = 10,  lambda = 1, feedback  = 'explicit', solver = 'cholesky', with_bias = TRUE)
user_emb = model$fit_transform(train, n_iter = 10, convergence_tol = -1)

INFO  [23:10:06.605] starting factorization with 1 threads 
INFO  [23:10:06.637] iter 1 loss = 0.8411 
INFO  [23:10:06.671] iter 2 loss = 0.6251 
INFO  [23:10:06.704] iter 3 loss = 0.5950 
INFO  [23:10:06.736] iter 4 loss = 0.5820 
INFO  [23:10:06.769] iter 5 loss = 0.5751 
INFO  [23:10:06.805] iter 6 loss = 0.5712 
INFO  [23:10:06.840] iter 7 loss = 0.5688 
INFO  [23:10:06.875] iter 8 loss = 0.5673 
INFO  [23:10:06.916] iter 9 loss = 0.5663 
INFO  [23:10:06.951] iter 10 loss = 0.5657

cc @david-cortes

david-cortes commented 3 years ago

With biases plus mean centering:

DEBUG [22:10:21.514] initializing biases 
INFO  [22:10:21.548] starting factorization with 1 threads 
INFO  [22:10:21.582] iter 1 loss = 0.8305 
INFO  [22:10:21.607] iter 2 loss = 0.6170 
INFO  [22:10:21.631] iter 3 loss = 0.5822 
INFO  [22:10:21.655] iter 4 loss = 0.5662 
INFO  [22:10:21.680] iter 5 loss = 0.5568 
INFO  [22:10:21.706] iter 6 loss = 0.5507 
INFO  [22:10:21.730] iter 7 loss = 0.5464 
INFO  [22:10:21.755] iter 8 loss = 0.5431 
INFO  [22:10:21.781] iter 9 loss = 0.5405 
INFO  [22:10:21.806] iter 10 loss = 0.5383

With rnorm init + biases + mean centering:

DEBUG [22:12:08.799] initializing biases 
INFO  [22:12:08.843] starting factorization with 1 threads 
INFO  [22:12:08.883] iter 1 loss = 0.7696 
INFO  [22:12:08.912] iter 2 loss = 0.6211 
INFO  [22:12:08.938] iter 3 loss = 0.5836 
INFO  [22:12:08.963] iter 4 loss = 0.5660 
INFO  [22:12:08.989] iter 5 loss = 0.5557 
INFO  [22:12:09.015] iter 6 loss = 0.5489 
INFO  [22:12:09.041] iter 7 loss = 0.5442 
INFO  [22:12:09.067] iter 8 loss = 0.5408 
INFO  [22:12:09.093] iter 9 loss = 0.5382 
INFO  [22:12:09.120] iter 10 loss = 0.5362

That said, the loss function is computing the regularization term incorrectly, so these aren't final numbers yet.

dselivanov commented 3 years ago

@david-cortes

That said, the loss function is computing the regularization term incorrectly, so these aren't final numbers yet.

Could you please elaborate more on that?

david-cortes commented 3 years ago

It had a bug in which it was taking the squared sum of X twice instead of taking Y. Also I think it's adding a row of all-ones into the calculation.

dselivanov commented 3 years ago

It had a bug in which it was taking the squared sum of X twice instead of taking Y

Ok, this is fixed now.

Also I think it's adding a row of all-ones into the calculation.

~~Doesn't seem so, as arma::span(1, X.n_rows - 1) skips the first and last rows (ones and biases)~~ Ah, you are right, it needs to be -2.

dselivanov commented 3 years ago
# no bias
INFO  [14:55:08.288] starting factorization with 1 threads 
INFO  [14:55:08.394] iter 1 loss = 6.0649 
INFO  [14:55:08.424] iter 2 loss = 0.8184 
INFO  [14:55:08.450] iter 3 loss = 0.7426 
INFO  [14:55:08.480] iter 4 loss = 0.7154 
INFO  [14:55:08.511] iter 5 loss = 0.6984 
INFO  [14:55:08.546] iter 6 loss = 0.6861 
INFO  [14:55:08.581] iter 7 loss = 0.6767 
INFO  [14:55:08.618] iter 8 loss = 0.6691 
INFO  [14:55:08.650] iter 9 loss = 0.6629 
INFO  [14:55:08.691] iter 10 loss = 0.6577 

# user + item bias
INFO  [14:55:18.805] starting factorization with 1 threads 
INFO  [14:55:18.838] iter 1 loss = 0.7335 
INFO  [14:55:18.873] iter 2 loss = 0.5918 
INFO  [14:55:18.907] iter 3 loss = 0.5624 
INFO  [14:55:18.943] iter 4 loss = 0.5496 
INFO  [14:55:18.982] iter 5 loss = 0.5427 
INFO  [14:55:19.022] iter 6 loss = 0.5384 
INFO  [14:55:19.064] iter 7 loss = 0.5355 
INFO  [14:55:19.101] iter 8 loss = 0.5338 
INFO  [14:55:19.148] iter 9 loss = 0.5328 
INFO  [14:55:19.184] iter 10 loss = 0.5323 

# user + item bias + better init
DEBUG [15:21:49.763] initializing biases 
INFO  [15:21:49.767] starting factorization with 1 threads 
INFO  [15:21:49.804] iter 1 loss = 0.7281 
INFO  [15:21:49.842] iter 2 loss = 0.5933 
INFO  [15:21:49.880] iter 3 loss = 0.5619 
INFO  [15:21:49.920] iter 4 loss = 0.5484 
INFO  [15:21:49.961] iter 5 loss = 0.5413 
INFO  [15:21:50.000] iter 6 loss = 0.5370 
INFO  [15:21:50.034] iter 7 loss = 0.5341 
INFO  [15:21:50.074] iter 8 loss = 0.5321 
INFO  [15:21:50.116] iter 9 loss = 0.5308 
INFO  [15:21:50.148] iter 10 loss = 0.5298

# user + item bias + better init + global
DEBUG [15:00:05.874] initializing biases 
INFO  [15:00:05.962] starting factorization with 1 threads 
INFO  [15:26:17.413] iter 1 loss = 0.7213 
INFO  [15:26:17.429] iter 2 loss = 0.5798 
INFO  [15:26:17.440] iter 3 loss = 0.5461 
INFO  [15:26:17.451] iter 4 loss = 0.5317 
INFO  [15:26:17.464] iter 5 loss = 0.5241 
INFO  [15:26:17.471] iter 6 loss = 0.5194 
INFO  [15:26:17.481] iter 7 loss = 0.5164 
INFO  [15:26:17.493] iter 8 loss = 0.5144 
INFO  [15:26:17.500] iter 9 loss = 0.5130 
INFO  [15:26:17.507] iter 10 loss = 0.5121
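
For reference, the "user + item bias + global" run above presumably corresponds to a call along these lines (a sketch only; with_user_item_bias and with_global_bias are the argument names that appear later in this thread):

set.seed(1)
model = WRMF$new(rank = 10, lambda = 1, feedback = 'explicit', solver = 'cholesky',
                 with_user_item_bias = TRUE, with_global_bias = TRUE)
user_emb = model$fit_transform(train, n_iter = 10, convergence_tol = -1)
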
david-cortes commented 3 years ago

I think the more correct way of adding the regularization with biases to the loss would be to exclude only the row that has ones while still adding the row that has biases, as the regularization is also applied to the user/item biases.
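
A minimal sketch of that penalty in R (not the package's C++ code; it assumes the layout used in the test code later in this thread, where row 1 of X is all ones and its last row holds the user biases, while row 1 of Y holds the item biases and its last row is all ones):

reg_penalty = function(X, Y, lambda) {
  # drop only the constant rows of ones; the bias rows stay in the penalty
  X_reg = X[-1, , drop = FALSE]
  Y_reg = Y[-nrow(Y), , drop = FALSE]
  lambda * (sum(X_reg^2) + sum(Y_reg^2))
}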

david-cortes commented 3 years ago

I'm pretty sure there is still some unintended data copying going on with the current approach. I tried timing it with the ML10M data: adding the biases made it take 49% longer to fit the model.

For comparison, these are the same timings using the cmfrec package, which has a less efficient approach for the biases (it uses rank+1 matrices and copies/replaces the bias/constant component at each iteration): it took only 26% longer with the biases added.
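
Presumably the timing looked something like the following (a sketch only, not the exact benchmark script; ML10M is assumed to be the MovieLens 10M ratings as a sparse matrix, and the constructor arguments mirror the microbenchmark expression quoted further down in this thread):

library(microbenchmark)
library(rsparse)

# ML10M: assumed to be the MovieLens 10M user-item rating matrix as a sparse dgCMatrix
microbenchmark({
  m = rsparse::WRMF$new(rank = 40, lambda = 0.05, dynamic_lambda = TRUE,
                        feedback = "explicit", solver = "conjugate_gradient",
                        with_user_item_bias = TRUE, with_global_bias = TRUE)
  A = m$fit_transform(ML10M, convergence_tol = -1)
}, times = 1L)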

dselivanov commented 3 years ago

@david-cortes should be fixed now, can you try? I'm surprised cmfrec is almost 2x faster! I would expect arma to be translated into efficient BLAS calls.

david-cortes commented 3 years ago

Now it's producing a segmentation fault.

david-cortes commented 3 years ago

Actually, the segmentation fault was not from 3731ca0a4c8c9c543e1bb4311aae0b6f64e76608, but from the commits that followed. However, this commit actually makes it slower:

Unit: seconds
                                                                                                                                                                                                                                                             expr
 {     m = rsparse::WRMF$new(rank = 40, lambda = 0.05, dynamic_lambda = TRUE,          feedback = "explicit", with_global_bias = TRUE, with_user_item_bias = TRUE,          solver = "conjugate_gradient")     A = m$fit_transform(ML10M, convergence_tol = -1) }
      min       lq     mean   median       uq      max neval
 21.44936 21.44936 21.44936 21.44936 21.44936 21.44936     1
david-cortes commented 3 years ago

Ran it again after doing a "clean and rebuild" from the current commit on the master branch; it no longer segfaults, and it takes the same time as before with the biases (~16s). Perhaps I had some issue with the package calling the wrong shared-object functions.

david-cortes commented 3 years ago

Another interesting thing, however: using float precision makes it take less than half the time when not using biases (~4.7s vs ~10.7s). But when using biases it now returns NaNs with float.

dselivanov commented 3 years ago

I tested it with the following code:

library(Matrix)
set.seed(1)
# simulate a 100,000 x 10,000 sparse matrix with 1% density and integer "ratings" 1..5
m = rsparsematrix(100000, 10000, 0.01)
m@x = sample(5, size = length(m@x), replace = TRUE)
rank = 8
n_user = nrow(m)
n_item = ncol(m)
# factors are stored as rank x n matrices (one column per user / item)
user_factors = matrix(rnorm(n_user * rank, 0, 0.01), nrow = rank, ncol = n_user)
item_factors = matrix(rnorm(n_item * rank, 0, 0.01), nrow = rank, ncol = n_item)

library(rsparse)
system.time({
  res = rsparse:::als_explicit_double(
    m_csc_r = m,
    X = user_factors,
    Y = item_factors,
    cnt_X = numeric(ncol(user_factors)),
    lambda = 0,
    dynamic_lambda = FALSE,
    n_threads = 1,
    solver = 0L,
    cg_steps = 1L,
    with_biases = FALSE,
    is_x_bias_last_row = TRUE)
})

   user  system elapsed 
  0.482   0.003   0.485 

rank = 10
n_user = nrow(m)
n_item = ncol(m)
user_factors = matrix(rnorm(n_user * rank, 0, 0.01), nrow = rank, ncol = n_user)
# first row of the user factors is fixed to ones (it multiplies the item biases)
user_factors[1, ] = rep(1.0, n_user)

item_factors = matrix(rnorm(n_item * rank, 0, 0.01), nrow = rank, ncol = n_item)
# last row of the item factors is fixed to ones (it multiplies the user biases)
item_factors[rank, ] = rep(1.0, n_item)

system.time({
  res = rsparse:::als_explicit_double(
    m_csc_r = m,
    X = user_factors,
    Y = item_factors,
    lambda = 0,
    cnt_X = numeric(ncol(user_factors)),
    dynamic_lambda = FALSE,
    n_threads = 1,
    solver = 0L,
    cg_steps = 1L,
    with_biases = TRUE,
    is_x_bias_last_row = TRUE)
})

   user  system elapsed 
  0.624   0.006   0.629 

The latter used to be ~0.9-1s.

dselivanov commented 3 years ago

@david-cortes check https://github.com/rexyai/rsparse/commit/7fcb1d39b6ff9ff7acabe3cf5af8f9e2fa208d34 - I've removed the subview.

Another interesting thing, however: using float precision makes it take less than half the time when not using biases (~4.7s vs ~10.7s). But when using biases it now returns NaNs with float.

I will take a look

dselivanov commented 3 years ago

Another interesting thing, however: using float precision makes it take less than half the time when not using biases (~4.7s vs ~10.7s).

This seems related to which LAPACK is used by the system. If you only use the LAPACK shipped with R (which only works with double), then the float package provides its own single-precision LAPACK, which seems more than twice as fast as R's reference LAPACK. If you have a high-performance system-wide BLAS and LAPACK, then the float package will detect and use it, and rsparse and arma will also link to the system-wide BLAS and LAPACK.
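
For reference, a quick way to check which BLAS/LAPACK an R session is linked against (base R only, nothing rsparse-specific; R >= 3.4 prints the library paths):

sessionInfo()   # shows the BLAS and LAPACK shared libraries in use
La_version()    # LAPACK version reported by the linked library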

But when using biases it now returns NaNs with float.

I haven't noticed that. A reproducible example would help.

david-cortes commented 3 years ago

I'm using OpenBLAS. I tried ldd on the generated .so, and it doesn't look like it's linking to anything from float:

ldd rsparse.so
        linux-vdso.so.1 (0x00007ffc1bdf0000)
        liblapack.so.3 => /lib/x86_64-linux-gnu/liblapack.so.3 (0x00007fbafff81000)
        libblas.so.3 => /lib/x86_64-linux-gnu/libblas.so.3 (0x00007fbafff1c000)
        libR.so => /lib/libR.so (0x00007fbaffa6c000)
        libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fbaff89f000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fbaff75b000)
        libgomp.so.1 => /lib/x86_64-linux-gnu/libgomp.so.1 (0x00007fbaff71b000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fbaff6ff000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbaff53a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fbb006e1000)
        libopenblas.so.0 => /lib/x86_64-linux-gnu/libopenblas.so.0 (0x00007fbafd235000)
        libgfortran.so.5 => /lib/x86_64-linux-gnu/libgfortran.so.5 (0x00007fbafcf7f000)
        libreadline.so.8 => /lib/x86_64-linux-gnu/libreadline.so.8 (0x00007fbafcf28000)
        libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007fbafce98000)
        liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007fbafce6d000)
        libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007fbafce5a000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fbafce3d000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fbafce37000)
        libicuuc.so.67 => /lib/x86_64-linux-gnu/libicuuc.so.67 (0x00007fbafcc4f000)
        libicui18n.so.67 => /lib/x86_64-linux-gnu/libicui18n.so.67 (0x00007fbafc94a000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fbafc926000)
        libquadmath.so.0 => /lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007fbafc8dd000)
        libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007fbafc8ae000)
        libicudata.so.67 => /lib/x86_64-linux-gnu/libicudata.so.67 (0x00007fbafad95000)
david-cortes commented 3 years ago

Here's an example:

library(rsparse)
library(lgr)
lg = get_logger('rsparse')
lg$set_threshold('debug')
data('movielens100k')
options("rsparse_omp_threads" = 16)

X = movielens100k

set.seed(1)
model = WRMF$new(rank = 100,  lambda = 0.05, dynamic_lambda = TRUE,
                 feedback  = 'explicit', solver = 'conjugate_gradient',
                 with_user_item_bias = TRUE, with_global_bias=TRUE,
                 precision = "float")
user_emb = model$fit_transform(X, n_iter = 10, convergence_tol = -1)
DEBUG [21:32:16.367] initializing biases 
INFO  [21:32:16.374] starting factorization with 16 threads 
INFO  [21:32:16.388] iter 1 loss = NaN 
Error in if (loss_prev_iter/loss - 1 < convergence_tol) { : 
  missing value where TRUE/FALSE needed
dselivanov commented 3 years ago

Yes, that looks correct - it should not link to float if there is a system-wide BLAS and LAPACK with float support. Not sure why the speed-up is more than proportional, though. Maybe cache efficiency due to the lower memory footprint...


dselivanov commented 3 years ago

Here's an example:

On my machine it works fine... Numerical issues on a particular setup?

DEBUG [23:39:40.707] initializing biases 
INFO  [23:39:40.748] starting factorization with 16 threads 
INFO  [23:39:40.787] iter 1 loss = 0.8346 
INFO  [23:39:40.804] iter 2 loss = 0.6115 
INFO  [23:39:40.821] iter 3 loss = 0.5629 
INFO  [23:39:40.837] iter 4 loss = 0.5554 
INFO  [23:39:40.855] iter 5 loss = 0.5514 
INFO  [23:39:40.873] iter 6 loss = 0.5494 
INFO  [23:39:40.888] iter 7 loss = 0.5483 
INFO  [23:39:40.908] iter 8 loss = 0.5477 
INFO  [23:39:40.924] iter 9 loss = 0.5474 
INFO  [23:39:40.941] iter 10 loss = 0.5473 
> user_emb
# A float32 matrix: 943x102
#    [,1]      [,2]      [,3]       [,4]       [,5]
# 1     1  0.310238 -0.187435  0.0070898  0.3378164
# 2     1  0.318511 -0.032087  0.0918757  0.3566173
# 3     1 -0.061583 -0.164409  0.0510152  0.1838035
# 4     1  0.216120 -0.049797 -0.0742759 -0.0062153
# 5     1  0.246393  0.013174  0.0510887 -0.1232737
# 6     1  0.418677 -0.090236 -0.4091501 -0.2156503
# 7     1  0.185473  0.050313 -0.1264562 -0.1054751
# 8     1  0.044565 -0.153883 -0.1510867 -0.0126681
# 9     1  0.240177 -0.168154 -0.3026042 -0.2351764
# 10    1 -0.033645 -0.017147  0.0695353 -0.0026545
# ...
david-cortes commented 3 years ago

I don't think it's an issue with numerical precision, because it works fine under commit a7860f16a0902beae3b25e0fec054d52469fa94d and the same problem seems to occur with MKL.

By the way, the current commit on master doesn't compile on Windows; something to do with unsigned integer types being undefined.

dselivanov commented 3 years ago

I'm not sure what is wrong in your case. I've tried the latest commit on my Ubuntu workstation with OpenBLAS and it works normally, no NaNs thrown. What compilation flags do you use?

david-cortes commented 3 years ago

I'm using the default flags:

david@debian:~$ R CMD config CXXFLAGS
-g -O2 -fdebug-prefix-map=/build/r-base-oKyfjH/r-base-4.0.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g
david@debian:~$ R CMD config CFLAGS
-g -O2 -fdebug-prefix-map=/build/r-base-oKyfjH/r-base-4.0.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g
david@debian:~$ R CMD config FFLAGS
-g -O2 -fdebug-prefix-map=/build/r-base-oKyfjH/r-base-4.0.3=. -fstack-protector-strong
david@debian:~$ R CMD config FCFLAGS
-g -O2 -fdebug-prefix-map=/build/r-base-oKyfjH/r-base-4.0.3=. -fstack-protector-strong
david@debian:~$ R CMD config CXXPICFLAGS
-fpic
david-cortes commented 3 years ago

By the way, the non-negative CD algorithm still works fine when the data is mean centered.

dselivanov commented 3 years ago

How can the product of two non-negative vectors give a potentially negative value (after centering)?

david-cortes commented 3 years ago

Thing is, if the data is non-negative, the mean will also be non-negative.

Sorry, misunderstood - it won't give a negative value, but the algorithm still works by outputting something close to zero.
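
A tiny numeric illustration of that point (base R only, not code from either package): after centering, entries below the mean become negative targets, and a product of non-negative factors can never go below zero, so the best the model can do on those entries is output something near zero.

r = c(5, 4, 1, 1)          # hypothetical ratings for one item
r_centered = r - mean(r)   #  2.25  1.25 -1.75 -1.75
# with non-negative factors every prediction is >= 0, so the negative
# entries can at best be approximated by 0
pred = pmax(r_centered, 0)
sum((r_centered - pred)^2) # irreducible squared error on the negative entries: 6.125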

dselivanov commented 3 years ago

Sorry, misunderstood - it won't give a negative value, but the algorithm still works by outputting something close to zero.

Yes, it kind of works, but the loss is huge and the model is not that useful.

david-cortes commented 3 years ago

I'm not sure what is wrong in your case. I've tried the latest commit on my Ubuntu workstation with OpenBLAS and it works normally, no NaNs thrown. What compilation flags do you use?

I think this has to do with AVX instructions and array padding. If I disable the newer instructions by setting -march=x86-64 it works correctly, but with -march=native it doesn't. These are the instruction set extensions on my CPU:

Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx
       mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq
       monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy
       abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb
       hw_pstate sme ssbd sev ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec
       xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid
       decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca

Perhaps something to do with the unsafe keyword?

EDIT: hmm, actually now it's working correctly with -march=native as of the current commit at master.

EDIT2: ok, I actually realize now that commit 7fcb1d39b6ff9ff7acabe3cf5af8f9e2fa208d34 replaced subview with Mat. That's what fixes it.

dselivanov commented 3 years ago

EDIT2: ok, I actually realize now that commit 7fcb1d39b6ff9ff7acabe3cf5af8f9e2fa208d34 replaced subview with Mat

Yes, I had segfaults with these subviews as well...

dselivanov commented 3 years ago

I think we are done here. There will be a separate thread for a model with biases on implicit feedback data.

At the moment I'm a little bit short on time. @david-cortes feel free to take a shot at biases for the implicit feedback model (if you are interested, of course).

david-cortes commented 3 years ago

@dselivanov I'm not so sure it's actually something desirable to have. I tried playing with centering and biases on implicit-feedback data, and I see that adding user biases usually gives a very small lift in metrics like HR@5, but item biases make them much worse.
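
For reference, HR@5 here is the usual hit rate at 5: the share of test users for whom at least one held-out item appears in the top-5 recommendations. A minimal sketch of that metric (hypothetical helper, not part of rsparse or cmfrec):

# ranked_lists: list of integer vectors of recommended item ids, best first
# holdout:      list of integer vectors with each user's held-out items
hit_rate_at_k = function(ranked_lists, holdout, k = 5) {
  hits = mapply(function(rec, truth) any(head(rec, k) %in% truth),
                ranked_lists, holdout)
  mean(hits)
}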

You can play with cmfrec (the version from git; the one from CRAN has bugs for this use case), e.g. with the lastFM data or similar, which fits the same model as WRMF with feedback = "implicit":

library(cmfrec)
# Xcoo: user-item interactions as a sparse COO matrix (e.g. lastFM play counts)
Xvalues <- Xcoo@x
Xcoo@x <- rep(1, length(Xcoo@x))  # binarize the interactions, keep the counts as weights
model <- CMF(Xcoo, weight=Xvalues, NA_as_zero=TRUE,
             center=TRUE, user_bias=TRUE, item_bias=TRUE)