@mhahsler: Did you have time to consider the above question? :)
No rush, but I would really like to understand if I am doing anything wrong here...
Hi,
sorry for the delay. The way you have this set up results in:
> as(dat_train, "list")[1:3]
$`1`
[1] "p1" "p3" "p5"
$`2`
[1] "p1" "p3" "p5"
$`3`
[1] "p1" "p3" "p5"
> as(pre, "list")[1:3]
$`1`
[1] "p2" "p4"
$`2`
[1] "p2" "p4"
$`3`
[1] "p2" "p4"
There are no true positives and that is why TP is zero.
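To make the counting concrete, here is a minimal base-R sketch of the logic (assumed for illustration, not the package internals): for a top-N list, TP is the number of recommended items that also appear among the user's held-out items, so disjoint sets necessarily give TP = 0.

## Sketch of the true-positive count for one user (assumed logic, not recommenderlab code)
predicted <- c("p2", "p4")        # items the recommender returned
held_out  <- c("p1", "p3", "p5")  # items withheld for this user in the test data
TP <- length(intersect(predicted, held_out))
TP
## [1] 0  -- no overlap between the two sets, which is exactly the situation above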
You should train on training data and test against test data using an evaluation scheme. The relevant part from ?calcPredictionAccuracy:
library(recommenderlab)
data(MSWeb)
MSWeb10 <- sample(MSWeb[rowCounts(MSWeb) > 10, ], 50)
## split into training and test data; each test user keeps 3 known items
e <- evaluationScheme(MSWeb10, method = "split", train = 0.9,
  k = 1, given = 3)
e
## create a user-based CF recommender using training data
r <- Recommender(getData(e, "train"), "UBCF")
## create predictions for the test data using known ratings (see given above)
p <- predict(r, getData(e, "known"), type = "topNList", n = 10)
p
## compare the top-N lists against the withheld (unknown) items
calcPredictionAccuracy(p, getData(e, "unknown"), given = 3)
calcPredictionAccuracy(p, getData(e, "unknown"), given = 3, byUser = TRUE)
Please give it a try and let me know if the results make more sense. I need to test if the all-but-x schemes work correctly.
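For reference, an all-but-x scheme is requested with a negative given (e.g. given = -1 withholds all but one item per test user). A minimal sketch reusing the MSWeb10 sample from above, which is the case the negative-given code path has to handle:

## All-but-1 scheme: each test user keeps all but one item as known,
## and the single withheld item becomes the unknown set.
e2 <- evaluationScheme(MSWeb10, method = "split", train = 0.9, k = 1, given = -1)
r2 <- Recommender(getData(e2, "train"), "UBCF")
p2 <- predict(r2, getData(e2, "known"), type = "topNList", n = 10)
calcPredictionAccuracy(p2, getData(e2, "unknown"), given = -1)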
I think I have found the issue in the code relating to calcPredictionAccuracy with negative values for given. I have added a bug fix to the development version on GitHub.
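For anyone who wants to try the fix before it reaches CRAN, something like the following should pull the development version (assuming the repository is mhahsler/recommenderlab on GitHub):

## Install the development version from GitHub (repository name assumed)
# install.packages("remotes")
remotes::install_github("mhahsler/recommenderlab")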
Cool.
I will test it out :)
Works like a charm now.
Thx :)
Hi.
Thank you for a great package.
I cannot get the output of "calcPredictionAccuracy" to make sense. Training and evaluation of the recommender models work well and produce fine results.
However, when I use calcPredictionAccuracy I get only zero values in the TP column. That does not align with the results I see when I train the model.
Can you shed some light on what I might be doing wrong, if anything? A reproducible example is provided below. See how the TP column is consistently 0 even though the recommender itself should be performing well: