NicolasHug / Surprise

A Python scikit for building and analyzing recommender systems
http://surpriselib.com
BSD 3-Clause "New" or "Revised" License

How to do k-fold cross-validation on a trainset (e.g. splitting MovieLens-100k with the u1 split, then k-fold cross-validation on u1.base and testing on u1.test) #471

Closed: seanv507 closed this issue 1 year ago

seanv507 commented 1 year ago

Description

I would like to create benchmarks like https://paperswithcode.com/sota/collaborative-filtering-on-movielens-100k, which reports test RMSE on the u1 split.

To tune the model, I therefore want to cross-validate the hyperparameters only on u1.base, and then run a single final test on u1.test.

seanv507 commented 1 year ago

Solution

Use Dataset.load_from_file to load u1.base and run the hyperparameter search on that data only.
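For context, here is a minimal sketch of that tuning step. It assumes the ml-100k files sit in Surprise's default download directory (adjust the path otherwise), and it uses SVD with a small, purely illustrative parameter grid; gs1 is the fitted GridSearchCV referred to below.

import os

from surprise import Dataset, Reader, SVD, accuracy
from surprise.model_selection import GridSearchCV

# Adjust to wherever the ml-100k files live; this is Surprise's default download location.
files_dir = os.path.expanduser('~/.surprise_data/ml-100k/ml-100k/')
train_file = files_dir + 'u%d.base'
test_file = files_dir + 'u%d.test'

reader = Reader('ml-100k')  # built-in reader for the MovieLens-100k file format

# Hyperparameter search with k-fold cross-validation restricted to u1.base.
train_data = Dataset.load_from_file(train_file % 1, reader=reader)
param_grid = {'n_factors': [50, 100], 'reg_all': [0.02, 0.05]}  # illustrative grid only
gs1 = GridSearchCV(SVD, param_grid, measures=['rmse'], cv=5, refit=True)
gs1.fit(train_data)
print(gs1.best_params['rmse'])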

# Evaluate the tuned model on the held-out u1.test split.
test_data = Dataset.load_from_file(test_file % 1, reader=reader).build_full_trainset().build_testset()
predictions = gs1.test(test_data)  # uses the best estimator found by the grid search
accuracy.rmse(predictions, verbose=True)
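Assuming gs1 was built with refit=True as in the sketch above, gs1.test() re-trains the best configuration on all of u1.base before scoring, so u1.test is only touched for this final evaluation and never for model selection.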