gregreich opened 3 years ago
I agree that summary() methods would be helpful. If each RECOM_XXX has a summary function (RECOM_AR can contain the summary of the rules), then the evaluation results could contain a list with the summaries of the used recommenders when keepModel=FALSE. To make this work, you would have to create a class called RECOM_summary with a list for the parameter summaries.
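A minimal sketch of what such a container class could look like (the class name follows the comment above; the slot names are illustrative, not part of recommenderlab):

```r
## Hypothetical S4 container for recommender parameter summaries.
## "method" would hold the recommender method name (e.g. "AR"),
## "summaries" a named list of per-component summary objects.
setClass("RECOM_summary",
         representation(method = "character", summaries = "list"))

## Example of constructing one manually:
s <- new("RECOM_summary",
         method = "AR",
         summaries = list(parameters = list(support = 0.1, confidence = 0.8)))
```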
I think if evaluationScheme objects become too large, then an option would be to just recreate the evaluationScheme using the same data and the same random number seed. Maybe we should add an example somewhere.
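A sketch of the seed-based recreation idea, using the MovieLense sample data shipped with recommenderlab (the split parameters are illustrative):

```r
library(recommenderlab)
data(MovieLense)

## Instead of saving a (potentially large) evaluationScheme object,
## fix the RNG seed and rebuild it from the same data on demand.
set.seed(42)
es1 <- evaluationScheme(MovieLense, method = "split",
                        train = 0.9, given = 10, goodRating = 4)

set.seed(42)
es2 <- evaluationScheme(MovieLense, method = "split",
                        train = 0.9, given = 10, goodRating = 4)

## Both schemes now contain the same train/test split.
identical(getData(es1, "train"), getData(es2, "train"))
```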
I think it would be useful to have summary() methods for more classes, for example for evaluationScheme and Recommender. The problem is that saving the parametrisation of runs either has to be done manually, or, if one wants to query the objects directly to retrieve the parameters, there is quite some overhead because they store all the data with them (making load() extremely slow).

For example, for RECOM_AR it could contain the output of summary() on the corresponding rules object. This could then be stored in the evaluationResults object by default, to avoid having to store the full recommender using evaluate(..., keepModel=TRUE). For evaluationScheme, it could contain the parameters used to set up the scheme (essentially everything but the data).

Any objections?