markustoivonen opened 3 years ago
Hi @markustoivonen
It's hard to say with certainty without looking at the data & code, but here are a few thoughts that might help you:
- I suggest combining your dynamic kernel (`Exponential`) with a static one (`Constant`). The constant kernel will capture a player's baseline score, whereas the dynamic one will capture fluctuations around that baseline over time. If you don't, the score will revert to zero during long stretches without observations.
- Your lengthscale is very short (`lscale=1`). This is why the score drops very quickly to zero (the prior mean) outside of the precise moments when the comparisons happened. `lscale` basically tells the model how far apart in time scores should remain correlated; here, given the timescale of your data, the model thinks the score at any two time points should be essentially uncorrelated (hence zero most of the time).

Bottom line: with `kernel = Exponential(...) + Constant(...)`, the model will learn a baseline score for each player. Over long stretches without games the score will revert to this baseline (instead of zero).

Regarding your question:
> Also, why in the `plot_scores` function do we calculate the `ms` vector with the `predict` method, rather than just take the stored values from the `scores` attribute of an `Item`?
The `scores` attribute contains the mean & variance of the score only at the times where the player played a game (you can check this by inspecting `ts`). When plotting, it looks better to show the score time-series at regularly spaced time intervals, which don't necessarily match the timestamps of the games a user played, hence the call to `predict`.
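To make the kernel suggestion above concrete, here is a tiny Gaussian-process regression sketch in plain numpy. This is not kickscore's actual inference, and all numbers are illustrative; it only shows the qualitative effect: far from any observation, the posterior mean reverts to zero with an exponential kernel alone, but to a learned baseline once a constant kernel is added.

```python
import numpy as np

def k_exponential(s, t, var=1.0, lscale=1.0):
    # Exponential (Ornstein-Uhlenbeck) covariance: decays with time distance.
    return var * np.exp(-np.abs(s[:, None] - t[None, :]) / lscale)

def k_constant(s, t, var=1.0):
    # Constant covariance: a time-independent baseline component.
    return var * np.ones((len(s), len(t)))

def posterior_mean(kfun, T, y, t_star, noise=0.1):
    # Standard GP regression posterior mean at t_star given data (T, y).
    K = kfun(T, T) + noise * np.eye(len(T))
    return kfun(t_star, T) @ np.linalg.solve(K, y)

T = np.array([0.0, 1.0, 2.0])   # times of observed games
y = np.array([1.0, 1.0, 1.0])   # pseudo-observations: score around 1
far = np.array([50.0])          # a long stretch without games

m_exp = posterior_mean(k_exponential, T, y, far)
both = lambda s, t: k_exponential(s, t) + k_constant(s, t)
m_sum = posterior_mean(both, T, y, far)

print(m_exp)  # essentially zero: reverts to the prior mean
print(m_sum)  # stays near a learned baseline, well above zero
```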
Hope this helps!
Hi @lucasmaystre! Thank you for taking the time to give such a thorough answer, I truly appreciate it. :)
I was able to fit the predicted curve to the data points, so now the plotted curves make more sense.
I have a few more follow-up questions; hopefully they are not too strenuous.
1) Here is an image of a player's kickscore (variance omitted).
The player has only wins except for one loss around December. What kind of kernel/hyperparameters would, in your opinion, work best for a situation where the predicted kickscore does not decrease until after the last win before the loss? One can see that the score stagnates and decreases already before the loss happens. This sort of behaviour is unlikely in my context, and I would like to address that.
To put it simply, in my context I have two types of players: a) exercises and b) users.
Users complete exercises and either win or lose against them, but users and exercises never compete against their own type (user vs. user, exercise vs. exercise). So the kickscore represents the skill of a user and the difficulty of an exercise.
I am thinking that separate kernels for the two types of players are most likely advisable, considering that the exercises are static and don't change, whereas a user learns (and unlearns after not practising)? Also, the exposure differs between player types: an exercise faces thousands of users a week, whereas a user completes an exercise roughly once a week. Do you have any tips/thoughts on how one would best tackle this problem?
2) In Table 3 of your original paper you list the best combination you found for each dataset. Considering that the hyperparameters correspond to certain kernels, how should one randomize the kernel-combination selection? I.e., if Affine + Wiener is the best combo, do you go through all possible combinations of kernels (say, limited to kernel1 + kernel2) and try different hyperparameters for each combination?
Thank you very much!
Hi @markustoivonen, sorry for the delay.
> What kind of kernel/hyperparameters would, in your opinion, work best for a situation where the predicted kickscore does not decrease until after the last win before the loss?
I don't think there is such a kernel, unfortunately. If you know a priori where the score should change, you can try the `PiecewiseConstant` kernel (which allows for discontinuous jumps).
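For intuition, the covariance such a kernel encodes can be sketched in a few lines of numpy. This is an illustration of the idea only, not kickscore's actual `PiecewiseConstant` implementation, whose parameterization may differ:

```python
import numpy as np

def k_piecewise_constant(s, t, bounds, var=1.0):
    """Covariance of a piecewise-constant process with jumps at `bounds`:
    two times are fully correlated inside the same segment and
    independent across segments."""
    seg_s = np.searchsorted(bounds, s)
    seg_t = np.searchsorted(bounds, t)
    return var * (seg_s[:, None] == seg_t[None, :]).astype(float)

bounds = np.array([10.0])        # one known jump, e.g. when the loss happened
times = np.array([1.0, 5.0, 12.0])
K = k_piecewise_constant(times, times, bounds)
# times 1 and 5 share a segment (covariance = var);
# time 12 lies after the jump, so it is uncorrelated with them
```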
> In Table 3 of your original paper you list the best combination you found for each dataset. Considering that the hyperparameters correspond to certain kernels, how should one randomize the kernel-combination selection?
That's a great question, and there is no simple answer. In practice, it's an art as much as it is a science: you try combinations that intuitively make sense. Here's a paper that attempts to make this process more rigorous: https://arxiv.org/pdf/1302.4922.pdf, but it's still mostly a heuristic search.
In practice, on the datasets I've played with, constant + exponential (or wiener) gives you 95-99% of the performance you get with more "fancy" combinations.
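In the absence of a principled search, a plain grid over candidate combinations is a common fallback. The sketch below is purely illustrative: the `make_*` factories and the `evaluate` stub are hypothetical stand-ins for building a kickscore kernel and computing a held-out log-loss.

```python
import itertools

# Hypothetical factories: in practice these would construct kickscore kernels.
def make_constant(var):
    return ("Constant", var)

def make_exponential(var, lscale):
    return ("Exponential", var, lscale)

# Stub metric: in practice, fit the model on a training split and return the
# held-out log-loss; here a deterministic dummy so the sketch runs standalone.
def evaluate(spec):
    return sum(p for kern in spec for p in kern[1:])

results = []
for var_c, var_e, lscale in itertools.product([0.1, 1.0], [0.5, 1.0], [1.0, 30.0]):
    spec = (make_constant(var_c), make_exponential(var_e, lscale))
    results.append((evaluate(spec), spec))

best_loss, best_spec = min(results)  # keep the combination with the lowest loss
```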
> I am thinking that separate kernels for the two types of players are most likely advisable, considering that the exercises are static and don't change, whereas a user learns (and unlearns after not practising)? Also, the exposure differs between player types: an exercise faces thousands of users a week, whereas a user completes an exercise roughly once a week. Do you have any tips/thoughts on how one would best tackle this problem?
Yes, agreed: exercises don't change, so a static kernel makes sense. For players, I could imagine it would make sense to assume that skill is monotonic (i.e., it can only increase over time). Unfortunately, that's not implemented in kickscore at the moment (but http://proceedings.mlr.press/v9/riihimaki10a/riihimaki10a.pdf could provide a blueprint).
Overall, I think a simple Wiener kernel (which has a constant offset built in through `var_t0`) would be a good starting point for the players.
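For reference, the standard Wiener (Brownian-motion) covariance with such an offset can be written down directly. This numpy sketch is illustrative; kickscore's `Wiener` kernel may differ in parameterization details:

```python
import numpy as np

def k_wiener(s, t, var=1.0, t0=0.0, var_t0=1.0):
    """Brownian-motion covariance with an initial offset: the variance is
    var_t0 at time t0 and grows linearly (rate var) afterwards."""
    return var_t0 + var * np.minimum(s[:, None] - t0, t[None, :] - t0)

times = np.array([0.0, 1.0, 4.0])
K = k_wiener(times, times)
# K[0, 0] equals var_t0: the constant offset that is built in
```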
The model assumes an unbiased random drift of abilities, so your assumption about ability does not fall into that category. A simple model that would fit your criteria is plain Elo, or, if you want something more sophisticated, Glicko-2 or TrueSkill are good candidates. The curve would not be smooth (you only get point estimates at the times the exercises are completed), so keep that in mind.
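A minimal Elo sketch (with arbitrary illustrative numbers) of the behaviour described above: the winner gains points after every win, so an unbeaten player's trajectory is strictly increasing.

```python
def elo_update(r_winner, r_loser, k=32.0):
    """One Elo update: the winner always gains points, the loser loses them."""
    expected_win = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

r_user, r_exercise = 1500.0, 1500.0
history = [r_user]
for _ in range(20):  # a user who beats the same exercise 20 times in a row
    r_user, r_exercise = elo_update(r_user, r_exercise)
    history.append(r_user)

# the rating increases strictly with every single win
assert all(a < b for a, b in zip(history, history[1:]))
```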
Hi!
I am running `kickscore` on some data, and when plotting the scores with the `plot_scores` function, the `predict` method returns mean 0 and variance 1 for all data points except the first and last timestamps. For the first and last timestamps, the predictions are the same as the values in the `Item`'s `scores` attribute. There are some other anomalies in the score data, but this seems to be by far the most common one.
Here are 3 players plotted, and they all exhibit the same behaviour.
Also, why in the `plot_scores` function do we calculate the `ms` vector with the `predict` method, rather than just take the stored values from the `scores` attribute of an `Item`?
Another observation I made is that I have players who have only won matches, but their score at the end is almost the same as at the beginning. Here is a picture of a player who has won roughly 20 matches and lost 0. Shouldn't their trajectory be monotonically increasing, even if the opponents are weak? The data below is from the `scores` attribute of the `Item`.
I am using the `BinaryModel` with an exponential kernel (`var=1`, `lscale=1`) and the recursive fitter. All help is much appreciated!