-
### Description
Hello!
I encountered an issue while evaluating the BPR (Bayesian Personalized Ranking) model with basically the same code provided in the example on a different dataset. Specifical…
-
The **Exposure Metric** in fairness for ranking systems refers to the measurement of how visible or accessible items (such as search results, recommendations, or candidates) are to users, with a focus…
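A common way to make this concrete (an assumption here, not necessarily the definition the post has in mind) is a logarithmic position-bias model, where an item at rank `r` receives exposure `1 / log2(r + 1)`, and group-level exposure is the sum over a group's items. A minimal sketch:

```python
import math

def position_exposure(rank):
    """Logarithmic position-bias model: exposure decays with rank (1-indexed)."""
    return 1.0 / math.log2(rank + 1)

def group_exposure(ranking, group_of):
    """Total exposure per group for one ranked list of item ids.

    `group_of` maps item id -> group label (hypothetical input format).
    """
    totals = {}
    for rank, item in enumerate(ranking, start=1):
        g = group_of[item]
        totals[g] = totals.get(g, 0.0) + position_exposure(rank)
    return totals

exposure = group_exposure(
    ["a", "b", "c", "d"],
    {"a": "G1", "b": "G2", "c": "G1", "d": "G2"},
)
```

Comparing the per-group totals (e.g. their ratio, or their ratio relative to group relevance) is one way such an exposure metric is typically turned into a fairness score.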
-
Several information retrieval "tasks" use a few common evaluation metrics including mean average precision (MAP) [1] and recall@k, in addition to what is already supported (e.g. ERR, nDCG, MRR). Somet…
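For reference, the two requested metrics are small enough to sketch directly; this is a plain-Python illustration of the standard definitions, not the library's eventual implementation:

```python
def average_precision(ranked, relevant):
    """AP for one query: mean of precision@k taken at each relevant hit."""
    hits, score = 0, 0.0
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / k
    return score / len(relevant) if relevant else 0.0

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant items that appear in the top k."""
    return len(set(ranked[:k]) & relevant) / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """MAP: AP averaged over (ranked_list, relevant_set) pairs."""
    return sum(average_precision(r, rel) for r, rel in queries) / len(queries)

ap = average_precision(["a", "b", "c", "d"], {"a", "c"})
r2 = recall_at_k(["a", "b", "c", "d"], {"a", "c"}, k=2)
```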
-
I need to use the nonparametric ranking correlation coefficients and statistical tests [Kendall tau-b](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.kendalltau.html#scipy.stat…
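In practice the linked `scipy.stats.kendalltau` is the implementation to call; purely to make the tau-b definition concrete, here is a small O(n²) pure-Python sketch of the same statistic:

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall tau-b: (C - D) / sqrt((C + D + Tx) * (C + D + Ty)),
    where Tx / Ty count pairs tied only in x / only in y."""
    concordant = discordant = ties_x = ties_y = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0 and dy == 0:
            continue  # tied in both variables: excluded from every term
        elif dx == 0:
            ties_x += 1
        elif dy == 0:
            ties_y += 1
        elif dx * dy > 0:
            concordant += 1
        else:
            discordant += 1
    denom = sqrt((concordant + discordant + ties_x)
                 * (concordant + discordant + ties_y))
    return (concordant - discordant) / denom if denom else 0.0
```

The tie corrections in the denominator are what distinguish tau-b from plain tau-a and make it suitable for rankings with tied scores.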
-
#### Is your feature request related to a problem? Please describe.
Hi everyone!
I have been researching ways to quantify fairness in rankings and think that this would be a great addition to the Fa…
-
It doesn't matter which dataset I use; I got this error:
test.py
W2L
---------loading data-------
ASR & VSR frozen
08-15 10:25:53 Model para number = 15.01
best.model
model loading down
Traceback (…
-
### Description
Certain ranking metrics currently take a considerably long time for a single calculation output.
### Other Comments
It could be due to:
1. Some slow calculation in the `ranki…
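One way to narrow down which of these causes dominates is to profile a single metric call with the standard-library `cProfile`/`pstats`. The metric below is a hypothetical stand-in, not the project's actual function:

```python
import cProfile
import io
import math
import pstats
import random

def ranking_metric(scores):
    """Hypothetical stand-in for a slow ranking metric (DCG-like)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return sum(scores[i] / math.log2(rank + 2) for rank, i in enumerate(order))

profiler = cProfile.Profile()
profiler.enable()
value = ranking_metric([random.random() for _ in range(10_000)])
profiler.disable()

# Sort by cumulative time to see which internal call dominates.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf).sort_stats("cumulative")
stats.print_stats(10)
```

If the hot spot turns out to be per-item Python-level loops, vectorizing the calculation is usually the first fix to try.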
-
Length of dataset: 0
Traceback (most recent call last):
File "/content/UniversalFakeDetect/train.py", line 70, in
ap, r_acc, f_acc, acc = validate(model.model, val_loader)
File "/content/…
-
Hi,
How can I compute non-ranking metrics or univariate metrics, such as AUC, or recall?
(i.e. computing metrics after unfurling the list of examples, similar to what is done in the univariate score…
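As a sketch of the idea (assuming a hypothetical listwise input of per-query `(labels, scores)` pairs, not any particular library's format), unfurling and then computing a univariate metric such as ROC AUC can look like this:

```python
def unfurl(listwise):
    """Flatten per-query (labels, scores) pairs into pointwise lists."""
    labels, scores = [], []
    for qlabels, qscores in listwise:
        labels.extend(qlabels)
        scores.extend(qscores)
    return labels, scores

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney statistic (score ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("AUC needs at least one positive and one negative")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels, scores = unfurl([([1, 0], [0.9, 0.1]), ([1, 0], [0.8, 0.3])])
auc = roc_auc(labels, scores)
```

After unfurling, any pointwise metric (AUC, recall at a score threshold, etc.) can be computed on the flat lists exactly as in the univariate case.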
-
You can currently add a maximum of one custom mood icon.
Ideally: add as many as you like and have the ability to choose them from the mood picker as well (if you have a custom mood icon right now yo…