-
First of all, thank you for providing the model files; due to limited resources I was not able to train a good model. I wanted to evaluate the predictions as done in the original work here (https://github.com/smi…
-
Hi Team,
Great work! I was trying to replicate your work on my end, and while reading your paper I was unable to understand how to evaluate the results. I'm focusing on the open-source Table-to-Text task.
…
-
I have downloaded the trained model and have also trained one myself. I ran eval.sh, but it doesn't generate the metrics. How can I get the performance metrics? Does anyone know?
-
Hello! I hope this message finds you well. I have a question regarding the evaluation metrics in the validation part of your work. Specifically, I would like to inquire about the unit to which the CT …
-
### Description
According to the paper, "Anomaly scoring is based on overlapping segments: a true positive (TP) if a known anomalous window overlaps any detected windows, a false negative (FN) if a known anomal…
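The overlap-based counting quoted above can be sketched as follows. This is a minimal illustration, assuming windows are inclusive `(start, end)` index pairs; the function names are hypothetical, and the false-positive rule (a detected window overlapping no known anomaly) is an assumption since that part of the description is cut off:

```python
def overlaps(a, b):
    # Two inclusive (start, end) windows overlap iff neither ends before the other begins.
    return a[0] <= b[1] and b[0] <= a[1]

def score_windows(true_windows, pred_windows):
    # TP: a known anomalous window that overlaps at least one detected window.
    tp = sum(1 for t in true_windows if any(overlaps(t, p) for p in pred_windows))
    # FN: a known anomalous window that overlaps no detected window.
    fn = len(true_windows) - tp
    # FP (assumed rule): a detected window that overlaps no known anomalous window.
    fp = sum(1 for p in pred_windows if not any(overlaps(p, t) for t in true_windows))
    return tp, fn, fp
```

For example, with true anomalies `[(0, 10), (20, 30)]` and a single detection `(5, 8)`, the first anomaly is a TP, the second an FN, and there are no FPs.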
-
Prometheus has some built-in metrics like the counter `prometheus_rule_evaluation_failures_total`, which is incremented any time there's an issue evaluating a recording/alerting rule. This is a conven…
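Since `prometheus_rule_evaluation_failures_total` is a counter, a natural way to use it is to alert when it increases. A minimal sketch of such a rule file (the group name, alert name, thresholds, and durations are illustrative, not prescriptive):

```yaml
groups:
  - name: meta-monitoring
    rules:
      - alert: RuleEvaluationFailures
        # Fires if any recording/alerting rule evaluation failed in the last 5 minutes.
        expr: increase(prometheus_rule_evaluation_failures_total[5m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Prometheus rule evaluations are failing"
```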
-
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently…
-
Pseudo R2 is the natural candidate as it applies to probabilities as opposed to categories for binary outcomes, and is nicely interpretable. Ideally a suite of things that work for both binary and con…
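One concrete pseudo R² for binary outcomes is McFadden's, defined as 1 − LL_model / LL_null, where LL_null is the log-likelihood of a constant model predicting the base rate. A minimal sketch (function name is illustrative; probabilities are assumed to lie strictly in (0, 1)):

```python
import math

def mcfadden_pseudo_r2(y_true, p_pred):
    """McFadden pseudo R-squared from binary labels and predicted probabilities."""
    # Log-likelihood of the fitted model under a Bernoulli likelihood.
    ll_model = sum(math.log(p) if y else math.log(1 - p)
                   for y, p in zip(y_true, p_pred))
    # Null model: predict the base rate for every observation.
    p0 = sum(y_true) / len(y_true)
    ll_null = sum(math.log(p0) if y else math.log(1 - p0) for y in y_true)
    return 1 - ll_model / ll_null
```

Predicting the base rate everywhere yields 0, and better-calibrated predictions push the value toward 1, which gives it the interpretable flavor mentioned above.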
-
Hi, first of all, I want to say thanks for this wonderful work.
Actually, I want to save the reasoning behind the evaluation.
I mean lines 69 to 106 at https://github.com/explodinggradients/ragas/blob/main/src/rag…
-
Hello, could you report the evaluation results, for example accuracy and F1 scores, for the tasks that are not reported in the paper (especially the newly added ones)? It would greatly help in using the model and…