-
Hi, my evaluation on other datasets is now normal. However, when I run the same process:
python tools/calc_metrics_for_dataset.py --real_data_path path/to/sky_timelapse/sky_train/ --fake_data_p…
-
How can I add the evaluation metrics I need when evaluating or training the model?
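One generic pattern for this (illustrative only; the names `METRICS`, `add_metric`, and `evaluate_batch` are assumptions, not from any particular repo) is a registry of metric callables that the evaluation loop iterates over:

```python
import numpy as np

# Registry of metric callables; the eval/training loop computes them all.
METRICS = {
    "mse": lambda pred, gt: float(np.mean((pred - gt) ** 2)),
    "mae": lambda pred, gt: float(np.mean(np.abs(pred - gt))),
}

def add_metric(name, fn):
    """Register a new metric under `name`."""
    METRICS[name] = fn

def evaluate_batch(pred, gt):
    """Return {metric name: value} for one batch of predictions."""
    return {name: fn(pred, gt) for name, fn in METRICS.items()}

# Example: add PSNR (assumes inputs scaled to [0, 1]) and evaluate.
add_metric("psnr", lambda pred, gt: float(
    10.0 * np.log10(1.0 / (np.mean((pred - gt) ** 2) + 1e-12))))
pred, gt = np.random.rand(4, 64, 64), np.random.rand(4, 64, 64)
print(evaluate_batch(pred, gt))
```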
-
Hello,
Great work! Could you provide the code for computing the quantitative evaluation metrics in your paper, such as CC, SIM, EMD, and KL-Div? I did not find it in this project. Thank you…
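For reference, here is a minimal sketch of how these metrics are commonly computed for a pair of 2-D maps (e.g. saliency or density maps), assuming NumPy and SciPy. None of this is taken from the project, and `emd_1d` is only a cheap 1-D proxy for the full EMD:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def _norm(m, eps=1e-12):
    """Normalize a non-negative map into a probability distribution."""
    m = np.asarray(m, dtype=np.float64)
    return m / (m.sum() + eps)

def cc(pred, gt):
    """Pearson linear correlation coefficient between the two maps."""
    p = np.asarray(pred, dtype=np.float64).ravel()
    g = np.asarray(gt, dtype=np.float64).ravel()
    return float(np.corrcoef(p, g)[0, 1])

def sim(pred, gt):
    """Histogram intersection (similarity) of the normalized maps."""
    return float(np.minimum(_norm(pred), _norm(gt)).sum())

def kl_div(pred, gt, eps=1e-12):
    """KL divergence of the ground truth from the prediction."""
    p, g = _norm(pred), _norm(gt)
    return float((g * np.log(g / (p + eps) + eps)).sum())

def emd_1d(pred, gt):
    """1-D Wasserstein distance over flattened pixel positions. A true
    2-D EMD needs an optimal-transport solver such as the POT package."""
    p, g = _norm(pred).ravel(), _norm(gt).ravel()
    idx = np.arange(p.size, dtype=np.float64)
    return float(wasserstein_distance(idx, idx, p, g))
```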
-
Several information retrieval "tasks" use a few common evaluation metrics including mean average precision (MAP) [1] and recall@k, in addition to what is already supported (e.g. ERR, nDCG, MRR). Somet…
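For concreteness, a minimal sketch of MAP's per-query building block and recall@k, assuming binary relevance judgments in ranked order (names here are illustrative, not from the project):

```python
def average_precision(ranked_relevance, num_relevant=None):
    """AP for one query. `ranked_relevance` holds 0/1 judgments in ranked
    order; `num_relevant` is the total number of relevant documents
    (defaults to those present in the ranking). MAP is the mean of this
    value over all queries."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank  # precision at this relevant rank
    denom = num_relevant if num_relevant is not None else hits
    return total / denom if denom else 0.0

def recall_at_k(ranked_relevance, num_relevant, k):
    """Fraction of all relevant documents retrieved in the top k."""
    return sum(ranked_relevance[:k]) / num_relevant if num_relevant else 0.0

# Relevant docs at ranks 1 and 3, out of 4 relevant overall:
print(average_precision([1, 0, 1, 0], num_relevant=4))  # (1/1 + 2/3) / 4 ≈ 0.417
print(recall_at_k([1, 0, 1, 0], num_relevant=4, k=3))   # 2/4 = 0.5
```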
-
In all demo algorithms, `update_objective_interval = 10`.
https://github.com/SyneRBI/PETRIC/blob/fb14c8772213da4852662344eec6dfca53a3b209/main_BSREM.py#L35
1. Are we allowed to change that valu…
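For context, a toy sketch (not the PETRIC code) of what a parameter like `update_objective_interval = 10` typically controls: the objective is evaluated and logged only every tenth iteration, since computing it can be expensive. The `run`, `step`, and `objective` names below are assumptions for illustration:

```python
def run(x, step, objective, max_iterations=100, update_objective_interval=10):
    """Apply `step` repeatedly; evaluate and log the (possibly expensive)
    objective only every `update_objective_interval` iterations."""
    for it in range(1, max_iterations + 1):
        x = step(x)
        if it % update_objective_interval == 0:
            print(f"iter {it}: objective = {objective(x):.6g}")
    return x

# Toy usage: gradient descent on f(x) = x**2 with step size 0.1.
run(10.0, step=lambda x: x - 0.1 * (2 * x), objective=lambda x: x * x)
```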
-
We would like to ask about the output of the evaluation code.
As written in the GitHub instructions, we run
$ bash scripts/eval_estimate.sh
It seems there are no image outputs and we are wondering i…
-
Hello author, I have checked the dataset you provided. Most of the images are blurry, but a few are clear. If all the images in the dataset were blurry, would your method still work?
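As an aside, a common way to quantify such an observation is the variance of the Laplacian (sketch assuming OpenCV; `blur_score` and any threshold choice are illustrative, not from the repo):

```python
import cv2

def blur_score(path):
    """Variance of the Laplacian; lower values indicate blurrier images."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    return cv2.Laplacian(img, cv2.CV_64F).var()
```

The threshold separating "blurry" from "clear" is dataset-dependent and would need to be tuned by inspection.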
-
[ ] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
I am able to calculate 4 metrics {'faithfulness': 1.0000, 'an…
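For comparison, a sketch of a typical ragas evaluation call (0.1.x-era API, which has changed across versions, so treat it as illustrative; the sample row is made up and a real run needs an LLM API key configured):

```python
from datasets import Dataset          # Hugging Face datasets
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# Made-up single-row dataset for illustration.
data = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "answer": ["Paris is the capital of France."],
    "contexts": [["Paris has been the capital of France since 508."]],
})
result = evaluate(data, metrics=[faithfulness, answer_relevancy])
print(result)
```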
-
Excuse me, author: I browsed your test_xnet3d.py but didn't find where your evaluation metrics (Dice, Jaccard, HD, ASD) are calculated. I would be grateful for your answer!
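For reference, a minimal NumPy sketch (not the repo's code) of Dice and Jaccard on binary masks; HD and ASD need surface distances, e.g. via `medpy.metric.binary.hd`/`.asd` or SciPy's `directed_hausdorff`:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def jaccard(pred, gt, eps=1e-7):
    """Jaccard index (IoU) between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)
```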
-
> Please provide us with the following information:
### This issue is for a: (mark with an `x`)
```
- [x] bug report -> please…