-
I’ve also noticed that the evaluation of the regression model includes classification metrics such as accuracy, precision, recall, F1 score, and a confusion matrix. These metrics are specifically desig…
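For reference, here is a minimal sketch of the regression-oriented metrics (MAE, RMSE, R²) that would normally be reported instead of the classification metrics above, using scikit-learn; the arrays are placeholders, not the project's actual data.
```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Placeholder arrays standing in for the model's continuous predictions.
y_true = np.array([2.5, 0.0, 2.1, 7.8])
y_pred = np.array([3.0, -0.1, 2.0, 7.5])

mae = mean_absolute_error(y_true, y_pred)           # average absolute error
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # penalizes large errors more
r2 = r2_score(y_true, y_pred)                       # fraction of variance explained

print(f"MAE: {mae:.3f}, RMSE: {rmse:.3f}, R^2: {r2:.3f}")
```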
-
Hello,
After running the `logbatcher_eval.py` script with the command:
```
python logbatcher_eval.py --config logbatcher_2k
```
I observed that the evaluation output provides metrics such as Grou…
-
Here are useful resources to look into:
- Our lab metrics definition: https://github.com/ivadomed/MetricsReloaded
- Panoptica: [Panoptica -- instance-wise evaluation of 3D semantic and instance se…
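As a small point of reference (this is not Panoptica's or MetricsReloaded's actual API; see the links above for those), a per-instance Dice score can be computed along these lines, with illustrative label volumes only:
```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect overlap)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Illustrative instance-label volumes: 0 = background, 1..N = instance IDs.
pred = np.zeros((4, 4, 4), dtype=int)
ref = np.zeros((4, 4, 4), dtype=int)
pred[1:3, 1:3, 1:3] = 1
ref[1:4, 1:3, 1:3] = 1

# Instance-wise evaluation scores each matched (pred, ref) instance pair separately.
print("Dice for instance 1:", dice(pred == 1, ref == 1))
```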
-
Excellent work! Could you please make the metrics evaluation code public? We are having difficulty using the torch-fidelity tool.
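In case it helps others in the same situation, here is a minimal sketch of how torch-fidelity is commonly invoked from Python to compute FID/IS/KID between two image folders; the directory paths are placeholders, and the exact metric set used in the paper is an assumption.
```python
import torch_fidelity

# Placeholder directories of generated and reference images (PNG/JPG).
metrics = torch_fidelity.calculate_metrics(
    input1="path/to/generated_images",  # assumption: generated samples
    input2="path/to/real_images",       # assumption: reference/real set
    cuda=True,
    isc=True,   # Inception Score
    fid=True,   # Frechet Inception Distance
    kid=True,   # Kernel Inception Distance
)
print(metrics)
```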
-
Hi, authors!
Sorry to bother you again. In Table 3, I noticed that you used LPIPS as one of the evaluation metrics. However, to my knowledge, LPIPS is the LOWER the BETTER. If it is true,…
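For context, here is a minimal sketch of how LPIPS is typically computed with the reference `lpips` package, where a smaller distance means the two images are more perceptually similar; the tensors below are placeholders.
```python
import torch
import lpips

# LPIPS model with the AlexNet backbone (the package's default recommendation).
loss_fn = lpips.LPIPS(net="alex")

# Placeholder image tensors in [-1, 1], shape (N, 3, H, W).
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1

# Lower distance = more perceptually similar, so lower is better.
distance = loss_fn(img0, img1)
print(distance.item())
```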
-
### Describe the bug
When evaluating the model both centrally on the server and in a federated manner on the clients, after the server finishes evaluating the model for the current round and sends the `evaluate mess…
-
We got the following issues while running your code:
```bash
python test.py
2024-11-20 13:32:51.141934: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly diff…
-
Hi, thank you for your work!
I’m particularly interested in the evaluation metrics mentioned in the paper. I was wondering whether any code or scripts are available to reproduce the evaluation metr…
-
I want to be able to perform post-evaluation query filtering after evaluating a model on a retrieval benchmark. In other words, after evaluation has run, I want to be able to select a subset of the test…
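To make the request concrete, here is a minimal sketch of the kind of post-hoc filtering I have in mind, assuming the evaluator can expose per-query scores; the dictionary format and metric name are hypothetical.
```python
from statistics import mean

# Hypothetical per-query results returned by the evaluator:
# query_id -> metric value (e.g., nDCG@10).
per_query_scores = {
    "q1": 0.82,
    "q2": 0.40,
    "q3": 0.95,
    "q4": 0.10,
}

# Post-evaluation filter: keep only a subset of test queries
# (e.g., queries matching some metadata criterion).
selected_queries = {"q1", "q3"}

filtered = [score for qid, score in per_query_scores.items() if qid in selected_queries]
print("Metric over the selected subset:", mean(filtered))
```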
-
I want to get the evaluation metrics for the RANA dataset, including the relighting image, albedo, and normal errors reported in the paper. Could you please provide the running scripts?
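For clarity, here is a minimal sketch of the kind of metrics I mean, assuming normal maps stored as unit vectors and images in [0, 1]; this is not the paper's actual evaluation code, and the exact metric definitions used there are an assumption.
```python
import numpy as np

def mean_angular_error_deg(pred_normals: np.ndarray, gt_normals: np.ndarray) -> float:
    """Mean angular error in degrees between unit normal maps of shape (H, W, 3)."""
    cos = np.clip(np.sum(pred_normals * gt_normals, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio for images in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

# Placeholder arrays standing in for predicted vs. ground-truth relighting/albedo images.
pred_img, gt_img = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
print("Relighting PSNR:", psnr(pred_img, gt_img))
```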