-
Hi there,
When loading the test datasets, your code uses `sklearn.preprocessing.StandardScaler` to scale the original datasets. So, in order to produce the correct test results, it is expected to sc…
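For reference, a minimal sketch of the scaling convention this question seems to hinge on (an assumption on my part: statistics are computed on the training split only and then reused on the test split, mirroring `StandardScaler`'s `fit`/`transform` behaviour):

```python
import numpy as np

# Sketch, assuming the intended convention is: fit the scaler on the training
# data only, then apply the same statistics to the test data (this mirrors
# sklearn.preprocessing.StandardScaler's fit/transform split).
def fit_standardizer(X_train):
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0)  # StandardScaler uses the population std (ddof=0)
    return mean, std

def standardize(X, mean, std):
    return (X - mean) / std

X_train = np.array([[1.0], [2.0], [3.0]])
X_test = np.array([[2.0], [4.0]])

mean, std = fit_standardizer(X_train)   # statistics from training data only
X_test_scaled = standardize(X_test, mean, std)
```

Scaling the test set with its own statistics instead would leak test-set information and change the reported results, which is presumably what the question is getting at.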
-
Hi! Thanks for your great work.
I'm trying to reproduce your results, and I would like to know if you could make your evaluation script available. I have tried to use the LION repo's evaluation script, b…
-
Hi,
I wanted to know how we would fetch the batch id/index of the eval dataset in ```preprocess_logits_for_metrics()```?
Thanks in advance!
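One possible workaround, assuming (as in the Hugging Face Trainer API) that ```preprocess_logits_for_metrics(logits, labels)``` receives no batch index but is called once per eval batch in order, is to track the index in a closure:

```python
# Sketch of a closure-based counter; the Trainer only passes (logits, labels),
# so the batch index has to be reconstructed externally. This assumes batches
# are processed sequentially in order, which breaks under out-of-order eval.
def make_preprocess_fn():
    state = {"batch_idx": -1}

    def preprocess_logits_for_metrics(logits, labels):
        state["batch_idx"] += 1  # index of the current eval batch
        # ... use state["batch_idx"] for any per-batch bookkeeping ...
        return logits

    return preprocess_logits_for_metrics, state
```

Rebuild the function (or reset the counter) before each evaluation run, otherwise the index carries over between runs.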
-
Dear Ellis,
Thank you for your earlier responses; I have managed to run fivefold cross-validation training on the BraTS dataset. As you know, the Dice metric used here is the loss function. Similarl…
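For readers following along, a minimal sketch of the Dice coefficient the question refers to (a generic formulation, not necessarily this repo's exact implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    # pred and target are binary masks of the same shape;
    # Dice = 2|A ∩ B| / (|A| + |B|), with eps guarding empty masks.
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# The corresponding loss is simply 1 - dice_coefficient(pred, target).
```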
-
The tests so far are tightly coupled to the implementation rather than the interface. Changes to the internals often cause tests to fail because of small changes to timestamps or chunking (e.g. #5). T…
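One common way to decouple such tests from timestamp internals is to inject the clock rather than reading it directly. A generic sketch (`Event` and `clock` are hypothetical names, not from this repo):

```python
import datetime

class Event:
    # The clock is injected so tests can pin time deterministically
    # instead of asserting against wall-clock timestamps.
    def __init__(self, name, clock=datetime.datetime.now):
        self.name = name
        self.created_at = clock()

# In a test, pass a frozen clock:
frozen = lambda: datetime.datetime(2024, 1, 1)
event = Event("ping", clock=frozen)
assert event.created_at == datetime.datetime(2024, 1, 1)
```

The same idea applies to chunking: assert on the interface-level output (e.g. the reassembled data) rather than on the exact chunk boundaries.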
-
After training the PyTorch version for a while, the metrics stop changing. I trained twice, and this problem started at epoch 454.
![image](https://github.com/user-attachments/assets/2df1b5d0-70bb-…
-
It takes two hours to evaluate two metrics on 500 data records. Is there any way to accelerate the evaluation?
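Without knowing this repo's internals, one generic way to speed up a per-record evaluation is to parallelise it. A sketch, where `evaluate_record` is a hypothetical stand-in for the real metric computation:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_record(record):
    # Placeholder for the real per-record metric computation.
    return record * 2

def evaluate_all(records, workers=8):
    # Threads help when the per-record work releases the GIL (I/O, network
    # calls, or native code such as numpy); for pure-Python CPU-bound
    # metrics, swap in ProcessPoolExecutor instead.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate_record, records))
```

If the bottleneck is model inference rather than the metric itself, batching the 500 records through the model is usually a bigger win than parallelising the metric code.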
-
**Is your feature request related to a problem? Please describe.**
The current evaluation metrics in `evaluate/anomaly.py` assume that ground truth is available. However, in many time series anomaly d…
-
Following the documentation (https://www.tensorflow.org/recommenders/examples/efficient_serving#evaluating_the_approximation), I am trying to compare the performance of the ScaNN evaluation and BruteForc…
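For context, the comparison that guide describes boils down to measuring how often the approximate (ScaNN) index retrieves the same top-k candidates as the exact BruteForce index. A minimal sketch of that recall measure (a hypothetical helper, not the TFRS API):

```python
def topk_recall(approx_ids, exact_ids):
    # Fraction of the exact top-k results that the approximate index
    # also returned; 1.0 means the approximation lost nothing.
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

# e.g. approximate search recovered 2 of the exact top 3:
# topk_recall([7, 3, 9], [7, 3, 5]) -> 2/3
```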
-
Firstly, thank you for your great work. I am new to lane detection and have some questions about the metrics (TP, FP, and FN). These metrics are not similar to evaluation results based on pixels th…
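For other newcomers, here is the standard way such counts are turned into scores (the generic definitions only; this repo's matching criterion for what counts as a TP may well differ from pixel-based evaluation, which is presumably the source of the confusion):

```python
def precision_recall_f1(tp, fp, fn):
    # TP: predicted lanes matched to ground truth; FP: unmatched predictions;
    # FN: ground-truth lanes with no matching prediction.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

In lane detection, a TP is typically a whole predicted lane matched to a ground-truth lane under some threshold (e.g. IoU or point distance), rather than a per-pixel agreement, so the numbers need not line up with pixel-level scores.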