-
Hi,
I have been testing out your new program, which I was hoping would fill a niche in our tumor SV pipeline. After running svaba on a number of in-house benchmarking datasets, I've noticed that …
-
| Aspect | Description |
| -- | -- |
| Challenges | The increasing complexity of models and code raises questions about how code is verified and tested. ICES advice output could be directly impacted by errors… |
-
Hi Xu,
Thanks for your inspiring work.
In Table V of your paper, you report the relative localization results of your experiment.
1. How do you calculate the relative localization error? Ki…
-
I would like to evaluate the [pretrained MobileNet model](https://github.com/mlcommons/tiny/blob/master/benchmark/training/visual_wake_words/convert_vww.py) on the preprocessed COCO2014 **test set**, …
-
I think these evaluation methods are much needed. You do not have them now, right?
WithinSubjectEvaluation() - evaluates the performance on all sessions for the same subject
WithinDatasetEvaluati…
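To make the proposed semantics concrete, here is a minimal sketch of what a within-subject evaluation could look like. This is purely illustrative: the function name, the nested `{subject: {session: (X, y)}}` data layout, and the scoring callback are all assumptions, not an existing API.

```python
# Hypothetical within-subject evaluation: score each subject on all of
# that subject's sessions, then average per subject.
from statistics import mean

def within_subject_evaluation(data, score_fn):
    """data: {subject_id: {session_id: (X, y)}} (assumed layout).
    score_fn: callable scoring one (X, y) session, returning a float.
    Returns {subject_id: mean score over that subject's sessions}."""
    results = {}
    for subject, sessions in data.items():
        scores = [score_fn(X, y) for X, y in sessions.values()]
        results[subject] = mean(scores)
    return results

# Toy usage with a dummy scorer (session length):
data = {"s01": {"sess1": ([1, 2], [0, 1]), "sess2": ([3], [1])}}
print(within_subject_evaluation(data, lambda X, y: len(X)))  # {'s01': 1.5}
```

A within-dataset variant would differ only in the grouping key: pool sessions across all subjects of one dataset before averaging.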
-
Hello! We have recently run and evaluated this algorithm on the dataset of our video quality metrics benchmark. The distortions in the dataset are compression artifacts on professional and user-gen…
-
I am attempting to use DivNet and can't get divnet() to work when my tables are clustered at a higher percentage similarity. I have tried clustering at various similarity percentages; 80% similar…
-
#23 introduces some differences in the created datasets. The difference is present only in the `Fraction of inspired oxygen` columns (about 50 stays affected). All the differences have the same form: …
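One quick way to pin down which stays are affected is to diff the two dataset versions column-wise. The sketch below uses `pandas.DataFrame.compare`; the column name comes from the issue, but the DataFrames themselves are toy stand-ins, not the actual datasets.

```python
# Hedged sketch: locate rows where two versions of a generated dataset
# disagree. Only the toy values below are invented.
import pandas as pd

old = pd.DataFrame({"stay_id": [1, 2, 3],
                    "Fraction of inspired oxygen": [0.21, 0.40, 0.50]})
new = pd.DataFrame({"stay_id": [1, 2, 3],
                    "Fraction of inspired oxygen": [0.21, 0.45, 0.50]})

# compare() keeps only the cells that differ, paired as (self, other).
diff = old.compare(new)
print(diff)  # one row (index 1), self=0.40 vs other=0.45
```

Running this on the real pre- and post-#23 outputs would list exactly the ~50 affected stays and the before/after values.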
-
## ❓ Questions and Help
Hi,
I have two questions relating to storing training/val/test accuracies/losses -
1. In the log file, where can I find mAP values for training, validation, and testing…
-
Hi, I want to train the detector with train_net.py; could you please give me some guidance on how to organize the data and how to pass the parameters? Thanks. @pzzhang