-
Hi, thanks for providing detailed evaluation scripts.
I have a small question: when we evaluate models in the bridge environment, how can we calculate the success rate of grasping objects like spoon an…
-
-
Hello, nice to meet you.
I am having trouble evaluating your project.
When the experiments/experiment.jl script from the GitHub repo is run, there are some run-time issues.
I want to get the running Julia sc…
-
Thanks for providing the code.
Could you provide the BLEU script that you used for the benchmark results in your paper? I am not able to reproduce the BLEU score of 56.67 (for Multimodal HRED (2) that …
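(It is hard to compare numbers without the exact script: BLEU is sensitive to tokenization, smoothing, and corpus- vs. sentence-level averaging, so different implementations routinely disagree by several points. As a neutral reference point, a minimal stdlib corpus-BLEU with uniform weights up to 4-grams, a single reference per hypothesis, and no smoothing might look like the sketch below; this is an illustrative baseline, not the paper's script.)

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU (0-100), single reference per hypothesis, no smoothing.

    `references` and `hypotheses` are parallel lists of token lists.
    """
    clipped = [0] * max_n  # clipped n-gram matches, per order
    totals = [0] * max_n   # hypothesis n-gram counts, per order
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        ref_len += len(ref)
        hyp_len += len(hyp)
        for n in range(1, max_n + 1):
            hyp_ngrams = ngrams(hyp, n)
            ref_ngrams = ngrams(ref, n)
            totals[n - 1] += sum(hyp_ngrams.values())
            # Clip each hypothesis n-gram count by its count in the reference.
            clipped[n - 1] += sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    if min(totals) == 0 or min(clipped) == 0:
        return 0.0  # no smoothing: any zero precision gives BLEU = 0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    # Brevity penalty: penalize hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(log_prec)
```

Checking which of these choices (smoothing, tokenizer, case handling) the repo's script makes is usually the fastest way to explain a gap like this.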
-
Hi!
Firstly, thanks for answering my other issue so promptly.
I am also confused by your E2E_TOD evaluation scripts. The function queryJsons() in dp_ops.py seems to return all of the names of…
-
Hi,
I have a few questions regarding the Carla results.
1) For the paper, did you train VAD on Carla and report those evaluation results, or did you use your open-loop checkpoint (pretrained on nuscen…
-
### System Info
```Shell
For all accelerate versions
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folde…
-
### System Info
When running training and evaluation (`_inner_training_loop` in the HF `Trainer`), the auto-find-batch-size logic tries to reduce the training batch size, even when the OOM happens during e…
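(For context: the helper behind this feature is `accelerate.utils.find_executable_batch_size`, which the HF `Trainer` wraps around `_inner_training_loop`; it retries the wrapped function with half the batch size whenever an out-of-memory error escapes. The sketch below is a simplified stdlib stand-in for that halving behavior, not the actual accelerate implementation, but it illustrates why any OOM raised inside the wrapped loop, including one from evaluation, shrinks the training batch size.)

```python
import functools

def find_executable_batch_size(function, starting_batch_size=128):
    """Retry `function(batch_size, ...)` with half the batch size on OOM.

    Simplified stand-in for the accelerate helper of the same name: the real
    one additionally frees CUDA caches and inspects several error types.
    """
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        batch_size = starting_batch_size
        while batch_size > 0:
            try:
                return function(batch_size, *args, **kwargs)
            except RuntimeError as e:
                if "out of memory" not in str(e):
                    raise  # not an OOM: propagate unchanged
                batch_size //= 2  # OOM anywhere in the wrapped loop: halve and retry
        raise RuntimeError("No executable batch size found; reached zero.")
    return wrapper
```

Because the whole train-plus-eval loop is inside the wrapper, the retry cannot tell a training-step OOM from an evaluation-step one; it only sees the exception.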
-
Hi,
Thanks for uploading the code. I tried to run the evaluation script by following the README:
python exp.py --net resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16 --cm…
-
Hi, thank you for your work!
I’m particularly interested in the evaluation metrics mentioned in the paper. I was wondering if any code or scripts are available to reproduce the evaluation metr…