-
Compared with the planning evaluation numbers in Table 1 of your paper, the mini and small models I trained give lower evaluation numbers, as shown in the table below.
I use the data which yo…
-
Hi,
I'm running the training code to train on Cityscapes and evaluate on ACDC. I encountered some problems:
1. For warm-up training, I found the training is not correctly conducted; it directly jump…
-
I know that to run run_celebahq.py, I need to write `python3 scripts/run_celebahq.py train --bucket_name_prefix $BUCKET_PREFIX --exp_name $EXPERIMENT_NAME --tpu_name $TPU_NAME` or `python3 scripts/run…
-
Before #334, we only supported evaluating the model on a single fixed evaluation dataset.
In #334, we introduce an evaluation matrix in which we evaluate each trained model on each trigger tra…
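To make the shape of that change concrete, here is a minimal, hypothetical sketch of an evaluation matrix: every trained model is scored against every evaluation dataset instead of a single fixed one. All names here (`models`, `datasets`, `evaluate`) are illustrative, not the actual API from #334.

```python
def evaluate(model: str, dataset: str) -> str:
    # Placeholder metric; in practice this would run the real eval loop.
    return f"score({model}, {dataset})"

models = ["model_a", "model_b"]
datasets = ["clean", "triggered"]

# The "matrix": one entry per (model, dataset) pair.
matrix = {
    (m, d): evaluate(m, d)
    for m in models
    for d in datasets
}

for (m, d), score in matrix.items():
    print(f"{m} on {d}: {score}")
```

With M models and D datasets this yields M×D evaluations, which is why the single-dataset setup no longer sufficed.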
-
Thank you for releasing the code. I ran into some issues when testing on the IAPR-TC12 dataset: with the configuration below I obtained an accuracy of Img2Txt: 0.471749, Txt2Img: 0.468014, Avg: 0.469881. Could you tell me whether my parameter settings differ from yours in any way? Thanks.
Model configuration:
parser.add_argument("--data_name", type=str, default="iapr…
-
Hi, team. Thanks for releasing the exceptional work.
I tried to evaluate the released model (llava-v1.5) on the Town05 Long benchmark (with leaderboard/data/evaluation_routes/routes_town05_long.xml), and s…
-
### Issue Type
Feature Request
### Source
pip (model-compression-toolkit)
### MCT Version
1.11.0
### OS Platform and Distribution
ubuntu 20.04.1
### Python version
3.10
### Describe the iss…
-
Could you provide more detailed instructions for running the model? Thank you!
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
**Describe the bug**
I'm looking for a way to execute a model conditionally, depending on whether an input is set (e.g., using a conditional branch). But it seems that the expression is not correctly evaluated and is evaluated in a …