tianrun-chen / SAM-Adapter-PyTorch

Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts
MIT License

Question about the training procedure #4

Closed littletomatodonkey closed 1 year ago

littletomatodonkey commented 1 year ago

Hi, thanks for open-sourcing this work! I trained directly on the CAMO data and got the metrics below, which are quite a bit worse than the paper. What could be the reason?

metric1: 0.4876
metric2: 0.6093
metric3: 0.2453
metric4: 0.2108

My training script is as follows.

python3 -m torch.distributed.launch \
--master_port=12000 --nnodes 1 --nproc_per_node 4 \
train.py \
--config configs/demo.yaml
tianrun-chen commented 1 year ago

Greetings! Could you kindly verify that the training protocol described in the manuscript was followed? For our experiments, we adopted the approach of prior studies and used a COMBINED training set comprising both the CAMO and COD-10K training sets. We evaluate on both the CAMO and COD-10K test sets, given CAMO's limited number of training samples. We will also release the pre-trained weights for verification.
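For reference, a minimal sketch of combining the CAMO and COD-10K training sets with torch.utils.data.ConcatDataset; this is not the repository's actual data pipeline, and the dataset class, directory layout, and paths below are assumptions:

import os
from PIL import Image
from torch.utils.data import ConcatDataset, DataLoader, Dataset
from torchvision import transforms

class ImageMaskDataset(Dataset):
    # Minimal image/mask pair loader; the directory layout and mask
    # extension (.png) are assumptions, not the repo's real loader.
    def __init__(self, image_dir, mask_dir):
        self.image_dir, self.mask_dir = image_dir, mask_dir
        self.names = sorted(os.listdir(image_dir))
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask_name = os.path.splitext(name)[0] + ".png"
        mask = Image.open(os.path.join(self.mask_dir, mask_name)).convert("L")
        return self.to_tensor(image), self.to_tensor(mask)

# Combined training set: CAMO train + COD-10K train (paths are placeholders).
combined_train = ConcatDataset([
    ImageMaskDataset("data/CAMO/train/images", "data/CAMO/train/masks"),
    ImageMaskDataset("data/COD10K/train/images", "data/COD10K/train/masks"),
])
# batch_size=1 also sidesteps collating images of different sizes.
loader = DataLoader(combined_train, batch_size=1, shuffle=True, num_workers=4)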

zhengziqiang commented 1 year ago

(quoting littletomatodonkey's original question and metrics above)

Could you share what training resources you used? I used 4x RTX 3090 but could not run the experiments.

littletomatodonkey commented 1 year ago

(quoting tianrun-chen's reply above)

Got it. I only used the CAMO data, whose training set has 1,000 images, so this likely doesn't match your training procedure. Thanks! Looking forward to the open-sourced checkpoint for evaluation.

littletomatodonkey commented 1 year ago

(quoting zhengziqiang's question about training resources above)

4x A100 (80 GB). SAM is quite demanding on GPU memory; you could try changing the batch size to 1.
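In case it helps others hitting the same memory limit, a minimal sketch of overriding the batch size from Python before launching; the key path below is a guess, so check configs/demo.yaml for the field the training script actually reads:

import yaml

with open("configs/demo.yaml") as f:
    cfg = yaml.safe_load(f)

# Key path is hypothetical; inspect the config for the real batch size field.
cfg["train_dataset"]["batch_size"] = 1

with open("configs/demo_bs1.yaml", "w") as f:
    yaml.safe_dump(cfg, f)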

zhengziqiang commented 1 year ago

(quoting littletomatodonkey's reply about 4x A100 above)

Thanks for sharing. I tried changing the batch size to 1, but it still doesn't work. For now I have switched to fine-tuning only the mask decoder part.

tianrun-chen commented 1 year ago

We use 4x A100 (80 GB) for training. We have released the pretrained weights! Thanks for your interest!

Jack-bo1220 commented 1 year ago

(quoting zhengziqiang's comment about fine-tuning only the mask decoder above)

Hi, how can I change the code to fine-tune only the mask decoder part? Thanks.
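For reference, a minimal sketch of one way to freeze everything except the mask decoder, assuming the upstream segment-anything Sam layout with image_encoder, prompt_encoder, and mask_decoder attributes; SAM-Adapter's own wrapper may organize the model differently:

import torch
from segment_anything import sam_model_registry

# Load a SAM backbone (ViT-H checkpoint name as released by Meta AI).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

# Freeze the image encoder and prompt encoder.
for p in sam.image_encoder.parameters():
    p.requires_grad = False
for p in sam.prompt_encoder.parameters():
    p.requires_grad = False

# Only the mask decoder's parameters remain trainable.
trainable = [p for p in sam.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # learning rate is illustrative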