Closed — xwhkkk closed this issue 1 year ago
By default, training goes from stage 0 → 3. The provided model is in stage 3, and the existing code does not support "going back" to stage 0; basically, the size of the first conv layer will not match. Look at this method https://github.com/hkchengrex/Mask-Propagation/blob/ec9309f04ae3c98edb2eb5675c937a699d80006f/model/model.py#L188 if you want to change it.
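A hedged sketch of one way to handle the mismatch described above: load the checkpoint but drop every parameter whose shape no longer matches the target model (e.g. the first conv). This is not the repository's actual code; `FakeTensor`, the parameter names, and the shapes are all hypothetical stand-ins so the demo runs without PyTorch installed.

```python
class FakeTensor:
    """Minimal stand-in exposing only .shape, like a torch.Tensor (for demo only)."""
    def __init__(self, shape):
        self.shape = shape

def filter_matching(model_state, ckpt_state):
    """Keep only checkpoint entries whose name and shape match the model."""
    return {
        name: value
        for name, value in ckpt_state.items()
        if name in model_state
        and tuple(value.shape) == tuple(model_state[name].shape)
    }

# Toy example: the checkpoint's first conv expects a different number of
# input channels than the target model, so it is dropped on load.
model = {
    "conv1.weight": FakeTensor((64, 4, 7, 7)),   # hypothetical stage-0 shape
    "layer1.weight": FakeTensor((64, 64, 3, 3)),
}
ckpt = {
    "conv1.weight": FakeTensor((64, 5, 7, 7)),   # hypothetical stage-3 shape
    "layer1.weight": FakeTensor((64, 64, 3, 3)),
}

filtered = filter_matching(model, ckpt)
print(sorted(filtered))  # -> ['layer1.weight']
```

With real PyTorch, the filtered dict would then be applied via `model.load_state_dict(filtered, strict=False)`, leaving the mismatched layers at their fresh initialization.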
Thanks for your reply! Apart from the method you mentioned, could I train on my personal cloud image dataset in stage 0 and then follow your main video training?
If I understand you correctly, you can just not load the pretrained model.
I don't know what your data looks like, but I think you need a strong temporal smoothness prior to segment clouds, which our model does not provide.
I want to solve some special object segmentation problems, such as smoke and clouds, but these kinds of objects are lacking in the DAVIS dataset, so I want to add them in stage 0. The data looks like this.
Hello! I want to train the PropagationNetwork on my personal image dataset (based on the pretrained S012 model), so I use the training command

```
CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 9842 --nproc_per_node=2 train.py --id retrain_s01 --load_network ./saves/propagation_model.pth --stage 0
```

It throws a runtime error. The training command works fine without the `--load_network` parameter. Could you give me some suggestions?