saysth opened this issue 4 years ago
@saysth Hi, did you train your model from scratch? Can you share your hyperparameter setup? I'm running into problems when training from scratch with their default setup.
@SCoulY I just followed instruction 1. SEAM training: python train_SEAM.py --voc12_root VOC2012 --weights ./xxx.pth --session_name $your_session_name
The only change I made was to line 91, loading the weights as weights_dict = torch.load(args.weights, map_location='cpu')
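For anyone hitting a GPU/CPU mismatch when loading the pretrained weights, here is a minimal sketch of what that change can look like. It assumes a standard PyTorch state_dict and that args and model come from train_SEAM.py; the 'module.' stripping and strict=False are common workarounds I'm adding for illustration, not the repo's actual code.

import torch

# Load the checkpoint onto CPU first so a GPU-saved file also works before the
# model is moved to GPU (or on a CPU-only machine).
weights_dict = torch.load(args.weights, map_location='cpu')

# If the checkpoint was saved from nn.DataParallel, its keys carry a 'module.'
# prefix; stripping it is a common fix when load_state_dict complains about key names.
weights_dict = {k.replace('module.', '', 1): v for k, v in weights_dict.items()}

model.load_state_dict(weights_dict, strict=False)  # strict=False tolerates missing/extra keys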
@saysth Hi,I also encountered the same problem as you. Have you solved it? How was it resolved?
@li-shuang1997 No, I haven't.
I think it may be caused by line 68:
thread_pool = pyutils.BatchThreader(_work, list(enumerate(img_list)), batch_size=12, prefetch_size=0, processes=args.num_workers)
cam_list = thread_pool.pop_results()
Here the code runs _work in a thread pool, and I suspect the threading is what causes the hang.
I changed it to:
cam_list = [_work(idx, img) for idx, img in enumerate(img_list)]
I hope this helps.
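If you also want to see where it stalls, here is a slightly longer sketch of the same sequential replacement. It assumes _work(index, img) and img_list as they appear in infer_SEAM.py above; the try/except and print are just my addition to surface the exception instead of letting a worker thread swallow it.

# Sequential replacement for the BatchThreader call: running _work in the main
# process makes any exception visible, and the index shows which image failed.
cam_list = []
for idx, img in enumerate(img_list):
    try:
        cam_list.append(_work(idx, img))
    except Exception as e:
        print('inference failed at iter %d: %s' % (idx, e))
        raise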
Dear YudeWang, I have successfully trained the model, but during SEAM inference the code stops at a random iteration (38, 14, or whatever) and does not continue, and the out_cam (or out_crf) folder stops producing files. How can I resolve this?