Closed: haofanwang closed this issue 2 years ago
BTW, most of the links in the README do not work; it would be great to replace them.
P2: AttributeError: 'ConfigDict' object has no attribute 'nms'
A2: This is caused by a conflict between the old config and the latest versions of mmdet/mmcv/mmedit. You can directly modify YOUR-PATH-TO-MMDET/mmdet/models/dense_heads/rpn_head.py and add cfg.nms = dict(type='nms', iou_threshold=0.7). Possibly, you will then get another AttributeError for 'max_per_img', so you also need to add cfg.max_per_img = 100 afterwards.
On which line do I have to add this?
You can refer to the error message to see where the cfg is used.
dets, _ = batched_nms(proposals, scores, ids, cfg.nms)
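If it helps, here is a minimal, self-contained sketch of the idea. It uses types.SimpleNamespace as a stand-in for mmcv's ConfigDict (an assumption purely for illustration): add guarded defaults right before the line where cfg.nms is first read, so the fix is harmless if a newer config already defines the keys.

```python
from types import SimpleNamespace

# Stand-in for mmcv's ConfigDict, just to illustrate the idea.
# An old-style test cfg that lacks the keys newer mmdet expects:
cfg = SimpleNamespace(nms_pre=1000)

# Guarded defaults, to be placed right before batched_nms(...) is called:
if not hasattr(cfg, "nms"):
    cfg.nms = dict(type="nms", iou_threshold=0.7)
if not hasattr(cfg, "max_per_img"):
    cfg.max_per_img = 100

print(cfg.nms["iou_threshold"], cfg.max_per_img)  # 0.7 100
```

With the guards in place, the same two lines work whether your installed mmdet expects the new keys or not.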
Sorry for taking so long to respond. Did you mean like this?
if proposals.numel() > 0:
dets, _ = batched_nms(proposals, scores, ids, cfg.nms = dict(type='nms', iou_threshold=0.7))
else:
return proposals.new_zeros(0, 5)
return dets[:cfg.max_per_img = 100]
I usually set it before calling it, something like below.
cfg.nms = dict(type='nms', iou_threshold=0.7)
dets, _ = batched_nms(proposals, scores, ids, cfg.nms)
I already added this:
if proposals.numel() > 0:
cfg.nms = dict(type='nms', iou_threshold=0.7)
dets, _ = batched_nms(proposals, scores, ids, cfg.nms)
else:
return proposals.new_zeros(0, 5)
return dets[:cfg.max_per_img]
and it says
Process PreprocessConsumer_0:
Traceback (most recent call last):
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\services\preprocess.py", line 80, in run
    visual=True,
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\tools\processors\base_preprocessor.py", line 114, in execute
    self._execute_post_parser(processed_info)
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\tools\processors\preprocessors.py", line 244, in _execute_post_parser
    out_img_dir, out_parse_dir, valid_img_names, save_visual=False
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\tools\human_mattors\point_render_parser.py", line 222, in run
    has_person, segm_mask, trimap, pred_alpha = self.run_matting(img_path)
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\tools\human_mattors\point_render_parser.py", line 164, in run_matting
    has_person, segm_mask, trimap = self.run_detection(img_or_path)
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\tools\human_mattors\point_render_parser.py", line 112, in run_detection
    result = inference_detector(self.detection_model, img_path)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmdet\apis\inference.py", line 151, in inference_detector
    results = model(return_loss=False, rescale=True, **data)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\torch\nn\modules\module.py", line 1186, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmcv\runner\fp16_utils.py", line 116, in new_func
    return old_func(*args, **kwargs)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmdet\models\detectors\base.py", line 174, in forward
    return self.forward_test(img, img_metas, **kwargs)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmdet\models\detectors\base.py", line 147, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmdet\models\detectors\two_stage.py", line 179, in simple_test
    proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmdet\models\dense_heads\dense_test_mixins.py", line 130, in simple_test_rpn
    proposal_list = self.get_bboxes(*rpn_outs, img_metas=img_metas)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmcv\runner\fp16_utils.py", line 205, in new_func
    return old_func(*args, **kwargs)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmdet\models\dense_heads\base_dense_head.py", line 105, in get_bboxes
    **kwargs)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmdet\models\dense_heads\rpn_head.py", line 187, in _get_bboxes_single
    img_shape)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmdet\models\dense_heads\rpn_head.py", line 232, in _bbox_post_process
    dets, _ = batched_nms(proposals, scores, ids, cfg.nms)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmcv\ops\nms.py", line 350, in batched_nms
    dets, keep = nms_op(boxes_for_nms, scores, **nms_cfg)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmcv\utils\misc.py", line 340, in new_func
    output = old_func(*args, **kwargs)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmcv\ops\nms.py", line 176, in nms
    max_num)
  File "E:\REPOS\anaconda3\envs\iperc\lib\site-packages\mmcv\ops\nms.py", line 29, in forward
    bboxes, scores, iou_threshold=float(iou_threshold), offset=offset)
TypeError: 'module' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "E:\REPOS\anaconda3\envs\iperc\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\services\preprocess.py", line 83, in run
    except Exception("model error!") as e:
TypeError: catching classes that do not inherit from BaseException is not allowed
Pre-processing: digital deformation start...
Process HumanDigitalDeformConsumer_0:
Traceback (most recent call last):
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\services\preprocess.py", line 139, in run
    prepared_inputs = self.prepare_inputs_for_run_cloth_smpl_links(process_info)
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\services\preprocess.py", line 213, in prepare_inputs_for_run_cloth_smpl_links
    src_infos = process_info.convert_to_src_info(self.opt.num_source)
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\services\options\process_info.py", line 167, in convert_to_src_info
    src_infos = read_src_infos(self.vid_infos, num_source)
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\services\options\process_info.py", line 241, in read_src_infos
    smpls = estimated_smpls[valid_img_info["parse_ids"]]
KeyError: 'parse_ids'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "E:\REPOS\anaconda3\envs\iperc\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "E:\Deepfake Movement\iPERCore-main\iPERCore\services\preprocess.py", line 156, in run
    except Exception("model error!") as e:
TypeError: catching classes that do not inherit from BaseException is not allowed
Pre-processing: digital deformation completed...
the current number of sources are 1, while the pre-defined number of sources are 2.
Pre-processing: failed...
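A side note on the secondary TypeError in both tracebacks: iPERCore's preprocess.py writes except Exception("model error!") as e:, which tries to catch an exception *instance* rather than a class, and Python 3 rejects that outright. A minimal sketch of the correct pattern (the message string here is hypothetical):

```python
# `except` must name a class (or a tuple of classes), never an instance.
# Wrong (raises TypeError the moment an exception reaches it):
#     except Exception("model error!") as e:
# Right: catch the class, then build the message from the instance.
try:
    raise RuntimeError("boom")
except Exception as e:
    msg = f"model error! ({e})"

print(msg)  # model error! (boom)
```

This is why the real error (the nms failure or the missing 'parse_ids' key) gets masked by "catching classes that do not inherit from BaseException is not allowed".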
(iperc) E:\Deepfake Movement\iPERCore-main>
How do I fix this?
Any answer?
I usually set it before calling it, something like below.
cfg.nms = dict(type='nms', iou_threshold=0.7)
dets, _ = batched_nms(proposals, scores, ids, cfg.nms)
TypeError: 'module' object is not callable
Does that mean there is no answer for this kind of issue?
Please be polite. I just provided my solution, which is neither official nor elegant, but it does work for me.
I will not reply anymore. Closed.
It's an interesting project, but it is a little difficult to run the demo code because of environment problems. So I am putting my problems and final solutions here for anyone who meets the same issues.
P1: The provided Colab does not work.
A1: It has two problems. First, the download links for the checkpoints and samples are invalid; you can find new download addresses on OneDrive. Second, Colab no longer supports os.symlink or ln -s, which throws an intermediate error.
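If symlinking really is the blocker, one possible workaround is to fall back to copying when os.symlink fails. This is a hedged sketch, not part of iPERCore; the helper name link_or_copy is mine:

```python
import os
import shutil

def link_or_copy(src: str, dst: str) -> None:
    """Try to symlink src -> dst; fall back to copying when symlinks
    are unsupported (as the Colab environment reportedly is here)."""
    try:
        os.symlink(src, dst)
    except (OSError, NotImplementedError):
        if os.path.isdir(src):
            shutil.copytree(src, dst)
        else:
            shutil.copy2(src, dst)
```

Copying wastes disk space compared to a symlink, but it is robust across filesystems and restricted environments.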
P2: AttributeError: 'ConfigDict' object has no attribute 'nms'
A2: This is caused by a conflict between the old config and the latest versions of mmdet/mmcv/mmedit. You can directly modify YOUR-PATH-TO-MMDET/mmdet/models/dense_heads/rpn_head.py and add
cfg.nms = dict(type='nms', iou_threshold=0.7)
Possibly, you will then get another AttributeError for 'max_per_img', so you also need to add cfg.max_per_img = 100 afterwards.
P3: RuntimeError: nms is not compiled with GPU support
P4: /mmcv/_ext.cpython-37m-x86_64-linux-gnu.so: undefined symbol
A3 & A4: These are both caused by the mm packages. This repo uses relatively old versions that may not work with CUDA 11. I strongly suggest installing the newest versions (mmcv == 1.5.3, mmdet == 2.25.0, mmedit == 0.15.0) following the official instructions. Especially for mmcv-full, you need to install the build that matches your local torch & CUDA versions. If it still cannot detect CUDA, install it from source instead of from pip. After reinstalling, you may also need to do the steps in A2.
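For picking the matching mmcv-full wheel, the official install docs build a "find-links" URL from your CUDA and torch versions. A small helper sketch (the function name is mine; the URL pattern follows mmcv's installation guide, so double-check it against the docs for your versions):

```python
def mmcv_find_links(torch_version, cuda_version):
    """Build the mmcv-full wheel index URL for a torch/CUDA pair,
    e.g. torch 1.10.1 + CUDA 11.3 -> .../cu113/torch1.10/index.html."""
    torch_mm = ".".join(torch_version.split(".")[:2])  # keep major.minor only
    cu = "cpu" if cuda_version is None else "cu" + cuda_version.replace(".", "")
    return (f"https://download.openmmlab.com/mmcv/dist/"
            f"{cu}/torch{torch_mm}/index.html")

# Then install with, e.g.:
#   pip install mmcv-full==1.5.3 -f <that URL>
print(mmcv_find_links("1.10.1", "11.3"))
```

You can read your local versions from torch.__version__ and torch.version.cuda to feed into the helper.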
P5: KeyError: '04_left_leg'
A5: In ./iPERCore/tools/human_digitalizer/deformers/link_utils.py, change '04_left_leg' to '02_left_leg'.