Closed: WangHeng1021 closed this issue 6 months ago.
The same problem here.
Hi, I previously encountered the same issue. The following was my solution:
Check whether you have accidentally removed or commented out the following lines in configs/default.yaml. They need to be present for the model to be wrapped in fp16:

fp16:
  loss_scale:
    growth_interval: 2000

Alternatively, in the python script you are executing, print out cfg['fp16']. It shouldn't be None. If it is None, add the lines above to configs/default.yaml.
Hope it helps!
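For reference, a quick sanity check on the raw config file (a minimal sketch; the path below is just an example, and your setup may merge several config files, so printing cfg['fp16'] inside the script you actually run remains the definitive check):

import yaml

with open("configs/default.yaml") as f:  # example path; use the config you actually pass in
    cfg = yaml.safe_load(f)

print(cfg.get("fp16"))  # expect {'loss_scale': {'growth_interval': 2000}}, not None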
I still encountered this problem even with a completely correct configs/default.yaml, and I suspect configs/default.yaml is not actually used in the visualization process at all.
Have you found a solution to this problem? I am having the same problem.
Have you found a solution to this problem? I still run into it even with a completely correct configs/default.yaml.
Have you found a solution to this problem? I still have the same problem.
In tools/visualize.py, the block

if args.mode == "pred":
    model = build_model(cfg.model)
    load_checkpoint(model, args.checkpoint, map_location="cpu")

should be modified to:

if args.mode == "pred":
    model = build_model(cfg.model)
    fp16_cfg = cfg.get("fp16", None)
    if fp16_cfg is not None:
        wrap_fp16_model(model)
    load_checkpoint(model, args.checkpoint, map_location="cpu")
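For this to run, wrap_fp16_model also needs to be imported. A minimal sketch of the whole patched snippet (the rest of tools/visualize.py is unchanged; load_checkpoint is assumed to be the mmcv.runner one the script already uses):

from mmcv.runner import load_checkpoint, wrap_fp16_model

if args.mode == "pred":
    model = build_model(cfg.model)
    # wrap the model in fp16 only when the config requests it,
    # mirroring the cfg.get("fp16", None) pattern used in mmdet's test tools
    fp16_cfg = cfg.get("fp16", None)
    if fp16_cfg is not None:
        wrap_fp16_model(model)
    load_checkpoint(model, args.checkpoint, map_location="cpu")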
I solved the problem by changing the code in mmdet3d/models/vtransforms/base.py and deleting mats_dict in two places:
around line 349, change x = self.get_cam_feats(img, depth, mats_dict) to x = self.get_cam_feats(img, depth),
and do the same around line 222.
After that it works, but I don't know what effect this change has.
I solved it the same way: in mmdet3d/models/vtransforms/base.py, around line 350, change x = self.get_cam_feats(img, depth, mats_dict) to x = self.get_cam_feats(img, depth), then rerun your python script.
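To make the edit concrete, here is the call site described above, sketched with comments (exact line numbers vary between commits of mmdet3d/models/vtransforms/base.py):

# before (around line 349/350, and similarly around line 222):
#     x = self.get_cam_feats(img, depth, mats_dict)
# after: drop the extra argument so the call matches a get_cam_feats that
# only accepts two arguments besides self, which is what the TypeError
# ("takes 3 positional arguments but 4 were given") reports
x = self.get_cam_feats(img, depth)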
Where is wrap_fp16_model() defined?
Add this at the top of the file to import it from the related library: from mmcv.runner import wrap_fp16_model
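If you want to confirm that your mmcv installation exposes it (it lives in mmcv/runner/fp16_utils.py in mmcv 1.x, the same module that appears in the traceback below), a quick check from the shell:

python -c "from mmcv.runner import wrap_fp16_model; print(wrap_fp16_model)"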
Hello [User's Name or Team's Name],
I hope this message finds you well. First and foremost, I want to express my appreciation for your work and contributions to this project. When I tried to visualize my training results with the following command, I ran into the error below. Could you please help me with it?
torchpack dist-run -np 1 python tools/visualize.py train_result/configs.yaml --mode pred --checkpoint train_result/latest.pth --bbox-score 0.2 --out-dir vis_result
/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
2023-10-18 12:23:02,407 - mmdet - INFO - load checkpoint from local path: pretrained/swint-nuimages-pretrained.pth
load checkpoint from local path: train_result/latest.pth
^-^ 0%|          | 0/81 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "tools/visualize.py", line 167, in <module>
    main()
  File "tools/visualize.py", line 89, in main
    outputs = model(**data)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/home/qqq/wh/bevfusion/mmdet3d/models/fusion_models/bevfusion.py", line 253, in forward
    outputs = self.forward_single(
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/home/qqq/wh/bevfusion/mmdet3d/models/fusion_models/bevfusion.py", line 301, in forward_single
    feature = self.extract_camera_features(
  File "/home/qqq/wh/bevfusion/mmdet3d/models/fusion_models/bevfusion.py", line 133, in extract_camera_features
    x = self.encoders["camera"]["vtransform"](
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/qqq/wh/bevfusion/mmdet3d/models/vtransforms/depth_lss.py", line 100, in forward
    x = super().forward(*args, **kwargs)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func
    return old_func(*args, **kwargs)
  File "/home/qqq/wh/bevfusion/mmdet3d/models/vtransforms/base.py", line 350, in forward
    x = self.get_cam_feats(img, depth, mats_dict)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func
    return old_func(*args, **kwargs)
TypeError: get_cam_feats() takes 3 positional arguments but 4 were given