MingtaoFu / gliding_vertex

The implementation of paper "Gliding vertex on the horizontal bounding box for multi-oriented object detection".
271 stars 63 forks

No module named 'dota_utils' #4

Closed aimuch closed 4 years ago

aimuch commented 4 years ago

Hi, when I run `python prepare.py`, I get this error:

```
ModuleNotFoundError: No module named 'dota_utils'
```
MingtaoFu commented 4 years ago

It is likely caused by importing a module that lives in `DOTA_devkit`. Try adding that folder to your `PYTHONPATH`:

```shell
cd maskrcnn_benchmark/DOTA_devkit
export PYTHONPATH=$PYTHONPATH:`pwd`
```
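If modifying the environment is inconvenient, an equivalent workaround (a hypothetical sketch, not code from this repository) is to extend `sys.path` near the top of `prepare.py`, before the `dota_utils` import. The relative path below assumes the script is run from the repository root:

```python
import sys
from pathlib import Path

# Hypothetical alternative to exporting PYTHONPATH: make DOTA_devkit
# importable at runtime. The relative path assumes prepare.py is run
# from the gliding_vertex repository root.
devkit_dir = Path("maskrcnn_benchmark/DOTA_devkit").resolve()
if str(devkit_dir) not in sys.path:
    sys.path.insert(0, str(devkit_dir))  # after this, "import dota_utils" can resolve
```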
aimuch commented 4 years ago

@MingtaoFu Thanks, it works.

```shell
$ echo $PYTHONPATH
/root/gliding_vertex/maskrcnn_benchmark/:/root/gliding_vertex/maskrcnn_benchmark/DOTA_devkit/
```

But there is another error when I run `python -m torch.distributed.launch --nproc_per_node=3 tools/train_net.py --config-file configs/glide/dota.yaml`:

```
ModuleNotFoundError: No module named 'maskrcnn_benchmark'
```
MingtaoFu commented 4 years ago

Could you please post the complete error information?

aimuch commented 4 years ago

> Could you please post the complete error information?

```
(pytorch) root@e62aafd8a04c:~/gliding_vertex# python -m torch.distributed.launch --nproc_per_node=1 tools/train_net.py --config-file configs/glide/dota.yaml
configs/glide/dota.yaml
Traceback (most recent call last):
  File "tools/train_net.py", line 206, in <module>
    main()
  File "tools/train_net.py", line 173, in main
    cfg.merge_from_file(args.config_file)
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/yacs/config.py", line 213, in merge_from_file
    self.merge_from_other_cfg(cfg)
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/yacs/config.py", line 217, in merge_from_other_cfg
    _merge_a_into_b(cfg_other, self, self, [])
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/yacs/config.py", line 460, in _merge_a_into_b
    _merge_a_into_b(v, b[k], root, key_list + [k])
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/yacs/config.py", line 473, in _merge_a_into_b
    raise KeyError("Non-existent config key: {}".format(full_key))
KeyError: 'Non-existent config key: INPUT.RANDOM_ROTATE_ON'
Traceback (most recent call last):
  File "/root/anaconda3/envs/pytorch/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/root/anaconda3/envs/pytorch/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/distributed/launch.py", line 235, in <module>
    main()
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/distributed/launch.py", line 231, in main
    cmd=process.args)
subprocess.CalledProcessError: Command '['/root/anaconda3/envs/pytorch/bin/python', '-u', 'tools/train_net.py', '--local_rank=0', '--config-file', 'configs/glide/dota.yaml']' returned non-zero exit status 1.
```

MingtaoFu commented 4 years ago

I have re-cloned this project and checked it. The line cfg.merge_from_file(args.config_file) runs normally. Have you modified some parts of this project? Generally, this problem occurs when the key in dota.yaml is not in gliding_vertex/maskrcnn_benchmark/config/defaults.py. Please ensure that INPUT.RANDOM_ROTATE_ON is in the file.
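For reference, yacs raises `Non-existent config key` when a YAML key has no counterpart in the defaults tree, so the key needs to be registered in `defaults.py` roughly like this (the default value `False` below is an assumption; check the repository for the actual line):

```python
# In maskrcnn_benchmark/config/defaults.py (a yacs CfgNode tree).
# A key set in dota.yaml must already exist here, or merge_from_file raises
# KeyError. The default value False is an assumption for illustration.
_C.INPUT.RANDOM_ROTATE_ON = False
```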

aimuch commented 4 years ago

> I have re-cloned this project and checked it. The line cfg.merge_from_file(args.config_file) runs normally. Have you modified some parts of this project? Generally, this problem occurs when the key in dota.yaml is not in gliding_vertex/maskrcnn_benchmark/config/defaults.py. Please ensure that INPUT.RANDOM_ROTATE_ON is in the file.

Thanks, it works. But my GPU only has 8 GB of memory. I tried reducing `IMS_PER_BATCH` and shrinking the input size, but it still runs out of GPU memory.
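For anyone trying the same thing, memory-saving overrides in `configs/glide/dota.yaml` would look roughly like this (the values below are illustrative guesses, not the repository's settings; tune them for your GPU):

```yaml
# Hypothetical memory-saving overrides; actual workable values depend on the GPU.
SOLVER:
  IMS_PER_BATCH: 1
INPUT:
  MIN_SIZE_TRAIN: (600,)
  MAX_SIZE_TRAIN: 1000
```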

xs-trinity-lwei commented 4 years ago

You can try changing the `num_workers` parameter.

MingtaoFu commented 4 years ago

@aimuch @xs-trinity-lwei I don't think it is caused by `num_workers`, because that only affects the speed of data loading. Empirically, I believe it is caused by an overly large IoU matrix. In fact, if you look through the issues of maskrcnn_benchmark, you will see that many people run into this. We faced the problem too, even on a Titan Xp with 12 GB of memory. To address it, we decompose the computation of the IoU matrix. You can find the snippets in the related Python files; there is a variable that controls the decomposition granularity, so try tuning it.
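The decomposition idea can be sketched as follows. This is a generic NumPy illustration, not the repository's actual code: the function names and the `chunk` parameter are made up here, and `chunk` plays the role of the granularity variable mentioned above. Instead of materializing one huge N×M intermediate, the rows of the IoU matrix are computed a chunk at a time, bounding peak memory.

```python
import numpy as np

def box_iou(a, b):
    """Pairwise IoU between two sets of axis-aligned boxes (x1, y1, x2, y2)."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])  # top-left of intersection, (N, M, 2)
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])  # bottom-right of intersection
    wh = np.clip(rb - lt, 0, None)                   # zero where boxes don't overlap
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def box_iou_chunked(a, b, chunk=256):
    """Same result as box_iou, but computed `chunk` rows at a time so the
    largest intermediate is (chunk, M, 2) instead of (N, M, 2)."""
    return np.concatenate(
        [box_iou(a[i:i + chunk], b) for i in range(0, len(a), chunk)]
    )
```

The trade-off is a Python-level loop over chunks in exchange for a bounded peak allocation; a smaller `chunk` uses less memory but adds loop overhead.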