Cogito2012 / UString

[ACM MM 2020] Uncertainty-based Traffic Accident Anticipation
MIT License

where is the *_result.npz file #1

Closed linchunmian closed 4 years ago

linchunmian commented 4 years ago

Hi, thanks for your work. When I run demo.py, I get the error message 'no 000821_result.npz file'. I followed your instructions to prepare the dataset and the pretrained model, but I really can't find the result file. Please help me! Thanks in advance.

Cogito2012 commented 4 years ago

Thanks for your interest in this work! For your issue, you may need to check the run_demo.sh file to see which directory the *_result.npz file is written to, and make sure it has actually been generated there.
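
For a quick check, something like the following works (file names here follow the demo defaults, e.g. demo/000821.mp4):

# see which output path the script passes to demo.py for the result file
grep -n "result" run_demo.sh
# check whether the inference stage has actually produced it
ls -lh demo/000821_result.npz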

linchunmian commented 4 years ago

Thanks for the reply. Do I need to run demo.py with the inference option to generate the result.npz file? Also, if I directly use the extracted features that you provide, does that mean I don't need to install mmdetection?

Cogito2012 commented 4 years ago

Right. You can see that feature extraction and inference are two separate stages in run_demo.sh, which just calls demo.py. So if you use the provided extracted features, you don't need to install mmdetection; just run demo.py to generate the result file and visualize it.
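
For reference, a rough sketch of how that staged structure could look (the actual argument names used by run_demo.sh/demo.py may differ, so treat this purely as an illustration):

VIDEO=$1                                   # e.g. demo/000821.mp4

echo "Run feature extraction..."           # the only stage that needs mmdetection
python demo.py --task extract_feature --video_file $VIDEO

echo "Run accident inference..."           # consumes the pre-extracted features
python demo.py --task inference --feature_file ${VIDEO%.*}_feature.npz

echo "Run result visualization..."
python demo.py --task visualize --result_file ${VIDEO%.*}_result.npz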

linchunmian commented 4 years ago

Thanks. Another question: if I want to train on my own dataset, what should I do? Also, I am quite confused about how to use conda to install mmdetection and a py37 virtual env at the same time. Sorry, I don't quite get what you mean; normally we just create one env and install the required packages for a given project, don't we? Many thanks and looking forward to your help!

Cogito2012 commented 4 years ago

To train on your own dataset, you may need to write a customized DataLoader class; you can refer to src/DataLoader.py (see the sketch at the end of this reply). Conda can be used to set up any number of virtual envs as long as their names are different, e.g.:

conda create -n py37 python=3.7
conda activate py37
# Then the libs installed by pip are within the py37 env.

conda create -n mmdetection python=3.7
conda activate mmdetection
# Then the libs installed by pip are within the mmdetection env.

This clean separation between different envs on the same OS is exactly why we want to use conda/anaconda in the first place, isn't it?
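
As for training on your own dataset, below is a minimal sketch of a customized dataset class. The class name, directory layout, and .npz keys are assumptions made for illustration only; mirror whatever src/DataLoader.py actually expects.

import glob
import os

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class MyAccidentDataset(Dataset):
    """Loads pre-extracted per-video feature files for accident anticipation."""
    def __init__(self, data_root, phase='train'):
        # one .npz per video, assumed to hold the features and the annotation
        self.files = sorted(glob.glob(os.path.join(data_root, phase, '*.npz')))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        data = np.load(self.files[idx])
        features = torch.from_numpy(data['data']).float()   # e.g. (T, N+1, D) object/frame features
        labels = torch.from_numpy(data['labels']).float()   # accident annotation
        return features, labels

# usage:
# train_loader = DataLoader(MyAccidentDataset('path/to/features', 'train'),
#                           batch_size=10, shuffle=True)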

linchunmian commented 4 years ago

Thanks, maybe I get your idea now. You mean I could extract the features in the mmdetection env, and perform model inference and visualization in the py37 (PyTorch 1.0) env, is that right?

Cogito2012 commented 4 years ago

Not exactly. The mmdetection env is only used for object detection in my code. Feature extraction, inference, and visualization are all done within the py37 env. For convenience, I put all of these together in demo.py. You'd better check run_demo.sh; it is not hard to follow.

linchunmian commented 4 years ago

Many thanks, I will look further into the code you mentioned.

monjurulkarim commented 4 years ago

I want to run demo.py for feature extraction, inference, and visualization, so I didn't create the mmlab environment. However, when I run demo.py it shows "No module named 'mmdet'", so I pip installed mmdet. Then it showed "AttributeError: module 'mmcv' has no attribute '__version__'", so I tried to pip install mmcv-full, but it shows "failed to build mmcv-full". How can I solve this?

Cogito2012 commented 4 years ago

@monjurulkarim Thanks for your interest in this work! If you need to run demo.py for feature extraction, the mmdetection env mmlab is required. Recently I noticed that the official mmdetection and mmcv-full are newer than what I used in this repo. So, to set up the mmlab env, you need to install mmcv-full==1.1.1 and check out the tag v1.1.0 of the mmdetection source code. You can refer to the following steps, which have also been added to the instructions in the README:

# create python environment
conda create -n mmlab python=3.7

# activate environment
conda activate mmlab

# install dependencies
pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.1.1

# Follow the mmdetection installation instructions
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
git checkout v1.1.0  # important!
cp -r ../Cascade\ R-CNN/* ./  # copy the downloaded files into mmdetection folder

# compile & install
pip install -v -e .
python setup.py install
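
Optionally, a quick sanity check after installation (run inside the mmlab env) to confirm the pinned versions were picked up rather than a newer pip release:

python -c "import torch, mmcv, mmdet; print(torch.__version__, mmcv.__version__, mmdet.__version__)"
# expected here: torch 1.4.0, mmcv 1.1.1, and an mmdet built from the v1.1.0 checkout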

monjurulkarim commented 4 years ago

Thank you for your help. I have followed your instructions, and now I am getting the following error: [screenshot attached]

Do you have any idea why this is happening? Thanks

Cogito2012 commented 4 years ago

It seems that your installed torch-scatter is not compatible with the other torch libs. You can run pip list | grep torch in the py37 env and check whether these libs match the following:

torch                 1.2.0                
torch-cluster         1.4.5                
torch-geometric       1.3.2                
torch-scatter         1.4.0                
torch-sparse          0.4.3                
torchstat             0.0.7                
torchsummaryX         1.3.0                
torchvision           0.4.0a0+6b959ee
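
If your versions differ from these, one possible fix is to pin them explicitly in the py37 env. This is only a sketch, assuming these older wheels/sdists are still installable in your setup; torch-scatter/sparse/cluster compile against the torch that is already installed, so install torch first:

pip install torch==1.2.0 torchvision==0.4.0
pip install torch-scatter==1.4.0 torch-sparse==0.4.3 torch-cluster==1.4.5
pip install torch-geometric==1.3.2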

monjurulkarim commented 4 years ago

Thanks for your quick response. My installed torch packages are these: [screenshot attached]

Is this a problem?

Cogito2012 commented 4 years ago

It should be OK. But if the problem still appears, it could possibly be related to your compiler/OS environment, e.g., gcc/g++. I also noticed that caffe appears in your error output, which may provide a clue to the solution.

monjurulkarim commented 4 years ago

@Cogito2012 Now I am facing TypeError: __init__() got an unexpected keyword argument 'num_stages'

The traceback is the following:

(base) mmoniruzzama@u108100:~/Monjurul/UString$ bash run_demo.sh demo/000821.mp4
Run feature extraction...
Traceback (most recent call last):
  File "demo.py", line 331, in <module>
    detector = init_detector(cfg_file, model_file, device=device)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmdet-2.4.0-py3.7.egg/mmdet/apis/inference.py", line 34, in init_detector
    model = build_detector(config.model, test_cfg=config.test_cfg)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmdet-2.4.0-py3.7.egg/mmdet/models/builder.py", line 67, in build_detector
    return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmdet-2.4.0-py3.7.egg/mmdet/models/builder.py", line 32, in build
    return build_from_cfg(cfg, registry, default_args)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 167, in build_from_cfg
    return obj_cls(**args)
TypeError: __init__() got an unexpected keyword argument 'num_stages'
Saved in: demo/000821_feature.npz

Run accident inference...
Traceback (most recent call last):
  File "demo.py", line 339, in <module>
    from src.Models import UString
  File "/data/home/stufs1/mmoniruzzama/Monjurul/UString/src/Models.py", line 10, in <module>
    from torch_geometric.utils import remove_self_loops, add_self_loops
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/torch_geometric/__init__.py", line 5, in <module>
    import torch_geometric.transforms
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/torch_geometric/transforms/__init__.py", line 37, in <module>
    from .gdc import GDC
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/torch_geometric/transforms/gdc.py", line 2, in <module>
    import numba
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/__init__.py", line 45, in <module>
    import numba.typed
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/typed/__init__.py", line 3, in <module>
    from .typeddict import Dict
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/typed/typeddict.py", line 18, in <module>
    @njit
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/decorators.py", line 224, in njit
    return jit(*args, **kws)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/decorators.py", line 161, in jit
    return wrapper(pyfunc)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/decorators.py", line 177, in wrapper
    **dispatcher_args)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/dispatcher.py", line 576, in __init__
    self.targetctx = self.targetdescr.target_context
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/targets/registry.py", line 50, in target_context
    return self._toplevel_target_context
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/utils.py", line 381, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/targets/registry.py", line 34, in _toplevel_target_context
    return cpu.CPUContext(self.typing_context)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/targets/base.py", line 250, in __init__
    self.init()
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/compiler_lock.py", line 32, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/targets/cpu.py", line 49, in init
    self._internal_codegen = codegen.JITCPUCodegen("numba.exec")
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/targets/codegen.py", line 612, in __init__
    self._init(self._llvm_module)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numba/targets/codegen.py", line 621, in _init
    tm = target.create_target_machine(**tm_options)
TypeError: create_target_machine() got an unexpected keyword argument 'jitdebug'
Saved in: demo/000821_result.npz

Run result visualization...
Traceback (most recent call last):
  File "demo.py", line 353, in <module>
    all_results = np.load(p.result_file, allow_pickle=True)
  File "/home/stufs1/mmoniruzzama/anaconda3/envs/py37/lib/python3.7/site-packages/numpy/lib/npyio.py", line 422, in load
    fid = open(os_fspath(file), "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'demo/000821_result.npz'
Saved in: demo/000821_vis.avi

Cogito2012 commented 4 years ago

@monjurulkarim It seems that you are using an incorrect version of mmdetection. This repo currently only supports mmdetection v1.1.0. Please make sure you follow every step in the README instructions.
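
If it helps, here is a sketch of how to swap out the wrong version (the traceback above shows an mmdet-2.4.0 egg being picked up), reusing the commands from the earlier installation steps:

# inside the mmlab env
pip uninstall mmdet                     # remove the pip-installed 2.x egg
cd mmdetection && git checkout v1.1.0   # the pinned source checkout
pip install -v -e .                     # reinstall from source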

monjurulkarim commented 4 years ago

@Cogito2012 Thank you for your reply. My CUDA version is 10.2. Seems like mmdetection==1.1.0 will not work on CUDA 10.2.

Cogito2012 commented 4 years ago

@monjurulkarim Thanks for reporting this issue. There are some alternatives you may consider:

As I'm busy recently, I will try to update this repo to support the latest mmdetection and CUDA in the future. This will not be easy, as those torch-related packages depend on a lower version of PyTorch, which is not compatible with CUDA 10.2 or higher. You can also do it yourself if you are interested. :-)

monjurulkarim commented 4 years ago

@Cogito2012 Thank you very much for your help. Option #1 worked for me.

monjurulkarim commented 4 years ago

@Cogito2012 For option #2, if I want to detect objects and extract features with another pre-trained object detector (e.g., Faster R-CNN, YOLOv3, etc.) from mmdetection, where do I have to make changes?

Cogito2012 commented 4 years ago

@monjurulkarim If you use mmdetection, you just need to change cfg_file and model_file at line 330 of demo.py so that your own pretrained detector is used to get the bounding boxes.
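
For illustration, a hedged sketch of that change; the config and checkpoint paths below are examples, not files shipped with this repo:

from mmdet.apis import init_detector, inference_detector

# point these at the detector you want to use (example paths, adjust to yours)
cfg_file = 'mmdetection/configs/faster_rcnn_r50_fpn_1x.py'
model_file = 'checkpoints/faster_rcnn_r50_fpn_1x.pth'

detector = init_detector(cfg_file, model_file, device='cuda:0')  # as in demo.py around line 330
results = inference_detector(detector, frame)                    # per-class bounding boxes for one frame

Since the rest of the pipeline only consumes the resulting bounding boxes, as long as your detector returns boxes in the usual mmdetection format, the feature-extraction code should not need other changes.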