82magnolia / n_imagenet

Official PyTorch implementation of N-ImageNet: Towards Robust, Fine-Grained Object Recognition with Event Cameras (ICCV 2021)
GNU General Public License v3.0

Something wrong when I run the code #2

Open Derekerrr opened 1 year ago

Derekerrr commented 1 year ago

[image] It happens when I run the code; I cannot figure it out.

82magnolia commented 1 year ago

Hi, could you specify the command you used before obtaining the error message?

Derekerrr commented 1 year ago

Thank you for answering my question.

The situation is:

I created the conda environment and downloaded the source code following the GitHub README, on my Windows laptop with an NVIDIA 3060 GPU.

I also downloaded the mini N-ImageNet dataset (around 40 GB) and changed the paths in train_list.txt and val_list.txt. I thought everything was OK, but when I ran the code with the command below:

```
python main.py --config configs/imagenet/cnn_adam_acc_two_channel_big_kernel_random_idx_mini.ini
```

an error happened:

Below is the Python console message:

```
slice_augment_width=0
slice_start=0
slice_end=30000
[Train]
optimizer=Adam
epochs=100
save_every=1
learning_rate=0.0003
momentum=0.9
weight_decay=1e-4
temperature=1
max_trajectory_speed=0.0
mahalanobis=0.0
noise_inject=True
[Debug]
debug=False
debug_input=True
inspect_channel=all
If config is correct, press y: >? y
Initializing data container ImageNetContainer...
Initializing model container CNNContainer...
E:\conda\envs\e2t\lib\site-packages\torchvision\models\_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and may be removed in the future, "
E:\conda\envs\e2t\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=None.
  warnings.warn(msg)
Using ResNet34 for training
Initializing trainer CNNTrainer...
Using Adam as optimizer
Number of GPUs: 1
Number of model parameters in model: 21351652
Model size of model: 81 MB
ResNet(
  (conv1): Conv2d(2, 64, kernel_size=(14, 14), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (3): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (3): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (4): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (5): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=100, bias=True)
)
This is 1-th epoch.
Traceback (most recent call last):
  File "E:\conda\envs\e2t\lib\code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "C:\Program Files\JetBrains\PyCharm 2022.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2022.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "E:\my_code\n_imagenet-main\n_imagenet-main\real_cnn_model\main.py", line 94, in <module>
    main()
  File "E:\my_code\n_imagenet-main\n_imagenet-main\real_cnn_model\main.py", line 90, in main
    trainer.run()
  File "E:\my_code\n_imagenet-main\n_imagenet-main\base\train\common_trainer.py", line 68, in run
    self.run_epoch()
  File "E:\my_code\n_imagenet-main\n_imagenet-main\base\train\common_trainer.py", line 87, in run_epoch
    self.train_epoch()
  File "E:\my_code\n_imagenet-main\n_imagenet-main\base\train\common_trainer.py", line 141, in train_epoch
    for batch_idx, data_dict in enumerate(self.data_container.dataloader['train']):
  File "E:\conda\envs\e2t\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __iter__
    return self._get_iterator()
  File "E:\conda\envs\e2t\lib\site-packages\torch\utils\data\dataloader.py", line 381, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "E:\conda\envs\e2t\lib\site-packages\torch\utils\data\dataloader.py", line 1034, in __init__
    w.start()
  File "E:\conda\envs\e2t\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "E:\conda\envs\e2t\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "E:\conda\envs\e2t\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "E:\conda\envs\e2t\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "E:\conda\envs\e2t\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'base.utils.parse_utils.Config'>: attribute lookup Config on base.utils.parse_utils failed
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "E:\conda\envs\e2t\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "E:\conda\envs\e2t\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
```
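For context: the final `PicklingError` is the classic Windows failure mode. On Windows, `DataLoader` workers are started with the `spawn` method, so the dataset (and anything it holds, such as the `Config` object) must be picklable, and pickle must be able to re-import the class by its module path. The sketch below is a hypothetical illustration, not code from this repo; `check_picklable` and this `Config` are made-up names used only to demonstrate the mechanism.

```python
import pickle


class Config:
    """A config class defined at module top level, so pickle can
    find it again via its module path -- the spawn requirement."""

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


def check_picklable(obj):
    """Return True if obj survives a pickle round-trip, i.e. whether it
    could be shipped to a spawned DataLoader worker on Windows."""
    try:
        pickle.loads(pickle.dumps(obj))
        return True
    except (pickle.PicklingError, AttributeError):
        # A class created dynamically, or one whose module attribute no
        # longer resolves, fails exactly like the traceback above.
        return False
```

Common workarounds in situations like this (each worth trying here, though I cannot confirm which applies to this repo): pass `num_workers=0` to the `DataLoader` so no worker processes are spawned, make sure the script's entry point is guarded by `if __name__ == "__main__":` (the trailing `EOFError` in the spawned child is typical when it is not), or ensure the real `base.utils.parse_utils.Config` is defined at module top level so pickle's attribute lookup succeeds.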
