vpulab / Semantic-Aware-Scene-Recognition

Code repository for paper https://www.sciencedirect.com/science/article/pii/S0031320320300613 @ Pattern Recognition 2020
MIT License

train the SASceneNet on MITIndoor67Dataset #6

Open zhangtongxue1994 opened 4 years ago

zhangtongxue1994 commented 4 years ago

I tried to train SASceneNet on MITIndoor67Dataset but ran into some problems. May I have your training code? Thank you very much~~
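
For anyone looking for a starting point while the official training script is not released, here is a minimal sketch of a possible training loop. It is not the authors' code: the model call mirrors the forward signature used in evaluation.py, but the data-loader fields, the optimizer and the branch-loss weights are assumptions.

```python
# Minimal training-loop sketch (NOT the authors' released training code).
# The model call mirrors the forward signature used in evaluation.py; the loader
# fields and the 0.5 weights on the auxiliary branches are assumptions.
import torch
import torch.nn as nn

def train_one_epoch(model, train_loader, optimizer, device):
    model.train()
    criterion = nn.CrossEntropyLoss()
    for RGB_image, semanticTensor, sceneLabelGT in train_loader:  # assumed loader output
        RGB_image = RGB_image.to(device)
        semanticTensor = semanticTensor.to(device)
        sceneLabelGT = sceneLabelGT.to(device)

        optimizer.zero_grad()
        # Same call as evaluation.py: fused prediction, conv features,
        # RGB-branch prediction, semantic-branch prediction
        outputSceneLabel, _, outputSceneLabelRGB, outputSceneLabelSEM = model(RGB_image, semanticTensor)

        # Supervise the fused output and, with assumed weights, the two auxiliary branches
        loss = (criterion(outputSceneLabel, sceneLabelGT)
                + 0.5 * criterion(outputSceneLabelRGB, sceneLabelGT)
                + 0.5 * criterion(outputSceneLabelSEM, sceneLabelGT))
        loss.backward()
        optimizer.step()
    return loss.item()
```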

JiahangWu commented 4 years ago

May I ask a few questions about this project? Is my GPU memory not big enough? I get CUDA out of memory whenever I run this project, on every dataset. Also, I cannot skip the pre-compute step even after changing precompute_sem from true to false in the config. Could you suggest some solutions? Thanks.
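
For what it's worth, one possible reason a YAML flag change does not take effect is that the value is read as a string rather than a boolean. A quick way to check, assuming the config is parsed with PyYAML (the repo's actual parsing code may read a different file or key, in which case this is not the cause):

```python
# A quoted YAML value loads as a non-empty string, which is truthy in Python,
# so "false" in quotes would behave like True in an `if` check.
import yaml

print(yaml.safe_load('precompute_sem: false'))    # {'precompute_sem': False}   -> real boolean
print(yaml.safe_load('precompute_sem: "false"'))  # {'precompute_sem': 'false'} -> truthy string
```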

zhangtongxue1994 commented 4 years ago

Hi, are you running out of GPU memory when testing with evaluation.py? How much GPU memory do you have?


JiahangWu commented 4 years ago


An RTX 2060 with 6 GB. The error I get is:

Traceback (most recent call last):
  File "evaluation.py", line 302, in <module>
    val_top1, val_top2, val_top5, val_loss, val_ClassTPDic = evaluationDataLoader(val_loader, model, set='Validation')
  File "evaluation.py", line 77, in evaluationDataLoader
    outputSceneLabel, feature_conv, outputSceneLabelRGB, outputSceneLabelSEM = model(RGB_image, semanticTensor)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lollipop/Semantic-Aware-Scene-Recognition/SASceneNet.py", line 165, in forward
    e1 = self.encoder1(x)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torchvision/models/resnet.py", line 88, in forward
    residual = self.downsample(x)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 76, in forward
    exponential_average_factor, self.eps)
  File "/home/lollipop/anaconda3/envs/SA-Scene-Recognition/lib/python3.7/site-packages/torch/nn/functional.py", line 1623, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 306.25 MiB (GPU 0; 5.79 GiB total capacity; 4.35 GiB already allocated; 198.94 MiB free; 77.83 MiB cached)
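
For reference, running out of 6 GB during evaluation usually comes down to either gradients being tracked or too large a batch size. Below is a minimal sketch of a gradient-free evaluation pass using the same model call as the traceback; the loader variables and device handling are assumptions, not the actual contents of evaluation.py. If the script already disables gradients, lowering the test batch size in the YAML config is the remaining lever.

```python
import torch

model.eval()                 # BatchNorm uses running statistics in eval mode
with torch.no_grad():        # no autograd buffers are kept, cutting activation memory
    for RGB_image, semanticTensor, sceneLabelGT in val_loader:
        RGB_image = RGB_image.cuda()
        semanticTensor = semanticTensor.cuda()
        outputSceneLabel, feature_conv, outputSceneLabelRGB, outputSceneLabelSEM = \
            model(RGB_image, semanticTensor)
```
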
zhangtongxue1994 commented 4 years ago

You can add me on QQ (1620009136) and I'll take a look for you.

zhangtongxue1994 commented 4 years ago

It should be a problem with the config settings. Add me on QQ (1620009136) and I can take a look when I get a chance.


keaixiaovv commented 4 years ago

Downloading the dataset from servers outside China is very slow. How did everyone solve this problem?

JiahangWu commented 4 years ago

Download it through a VPN.