YanchaoYang / FDA

Fourier Domain Adaptation for Semantic Segmentation

GPU memory error #28

Closed · timswim closed this issue 3 years ago

timswim commented 3 years ago

Hi, thanks for your work. Running train.py gives this error:

/media/data/ObjectDetectionExperiments/Projects/3_SemanticSegment/FDA-master/my_env/lib/python3.5/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='elementwise_mean' instead.
  warnings.warn(warning.format(ret))
Traceback (most recent call last):
  File "/media/data/ObjectDetectionExperiments/Projects/3_SemanticSegment/FDA-master/train.py", line 135, in <module>
    main()
  File "/media/data/ObjectDetectionExperiments/Projects/3_SemanticSegment/FDA-master/train.py", line 94, in main
    trg_seg_score = model(trg_img, lbl=trg_lbl, weight=class_weights, ita=args.ita)  # forward pass
  File "/media/data/ObjectDetectionExperiments/Projects/3_SemanticSegment/FDA-master/my_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/data/ObjectDetectionExperiments/Projects/3_SemanticSegment/FDA-master/model/deeplab.py", line 181, in forward
    self.loss_seg = self.CrossEntropy2d(x, lbl, weight=weight)
  File "/media/data/ObjectDetectionExperiments/Projects/3_SemanticSegment/FDA-master/model/deeplab.py", line 237, in CrossEntropy2d
    predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c)
RuntimeError: CUDA error: out of memory

If I set the image size to (512, 256) in data/__init__.py, training runs without error and takes ~5 GB of GPU memory. Why does this happen?

timswim commented 3 years ago

Hmm, I found the upper limit of the resolution: it is 1000x512. Beyond that I come up roughly 100 MB short. Sorry for the trouble. P.S. I am also using a 1080 Ti with 11 GB.