cv516Buaa / ST-DASegNet

Apache License 2.0

memory and batch #2

Open scucmpunk opened 1 year ago

scucmpunk commented 1 year ago

Hi, author. Could you tell me how much GPU memory is needed, and whether 'samples_per_gpu=1' will work?

cv516Buaa commented 1 year ago

It depends on the selected model, the input image size, and your device. You can pick a model, run the test script, and adjust the resize ratio to see whether it runs out of memory.
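For anyone following along: the configs here use mmsegmentation-style keys, so one way to probe memory is to lower `img_scale` in the test pipeline and rerun the test script after each change. A minimal sketch, assuming a standard mmsegmentation-style test pipeline (the exact values and keys in this repo's configs may differ):

```python
# Hypothetical fragment of an mmsegmentation-style config, for illustration only.
# Reducing img_scale lowers the resize ratio and therefore GPU memory use;
# rerun the test script after each change to see whether it still runs out of memory.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(512, 512),  # e.g. try (1024, 1024) -> (512, 512) to save memory
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```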

scucmpunk commented 1 year ago

For LoveDA, in data=dict(samples_per_gpu=?, workers_per_gpu=?), do you know what values are appropriate? With my settings I get the error "ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])".
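For reference, that exact message comes from PyTorch's BatchNorm when a layer in training mode receives only one value per channel, e.g. a batch of 1 after global pooling. A minimal standalone reproduction (not taken from this repo's code):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(512)
bn.train()  # the check only fires in training mode

# Batch of 1 with 1x1 spatial size -> only one value per channel.
x = torch.randn(1, 512, 1, 1)

# Raises: ValueError: Expected more than 1 value per channel when training,
# got input size torch.Size([1, 512, 1, 1])
bn(x)
```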

cv516Buaa commented 1 year ago

It may not result from 'samples_per_gpu=1'. Please check the input size of images.

scucmpunk commented 1 year ago

img_scale=(1024, 1024), crop_size=(384, 384)
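For readers hitting the same thing: in an mmsegmentation-style LoveDA config those two values usually sit in the training pipeline roughly like this (a sketch, not copied from this repo's configs; transform names and extra steps may differ):

```python
# Hypothetical sketch of where img_scale and crop_size appear in an
# mmsegmentation-style training config; illustration only.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (384, 384)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(1024, 1024), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
```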

cv516Buaa commented 1 year ago

Yeah, alright. Are you training the model on a single-GPU device?

cv516Buaa commented 1 year ago

If the batch size has to be >1, I suggest looking at the mmsegmentation repository; there may be related issues there.
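A common workaround (my own suggestion based on the error above, not something confirmed by the author) is to keep the per-GPU batch size above 1 so BatchNorm never sees a single sample, and to compensate for the extra memory with a smaller crop:

```python
# Hypothetical adjustment, assuming an mmsegmentation-style config.
data = dict(
    samples_per_gpu=2,   # per-GPU batch size; total batch = this x number of GPUs
    workers_per_gpu=2,   # dataloader workers per GPU
)
crop_size = (256, 256)   # smaller crops reduce memory so the larger batch fits
```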

scucmpunk commented 1 year ago

yes