Pancakerr / HSIC-platform

An integrated deep learning platform for hyperspectral classification, built with PyTorch
GNU General Public License v3.0

Some problems with data splitting #2

Closed 165412152 closed 2 weeks ago

165412152 commented 2 weeks ago

On the IP dataset, the default data split for Unet is train samples: 1, val samples: 1, test samples: 1, while for SSUN it is train samples: 1660, val samples: 646, test samples: 21025. I would like Unet to use the same data split as SSUN. I used SAMPLE_MODE PSW, but encountered an error. Which data loading method should I use? Looking forward to your reply.

INSYangCL commented 2 weeks ago

This is the author speaking. Since Unet is a fully convolutional network, the whole IP dataset is treated as a single image sample, so the train sample count for Unet is 1. SSUN, by contrast, is a patchwise model whose sample mode is set to PWS, which splits the IP dataset into small patches. In addition, SSUN is a special model whose training dataset is "dualset", and my repo does not support "dualset" for Unet training. However, if you want to train Unet with a normal patchwise dataset, you can use `--SAMPLE_MODE PWS --PATCH_SIZE your_patch_size`. Remember that the patch size must be large enough, because Unet performs several downsampling steps.
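
To illustrate the patch-size constraint, here is a minimal sketch (not the repo's actual Unet; the downsampling depth of 4 and the 200-band input are assumptions) that checks whether a given `--PATCH_SIZE` survives the repeated 2x downsampling of a Unet-style encoder:

```python
import torch
import torch.nn as nn

# Hypothetical check: a Unet with DEPTH downsampling stages halves the spatial
# size DEPTH times, so the patch side should be at least 2**DEPTH and ideally
# divisible by 2**DEPTH so the decoder can restore the original resolution.
DEPTH = 4  # assumed number of downsampling stages; check the repo's Unet


def is_valid_patch_size(patch_size: int, depth: int = DEPTH) -> bool:
    factor = 2 ** depth
    return patch_size >= factor and patch_size % factor == 0


if __name__ == "__main__":
    pool = nn.MaxPool2d(2)
    for p in (8, 16, 32, 64):
        x = torch.randn(1, 200, p, p)  # e.g. ~200 spectral bands for IP
        try:
            for _ in range(DEPTH):
                x = pool(x)
            print(f"patch {p}: downsamples to {tuple(x.shape[-2:])}, "
                  f"divisibility check passed: {is_valid_patch_size(p)}")
        except RuntimeError as err:
            print(f"patch {p}: too small for {DEPTH} downsamplings ({err})")
```

Under these assumptions a patch size of 8 fails while 16, 32, or 64 go through, so pick a `--PATCH_SIZE` that matches the actual depth of the repo's Unet.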

165412152 commented 2 weeks ago

Thank you very much for your answer