zilongzhong / SSRN

This is a TensorFlow- and Keras-based implementation of SSRN from the IEEE T-GRS paper "Spectral-Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework".

TypeError: 'NoneType' object is not iterable #4

Open BlcaKHat opened 6 years ago

BlcaKHat commented 6 years ago

[screenshot of the traceback]

Got this error while running `python ./SSRN_IN.py`.

zilongzhong commented 6 years ago

This message means the .mat data hasn't been read. Please check whether the .mat data is in the right directory.
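One way to make this failure mode obvious is to load the .mat file through a small helper that raises immediately when the file or variable is missing, instead of letting a silent `None` surface later as the `'NoneType' object is not iterable` error. This is a minimal sketch, not code from the repo; the file path and variable name are placeholders you would replace with the ones `SSRN_IN.py` actually uses.

```python
import os
import scipy.io as sio

def load_mat(path, key):
    """Load one variable from a .mat file, failing loudly if the
    file or the variable is missing."""
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"Expected .mat file at: {os.path.abspath(path)}")
    mat = sio.loadmat(path)
    if key not in mat:
        found = [k for k in mat if not k.startswith("__")]
        raise KeyError(f"Variable {key!r} not in {path}; found: {found}")
    return mat[key]

# Hypothetical call -- adjust path and variable name to your setup:
# data = load_mat("datasets/IN/Indian_pines_corrected.mat",
#                 "indian_pines_corrected")
```

If the path is wrong, the `FileNotFoundError` prints the absolute path it tried, which usually reveals the typo right away.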

BlcaKHat commented 6 years ago

The path is correct, as far as I can tell.

[screenshot of the directory]

zilongzhong commented 6 years ago

In case the data have been corrupted, you can download them directly from this link: http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Indian_Pines Another suggestion is to use a Python console (in an IDE like PyCharm) to test and debug this snippet.

BlcaKHat commented 6 years ago

I will try.

BlcaKHat commented 6 years ago

[screenshot of the error]

It runs for a long time and then throws this error.

zilongzhong commented 6 years ago

I used a GTX 980M for this paper; it should not train for a long time if you use a relatively new GPU. The error shown is again related to the path: check it and change it to your absolute path. Good luck.
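To rule out path problems before starting a long run, you can print the absolute path of each expected data file and whether it exists. This is a hedged sketch: the directory and filenames below are assumptions based on the Indian Pines dataset from the link above, not the repo's actual layout.

```python
import os

def check_data_files(data_dir, names):
    """Map each expected file to (absolute path exists?) so that
    typos in relative paths show up before training starts."""
    data_dir = os.path.abspath(data_dir)
    return {os.path.join(data_dir, n): os.path.isfile(os.path.join(data_dir, n))
            for n in names}

# Hypothetical usage -- adjust to wherever your .mat files live:
for path, ok in check_data_files(
        "./datasets/IN",
        ["Indian_pines_corrected.mat", "Indian_pines_gt.mat"]).items():
    print(path, "->", "found" if ok else "MISSING")
```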

BlcaKHat commented 6 years ago

@zilongzhong It's taking a lot of time. By my calculation, it will take around 400 hours to complete. What should I do?

[screenshot of training progress]

BlcaKHat commented 6 years ago

I can convert my image to an array. Can you help me with where I should put that array in the code?

BlcaKHat commented 6 years ago

Is there a way to reduce the training time? What are TRAIN_SIZE = 2055 and nb_epoch = 200?

zilongzhong commented 6 years ago

Make sure you have installed Keras, which could be the problem leading to the slow training rate. You can install it in your Anaconda environment with: conda install -c conda-forge keras

zilongzhong commented 6 years ago

TRAIN_SIZE = 2055 is the number of training samples, and nb_epoch = 200 means the training process goes over all the training data 200 times.
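The relationship between these two values and the total amount of work can be sketched with a little arithmetic. The batch size below is an assumption for illustration; check the actual `batch_size` in `SSRN_IN.py`.

```python
import math

TRAIN_SIZE = 2055   # number of training samples (from the script)
NB_EPOCH = 200      # full passes over the training set (from the script)
BATCH_SIZE = 16     # assumed for illustration; check SSRN_IN.py

# One epoch = enough batches to cover every training sample once.
steps_per_epoch = math.ceil(TRAIN_SIZE / BATCH_SIZE)
total_updates = steps_per_epoch * NB_EPOCH
print(steps_per_epoch, total_updates)  # 129 steps/epoch, 25800 updates
```

Lowering nb_epoch shrinks the run time proportionally (at the cost of less training), which is why a 400-hour estimate usually points to the model running on CPU rather than to the settings themselves.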

zilongzhong commented 6 years ago

I will update the repo using the latest Tensorflow and Keras version, and add a function to read HSI data. Stay tuned.