mitmul / ssai-cnn

Semantic Segmentation for Aerial / Satellite Images with Convolutional Neural Networks including an unofficial implementation of Volodymyr Mnih's methods
http://www.ingentaconnect.com/content/ist/jist/2016/00000060/00000001/art00003
MIT License

Possible Issue with train data documentation #7

Open Skylion007 opened 8 years ago

Skylion007 commented 8 years ago

So I finally got the dataset created and have begun trying to train on it. Unfortunately, I keep getting this error:

Traceback (most recent call last):
  File "scripts/train.py", line 313, in <module>
    model, optimizer = one_epoch(args, model, optimizer, epoch, True)
  File "scripts/train.py", line 288, in one_epoch
    'epoch:{}\ttrain loss:{}'.format(epoch, sum_loss / num))
ZeroDivisionError: division by zero
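For context, here is a minimal sketch of how that division can hit zero, assuming a typical epoch loop; the iterator and training-step names are hypothetical, not the repo's actual code. If the data loop yields no batches, num is still 0 when the epoch summary line runs.

    # Hypothetical epoch loop illustrating the failure: if `train_iter`
    # yields nothing, `num` stays 0 and the summary line divides by zero.
    sum_loss, num = 0.0, 0
    for batch in train_iter:            # hypothetical data iterator
        loss = train_one_batch(batch)   # hypothetical training step
        sum_loss += float(loss) * len(batch)
        num += len(batch)

    if num == 0:
        raise RuntimeError('no training samples were read from the LMDB')
    print('epoch:{}\ttrain loss:{}'.format(epoch, sum_loss / num))

In other words, the ZeroDivisionError is a symptom: the training loop never sees any data.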

Additionally, the train_batch script doesn't make sense to me: you use the same variable for both the seed and the GPU id, so the later calls assume you have 8 GPUs on your machine, which seems a little absurd. Or is that actually the case and I am misreading the script?
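To illustrate the pattern being described (this is a hypothetical launcher loop; the flag names and script path are assumptions, not the repo's actual train_batch code), reusing the loop index for both values pins run i to GPU i:

    # Hypothetical launcher: passing the same index as seed and GPU id
    # implicitly requires one GPU per run (8 runs -> 8 GPUs).
    import subprocess

    for i in range(8):
        subprocess.call([
            'python', 'scripts/train.py',
            '--seed', str(i),
            '--gpu', str(i),   # reusing i here is what assumes 8 GPUs
        ])

Keeping the seed and the GPU id in separate variables (e.g. mapping the run index onto the machine's actual GPU count) would avoid that assumption.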

Skylion007 commented 8 years ago

I narrowed down the issue. Apparently, training terminates prematurely because of the if statement on line 199. I don't know why this happens: the LMDBs appear to be generated properly, and shells/test_dataset.sh completes just fine.

Skylion007 commented 8 years ago

Found the bug. It is in the create_dataset script. The dtype of the keys is set to 'b' (int8), where you probably meant a binary-safe representation. As a result the LMDB writes to the same keys over and over again, so it ends up storing only 256 entries, which isn't enough for training.
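As a rough sketch of that failure mode (the variable names are illustrative, not the actual create_dataset code): an int8 ('b') key array wraps modulo 256, so later samples overwrite earlier LMDB entries, whereas a zero-padded string or a wide integer type keeps every key unique.

    import numpy as np

    # Keys built from an int8 ('b') array wrap at 256 distinct values,
    # so most samples are written under already-used LMDB keys.
    n_samples = 10000
    keys_bad = np.arange(n_samples).astype('b')   # int8: wraps modulo 256
    print(np.unique(keys_bad).size)               # 256 distinct keys

    # A zero-padded string key stays unique for every sample.
    keys_ok = ['{:010d}'.format(i).encode('ascii') for i in range(n_samples)]
    print(len(set(keys_ok)))                      # 10000 distinct keys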