akarshzingade / image-similarity-deep-ranking


ValueError: Error when checking target: expected lambda_4 to have shape (None, 4096) but got array with shape (24, 4) #7

Closed. visheshmistry closed this issue 6 years ago.

visheshmistry commented 6 years ago

I have a dataset with 4 classes. I have generated the triplets.txt file with these 4 classes.

However, when I run the deepRanking.py file, I get the following error:

Traceback (most recent call last):
  File "deepRanking.py", line 152, in <module>
    epochs=train_epocs
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 2114, in fit_generator
    class_weight=class_weight)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 1826, in train_on_batch
    check_batch_axis=True)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 1411, in _standardize_user_data
    exception_prefix='target')
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 153, in _standardize_input_data
    str(array.shape))
ValueError: Error when checking target: expected lambda_4 to have shape (None, 4096) but got array with shape (24, 4)

Need help please!

visheshmistry commented 6 years ago

I know the (24, 4) is because I have 24 as the batch size and 4 as the number of classes. But the last dense layer has 4096 neurons, so why does this error occur?
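For context, a minimal, self-contained sketch (not the repo's code; layer sizes are only borrowed from the error message) of why Keras raises this: the final Lambda layer outputs a 4096-dimensional embedding, so fit_generator / train_on_batch expects targets of shape (batch_size, 4096), and one-hot class labels of shape (24, 4) trigger exactly this ValueError.

```python
# Minimal sketch of the shape mismatch. The target passed to train_on_batch
# must match the output of the last layer (the Lambda), not the class count.
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense, Lambda
from keras import backend as K

inp = Input(shape=(512,))
x = Dense(4096)(inp)                                  # dense_3-style layer
out = Lambda(lambda t: K.l2_normalize(t, axis=1))(x)  # lambda_4-style layer
model = Model(inp, out)
model.compile(optimizer='sgd', loss='mse')

batch_size = 24
features = np.random.rand(batch_size, 512)

# Wrong: one-hot class labels of shape (24, 4) reproduce
# "expected lambda to have shape (None, 4096) but got array with shape (24, 4)":
# model.train_on_batch(features, np.eye(4)[np.random.randint(0, 4, batch_size)])

# Expected: targets matching the 4096-d embedding output.
model.train_on_batch(features, np.zeros((batch_size, 4096)))
```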

akarshzingade commented 6 years ago

This is weird. Not sure why it would output a 24x4 array. Would it be possible for you to share your dataset? I would like to replicate the issue locally.

visheshmistry commented 6 years ago

Hi Akarsh!

I have a dataset of 4 classes: a (576 images), b (576 images), c (320 images), d (400 images).

You can find it here - https://drive.google.com/drive/folders/1WbwXicmqDlRsp71sHGZbGKTv1bMB5MU_?usp=sharing

Also, I generated the triplets.txt file with the command, setting the number of positive images to 10 and the number of negative images to 10.
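For reference, a generic sketch of how such a triplets file can be generated from a class-per-folder dataset with 10 positives and 10 negatives per anchor. The folder name, output format, and one-triplet-per-line layout are assumptions and may not match the repo's own sampler exactly.

```python
# Generic triplet-sampling sketch: anchor and positive from the same class,
# negative from a different class, written as "anchor,positive,negative" lines.
import os
import random

data_dir = 'training-data'   # assumed: one sub-folder per class
num_pos, num_neg = 10, 10

classes = {c: os.listdir(os.path.join(data_dir, c))
           for c in os.listdir(data_dir)
           if os.path.isdir(os.path.join(data_dir, c))}

with open('triplets.txt', 'w') as f:
    for cls, images in classes.items():
        # Candidate negatives: every image from every other class.
        negatives_pool = [os.path.join(c, img)
                          for c, imgs in classes.items() if c != cls
                          for img in imgs]
        for anchor in images:
            positives = random.sample([i for i in images if i != anchor],
                                      min(num_pos, len(images) - 1))
            negatives = random.sample(negatives_pool,
                                      min(num_neg, len(negatives_pool)))
            for pos, neg in zip(positives, negatives):
                f.write('{},{},{}\n'.format(os.path.join(cls, anchor),
                                            os.path.join(cls, pos),
                                            neg))
```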

akarshzingade commented 6 years ago

Hey, Vishesh. It seems to be working fine for me. The folder structure looks like this:

[screenshot of the folder structure]

and the triplets file looks like this:

[screenshot of the triplets file]

Did you change something in deepRanking.py?

visheshmistry commented 6 years ago

I just edited the following lines in deepRanking.py:

[screenshot of the first edited lines in deepRanking.py]

[screenshot of the second edited lines in deepRanking.py]

visheshmistry commented 6 years ago

If you have reproduced the code and it's working fine, could you upload the files (except the data) and share the link?

akarshzingade commented 6 years ago

It's the same files as in this repository. No change in the code.

visheshmistry commented 6 years ago

I cloned the repository again, added training-data and triplets.txt, and made the 2 changes shown above. But I am still getting the same error.

I have uploaded all the files in my folder - https://drive.google.com/drive/folders/1p5CgyyR6r0KNXXUecXRF4cA97nUHXut1?usp=sharing

Please see if it works for you (that is, whether the training starts).

Thanks!

akarshzingade commented 6 years ago

I just downloaded the contents of the Drive link and ran deepRanking.py. It works.

dense_3 (None, 4096)
lambda_4 (None, 4096)
Found 1872 images belonging to 4 classes.
Epoch 1/25

  1/629 [..............................] - ETA: 12:42:35 - loss: 1.0080

This is weird. Could it be a TF/Keras version mismatch? I doubt it would be, though. I tried it on TF 1.5.0 and Keras 2.1.3.
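If anyone wants to rule out a version mismatch, a quick check to run on both machines (assuming the standalone keras package, as used here) is:

```python
# Print the installed TensorFlow and Keras versions for comparison across machines.
import tensorflow as tf
import keras

print('TensorFlow:', tf.__version__)
print('Keras:', keras.__version__)
```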

Sanketyeru commented 6 years ago

I have also tried the same thing. The training process starts fine, but after a few iterations into the first epoch (around 23/629) it throws an error saying the class array index is out of bounds.

cesarandreslopez commented 6 years ago

Downloading now to test on my computer. Will report back on my results with @vishesh9494's data.

cesarandreslopez commented 6 years ago

Tried to run it, but on that data set it will crash after step 23 (the same thing that @Sanketyeru saw). On my computer I had to reduce the batch size due to GPU limitations, and I needed to run the process one epoch at a time (terminating the program after each epoch); otherwise I would see a "class array index out of bound" error.

The model did train and is performing well after 30 epochs or so, but in my case each of those epochs had to be run with a separate script start. This was specifically with @vishesh9494's data from Google Drive.
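A minimal sketch of the one-epoch-per-run workaround described above: save the weights after every run and reload them at the next script start. The tiny model and data here are stand-ins; in practice the save/load calls would wrap the model built in deepRanking.py.

```python
# Resume training one epoch per script start by persisting weights between runs.
import os
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(8, activation='relu', input_shape=(4,)),
                    Dense(1)])
model.compile(optimizer='sgd', loss='mse')

weights_file = 'checkpoint_weights.h5'
if os.path.exists(weights_file):
    model.load_weights(weights_file)          # resume from the previous run

x, y = np.random.rand(96, 4), np.random.rand(96, 1)
model.fit(x, y, batch_size=24, epochs=1)      # exactly one epoch per script start
model.save_weights(weights_file)
```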

akarshzingade commented 6 years ago

Hey, guys. For the "Array index out of bound" error, as a hacky solution for now, can you make sure that the number of triplets is a multiple of (batch_size*3)? This should work; if it doesn't, let me know.
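A small helper following this suggestion, which trims triplets.txt so the number of entries is a multiple of batch_size * 3. It assumes one triplet per line, which may differ from the exact file format the sampler produces; adjust the file name and batch size to your own setup.

```python
# Trim the triplets file so its line count is a multiple of batch_size * 3.
batch_size = 24

with open('triplets.txt') as f:
    lines = [line for line in f if line.strip()]

usable = len(lines) - (len(lines) % (batch_size * 3))
print('Keeping {} of {} triplet lines'.format(usable, len(lines)))

with open('triplets.txt', 'w') as f:
    f.writelines(lines[:usable])
```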

cesarandreslopez commented 6 years ago

That worked. Thank you! @akarshzingade

visheshmistry commented 6 years ago

Hi!

I tried the code on another PC and it worked. I guess my previous machine had some issue. Well, it's working fine now!

Thank you @akarshzingade @Sanketyeru @cesarandreslopez

akarshzingade commented 6 years ago

Awesome! Let me know if you have any other questions :)