Open lian999111 opened 4 years ago
I tried running the code and got the results below (without any variation):

Epoch: 0: Train Loss: 202.61404418945312, Train Accuracy: 0.9569480419158936
Epoch: 0: Test Accuracy: 0.9776443243026733
Epoch: 1: Train Loss: 139.39718627929688, Train Accuracy: 0.9824979901313782
Epoch: 1: Test Accuracy: 0.9873206615447998
Epoch: 2: Train Loss: 134.21328735351562, Train Accuracy: 0.984662652015686
Epoch: 2: Test Accuracy: 0.9828717708587646
Epoch: 3: Train Loss: 126.84203338623047, Train Accuracy: 0.9865127205848694
Epoch: 3: Test Accuracy: 0.9782004356384277
Epoch: 4: Train Loss: 132.76873779296875, Train Accuracy: 0.9853286743164062
Epoch: 4: Test Accuracy: 0.9874318838119507
Epoch: 5: Train Loss: 119.70941925048828, Train Accuracy: 0.9892324209213257
Epoch: 5: Test Accuracy: 0.9894338846206665
Epoch: 6: Train Loss: 116.26309204101562, Train Accuracy: 0.9901944398880005
Epoch: 6: Test Accuracy: 0.9856523275375366
Epoch: 7: Train Loss: 127.61954498291016, Train Accuracy: 0.9881038069725037
Epoch: 7: Test Accuracy: 0.986208438873291
Epoch: 8: Train Loss: 123.03273010253906, Train Accuracy: 0.9886958599090576
Epoch: 8: Test Accuracy: 0.9819819927215576
Epoch: 9: Train Loss: 97.82347106933594, Train Accuracy: 0.9923035502433777
Epoch: 9: Test Accuracy: 0.986208438873291

2 & 2: 0.054906852543354034
2 & 5: 1.4142134189605713
2 & 6: 1.4142135381698608
5 & 5: 5.960464477539063e-08
6 & 6: 0.38758668303489685
5 & 6: 1.4142134189605713
9 & 9: 5.960464477539063e-08
5 & 9: 1.4142134189605713
6 & 9: 1.4142134189605713
How long did it take to run on your laptop? Mine took about 8 minutes. I didn't check with the built-in function. Should we?
For me, it takes 1 min to finish the entire main script when using tensorflow-gpu. I notice that your results show large discrepancies in the cases "5 & 5" and "9 & 9", which is strange. If you didn't change anything, I think you trained it with digits 0 - 8, didn't you? You can check it on line 12; the following loads the labels 0 to 8:
used_labels = list(range(0, 9)) # the labels to be loaded
Sorry, my bad. I didn't see the e-08 at the end... So your results are pretty much what we expected. Still, I didn't expect such a big difference in results between machines.
So currently I shall change the values and see whether I can contribute to improving the performance. I shall also try to visualise the dataset I am getting here and look at the differences visually. Humberto suggested visualisation with t-SNE; depending on how tasks 1 and 2 go, I shall work on Humberto's suggestion.
Visualization sounds like a good suggestion. For t-SNE, I remember TensorBoard (which comes with TensorFlow) has some built-in features for it. I haven't tried it personally, but it might be worth checking out.
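Outside of TensorBoard, a quick way to try this is scikit-learn's `TSNE` directly on the encoding matrix. A minimal sketch (the `encodings`/`labels` arrays here are random stand-ins for the model's actual outputs, not code from this repo):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-ins for the model's encodings and the digit labels (hypothetical).
rng = np.random.default_rng(0)
encodings = rng.normal(size=(300, 32))   # 300 samples, 32-dim encodings
labels = rng.integers(0, 9, size=300)    # digits 0 - 8

# Project the encodings to 2-D for plotting (e.g. with matplotlib,
# coloring each point by its digit label).
embedded = TSNE(n_components=2, perplexity=30,
                init="random", random_state=0).fit_transform(encodings)
print(embedded.shape)  # (300, 2)
```

If the center loss is doing its job, points of the same digit should form tight, well-separated clusters in the 2-D projection.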
Hey, @nilahnair. I just did the first push of center loss method. Please have a look at utils_center_loss.py and main_center_loss.py when you are free.
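For readers unfamiliar with the method: center loss (Wen et al., ECCV 2016) penalizes the squared distance between each encoding and a learned per-class center, pulling same-class encodings together. A NumPy sketch of just the loss term (this is illustrative only, not the actual contents of utils_center_loss.py; in training, the centers themselves are also updated):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: half the mean squared distance of each feature
    vector to the center of its own class."""
    diffs = features - centers[labels]              # (N, D)
    return 0.5 * float(np.mean(np.sum(diffs ** 2, axis=1)))

# Toy check: features sitting exactly on their class centers give zero loss.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
features = centers[[0, 1, 1]]
labels = np.array([0, 1, 1])
print(center_loss(features, labels, centers))  # 0.0
```

In practice this term is added to the usual softmax cross-entropy with a weighting factor, and the centers are nudged toward the mini-batch mean of their class each step.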
Here are some preliminary tests on the Euclidean distance between encodings of different mnist images:
Format: [Number of Image 1] & [Number of Image 2]: [Distance Between Encodings]
Trained using 0 - 8:
2 & 2: 0.4499303996562958
2 & 5: 1.3837429285049438
2 & 6: 1.3376760482788086
5 & 5: 0.1271803379058838
6 & 6: 0.3412046432495117
5 & 6: 1.3841427564620972
9 & 9: 0.8249539732933044
5 & 9: 1.3930341005325317
6 & 9: 1.4107062816619873
Trained using 0 - 5:
2 & 2: 0.3311769366264343
2 & 5: 1.3843328952789307
2 & 6: 1.3454242944717407
5 & 5: 0.31822484731674194
6 & 6: 1.3699769973754883
5 & 6: 1.0688936710357666
9 & 9: 0.6681417226791382
5 & 9: 1.120006799697876
6 & 9: 1.21673583984375
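Side note on the numbers: the distances for different digits clustering around 1.4142 ≈ √2 suggest the encodings are L2-normalized, since two orthogonal unit vectors are exactly √2 apart. A sketch of how such a pairwise distance could be computed (assuming normalized encodings; `encoding_distance` is a hypothetical helper, not a function from this repo):

```python
import numpy as np

def encoding_distance(a, b):
    """Euclidean distance between two L2-normalized encodings."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.linalg.norm(a - b))

# Orthogonal unit vectors are sqrt(2) apart -- matching the ~1.4142
# values reported above for encodings of different digits.
d = encoding_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(d)  # 1.4142135623730951
```

Under this reading, same-digit distances near 0 mean tight clusters, and √2 is roughly the ceiling for distinct, well-separated classes on the unit sphere.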
@HumbertoE I found some bugs in the code I showed you and Shrutarv last time. This time, 9 & 9 actually works well in both cases, even though the model never sees any 9s during training. However, 6 doesn't work well when the model is trained only on 0 - 5. Some more analysis is needed.