thejacket opened this issue 5 years ago
It's hard to tell without code. If you can link me a repo with your fork, it would be a lot easier.
Also, you posted two training-accuracy graphs; what does validation accuracy look like over training?
Thank you for the quick reply!
I plotted the data from the different tries on one graph:
Pushed my changes here (there are training and validation images in the repo too): https://github.com/thejacket/Computer-Vision-Basics-with-Python-Keras-and-OpenCV/blob/master/notebook.ipynb
I've tried to tinker with the ImageDataGenerator parameters, but the results are still pretty bad. I've also re-recorded the training data several times, trying to move my hand and fingers a little, but with no positive results.
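For anyone following along: the geometric jitter that Keras's `ImageDataGenerator` parameters like `width_shift_range`/`height_shift_range` apply can be illustrated in plain numpy. This is only a sketch of the idea, not the Keras implementation:

```python
import numpy as np

def random_shift(img, max_frac=0.1, rng=None):
    """Shift a 2-D image by a random fraction of its size, padding the
    exposed border with zeros, similar in spirit to what Keras's
    width_shift_range/height_shift_range augmentation does."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    max_dy, max_dx = int(h * max_frac), int(w * max_frac)
    dy = int(rng.integers(-max_dy, max_dy + 1))
    dx = int(rng.integers(-max_dx, max_dx + 1))
    out = np.zeros_like(img)
    # Copy the overlapping region; pixels shifted out of frame are lost.
    dst_y, src_y = (slice(dy, h), slice(0, h - dy)) if dy >= 0 else (slice(0, h + dy), slice(-dy, h))
    dst_x, src_x = (slice(dx, w), slice(0, w - dx)) if dx >= 0 else (slice(0, w + dx), slice(-dx, w))
    out[dst_y, dst_x] = img[src_y, src_x]
    return out
```

One thing worth checking when tuning these parameters: if the live camera frames can show the hand in positions the augmented training set never covers, augmentation alone won't fix the skew.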
Did you get a chance to take a look? I also had the idea that, because of the dilation, the hand structure in the pictures is too thick (considering how the network's pooling layers work), so I tried eroding instead; it gives much more skeleton-like pictures. Unfortunately, still no luck, and the model is skewed towards predicting 'five'.
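For reference, the erosion being swapped in for dilation here is a standard morphological operation (`cv2.erode` in the repo's OpenCV pipeline). A numpy-only sketch of what it does to a 0/1 mask with a 3x3 kernel, just to illustrate why it thins the hand blob:

```python
import numpy as np

def erode3x3(binary):
    """Minimal numpy sketch of 3x3 binary erosion (what cv2.erode does
    with a 3x3 kernel on a 0/1 mask): a pixel survives only if its
    entire 3x3 neighbourhood is 1, so thick blobs shrink by one pixel
    per pass and thin structures become skeleton-like."""
    h, w = binary.shape
    padded = np.pad(binary, 1, constant_values=0)
    out = np.ones_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # AND together all nine shifted copies of the mask.
            out &= padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out
```

Whether erosion or dilation works better likely depends on how much fine finger detail survives the downsampling in the pooling layers.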
I seem to be having a similar problem, any fix? @thejacket @jrobchin
Sorry for not responding earlier. I was never able to diagnose the problem. I would say try the model that is provided in the model folder first and make sure it’s not an issue with the code.
Well, I've managed to run through all the steps in the notebook, recorded the data, and run the augmentation script, but the model isn't predicting gestures properly. I have 4 gestures in my model: number five, number zero (as in sign language), number one (pointing), and fist. The prediction is heavily skewed towards number five, and it also jumps very abruptly between predictions.
Oddly enough, both the training accuracy and the validation accuracy are very high.
Example bad prediction:
What could be the reason? I followed all the steps rigorously, and I have around 800 images for training and 100 for validation for each class.
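The abrupt jumping between predictions, as opposed to the class skew itself, can at least be damped independently of retraining. A hypothetical helper (not part of the original repo) that keeps the last few per-frame labels and reports the majority:

```python
from collections import Counter, deque

class PredictionSmoother:
    """Hypothetical smoothing helper: hold the last `window` per-frame
    class labels in a ring buffer and return the majority label, so a
    single noisy frame can't flip the displayed gesture."""

    def __init__(self, window=9):
        self.history = deque(maxlen=window)

    def update(self, label):
        self.history.append(label)
        # most_common(1) gives [(label, count)] for the majority label.
        return Counter(self.history).most_common(1)[0][0]
```

Usage would be one `update()` call per camera frame with the model's argmax label; a window of 5 to 10 frames is a reasonable starting point at webcam frame rates.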
Thank you for the tutorial!