phobrain opened this issue 7 years ago
Finding that a 1080 Ti is limited to a training-set size of <50 pairs of 299x299x3 pics with a Siamese Inception v3, and given that Inception v3 was trained on batches of 1600 pics distributed 32 to a GPU, it looks like it would take a grant to assemble the machinery to do what I want, or a team of collaborators willing to pitch in. Pairs might benefit from larger batches than single pics do? In any case, the ideal case for exploration now seems to be greyscale histograms (150 dimensions), scaling up to RGB 32x32x32 histograms.
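For reference, the histogram features described above can be computed with plain numpy; this is a sketch (function names are mine, not from the project's code):

```python
import numpy as np

def grey_histogram(img, bins=150):
    """Normalized greyscale histogram with `bins` buckets
    (150 here, matching the dimensionality in the post).
    img: HxWx3 uint8 RGB array."""
    grey = img @ np.array([0.299, 0.587, 0.114])  # ITU-R 601 luma weights
    hist, _ = np.histogram(grey, bins=bins, range=(0, 256))
    return hist / hist.sum()

def rgb_histogram(img, bins_per_channel=32):
    """Joint RGB histogram: 32x32x32 = 32768 bins, flattened."""
    # Quantize each channel from 256 levels down to bins_per_channel levels.
    q = (img.astype(np.int32) * bins_per_channel) // 256
    # Combine the three quantized channels into a single bin index.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()
```

Both return unit-sum vectors, so pairs of them can be compared directly by a distance layer.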
First issue in scaling up:
Looks like one needs to analyze all the photos one will train with, to derive the preprocess() normalization factors (per-channel mean and std) as in this example:
http://blog.outcome.io/pytorch-quick-start-classifying-an-image/
Seems someone would have written a program by now that takes a set of images and outputs those numbers.
Here's a stab at it:
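(The original script isn't reproduced here.) A minimal one-pass sketch of what it has to compute, per-channel mean and std over the training set, assuming images arrive as HxWx3 float arrays already scaled to [0, 1], e.g. `np.asarray(Image.open(p)) / 255.0`; the helper name is mine:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean/std over an iterable of HxWx3 arrays in [0, 1],
    in the scale that Normalize()/preprocess() expects."""
    # Accumulate pixel count, sums, and sums-of-squares in one pass,
    # so the whole image set never has to fit in memory at once.
    n = 0
    s = np.zeros(3)
    s2 = np.zeros(3)
    for a in images:
        n += a.shape[0] * a.shape[1]
        s += a.sum(axis=(0, 1))
        s2 += (a ** 2).sum(axis=(0, 1))
    mean = s / n
    # Clamp at 0 to guard against tiny negative values from rounding.
    std = np.sqrt(np.maximum(s2 / n - mean ** 2, 0.0))
    return mean, std
```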
Resulting in this initial loader; the keras version works, this one is untested.
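The loader code itself isn't shown above. As a sketch of the pair-batching side a Siamese loader has to handle (all names here are illustrative, not from the actual loader), a framework-free generator might look like:

```python
import numpy as np

def pair_batches(images, pairs, batch_size=32, seed=0):
    """Generator of Siamese training batches.

    images: N x H x W x C array of preprocessed pics.
    pairs:  list of (i, j, match) records, match in {0, 1}.
    Yields ([left, right], match_vec), the shape a two-input
    Keras model expects from fit_generator.
    """
    rng = np.random.default_rng(seed)
    recs = np.asarray(pairs)
    while True:
        rng.shuffle(recs)  # reshuffle pair order each epoch
        for k in range(0, len(recs) - batch_size + 1, batch_size):
            b = recs[k:k + batch_size]
            left = images[b[:, 0]]
            right = images[b[:, 1]]
            yield [left, right], b[:, 2].astype(np.float32)
```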
Here's how the keras file-load-preproc portion looks:
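The Keras snippet isn't reproduced above. For reference, the Inception v3 preprocess step in Keras (`keras.applications.inception_v3.preprocess_input`) just rescales pixel values from [0, 255] to [-1, 1], which can be sketched without Keras installed:

```python
import numpy as np

def preprocess_inception(x):
    """Equivalent of Keras's inception_v3.preprocess_input:
    map uint8 pixels in [0, 255] to floats in [-1, 1]."""
    x = x.astype(np.float32)
    x /= 127.5
    x -= 1.0
    return x
```

This is the per-model fixed scaling, distinct from the per-dataset mean/std factors discussed earlier in the thread.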