Sorry, the best answer is "we don't know". The last time that classifier training pipeline was run, it was likely on an Azure NC6v3 instance, and we probably never tested it on anything smaller, and definitely never tested it without a GPU. Here are some random facts in random order that may be helpful:
Sorry we don't have an easier answer!
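[Editor's note: since the reply above mentions the pipeline was only ever run on a GPU instance (the NC6v3 carries a V100 with 16 GB of GPU memory), two generic levers usually help before shopping for bigger hardware: shrink the batch size, and enable mixed-precision training. The sketch below is a minimal PyTorch AMP loop, not the repo's actual training code; `model`, `loader`, `optimizer`, and `criterion` are hypothetical stand-ins for whatever your training script already defines.]

```python
import torch
from torch.cuda.amp import autocast, GradScaler

# Hypothetical names: model, loader, optimizer, criterion stand in for
# the objects your own training script constructs.
scaler = GradScaler()

for images, labels in loader:
    images, labels = images.cuda(), labels.cuda()
    optimizer.zero_grad()
    with autocast():                 # run the forward pass in fp16 where safe
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()    # scale loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
```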
Amazing, no, this is all super helpful. Thank you @agentmorris!!
I recently stepped through your classifier training workflow in an AWS SageMaker Studio Lab instance, and was able to begin fitting an efficientnet-b3 with my own data, but I quickly exhausted the available memory (15 GB) and then later disk space (25 GB). I think SageMaker Studio Lab is geared towards learning ML and running some simple experiments (it's also free), so it's not terribly surprising that I maxed it out right out of the gate. That said, before I start shopping around for a new classifier training environment, do you happen to have benchmarks on how much memory and disk space the classifier training process will consume?
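[Editor's note: since no published benchmarks seem to exist, one low-effort option is to instrument the training loop and collect your own numbers. The sketch below assumes a PyTorch-based pipeline and uses `psutil` for process memory; `data_dir` is a placeholder for wherever your run writes checkpoints and preprocessed crops, and `log_resource_usage` is a hypothetical helper, not part of the repo.]

```python
import shutil
import psutil
import torch

def log_resource_usage(tag, data_dir="."):
    """Print current process RAM, peak GPU memory, and disk usage.

    data_dir should point at the directory your training run writes to;
    adjust it to your own output location.
    """
    rss_gb = psutil.Process().memory_info().rss / 1e9
    print(f"[{tag}] process RAM: {rss_gb:.1f} GB")

    if torch.cuda.is_available():
        gpu_gb = torch.cuda.max_memory_allocated() / 1e9
        print(f"[{tag}] peak GPU memory: {gpu_gb:.1f} GB")

    usage = shutil.disk_usage(data_dir)
    print(f"[{tag}] disk used: {usage.used / 1e9:.1f} GB "
          f"of {usage.total / 1e9:.1f} GB")

# Call once before training starts and once at the end of each epoch, e.g.:
# log_resource_usage("epoch 3", data_dir="/path/to/training/output")
```

Logging these numbers for the first epoch or two on a small data subset should give a usable estimate of whether a candidate instance type will hold up over a full run.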