Hi, I have a trained yolov4-tiny model and I am looking for the cheapest way to deploy it to AWS while maintaining 30 FPS. I read that t2.micro should be used for training #1380, but is that instance good enough for inference in this case? Since I'm using the tiny model, is it possible to choose a cheaper instance?
#1380 says to use t2.micro to create the AMI, not to train a model. Trial and error should let you map out the FPS-cost curve quickly; I would start with the cheapest GPU instance. Results will also depend on whether you use darknet or another framework for inference.
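A minimal sketch of that trial-and-error step: time a batch of inference calls on a candidate instance and check the sustained FPS against the 30 FPS target. The `infer` function here is a hypothetical placeholder for your actual yolov4-tiny call (e.g. a darknet detection or a cv2.dnn forward pass), simulated with a short sleep.

```python
import time

def infer(frame):
    # Hypothetical stand-in for the real yolov4-tiny inference call
    # (darknet, cv2.dnn, etc.); sleeps ~5 ms to simulate per-frame work.
    time.sleep(0.005)
    return []

def benchmark_fps(num_frames=100):
    # Warm up once so lazy initialization doesn't skew the timing.
    infer(None)
    start = time.perf_counter()
    for _ in range(num_frames):
        infer(None)
    elapsed = time.perf_counter() - start
    return num_frames / elapsed

if __name__ == "__main__":
    fps = benchmark_fps()
    print(f"Sustained throughput: {fps:.1f} FPS")
    print("Meets 30 FPS target" if fps >= 30 else "Below 30 FPS target")
```

Run this on each candidate instance with the real model loaded; the cheapest instance that stays comfortably above 30 FPS under your actual input resolution is the one to pick.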