Closed — ingalls closed this 1 year ago
I tested this and it works!
Note: since this uses the decode config.properties, it uses port 7080, etc., rather than the typical 8080, which I reserved for the GPU service for local testing in the encode config.properties.
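For reference, a decode config.properties with the ports shifted into the 7xxx range might look like the sketch below. This is a TorchServe-style properties file; the exact keys and addresses shown are illustrative assumptions, not copied from the repo:

```properties
# Hypothetical decode config.properties -- ports moved to the 7xxx
# range so 8080 stays free for the local GPU encode service.
inference_address=http://0.0.0.0:7080
management_address=http://0.0.0.0:7081
metrics_address=http://0.0.0.0:7082
```

The encode config.properties would keep the default 8080-range addresses for GPU testing.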
Inference time for encode on the CPU was a whopping 1 min 12 s for one small 512x512 image. @geohacker @ingalls @srmsoumya @batpad Not sure if we should be bundling the encode on the CPU, even for demos/quick development, given how long encode takes.
I think it's ok! Let's have an always-on CPU. We can probably get a bigger spot CPU instance later on, but let's continue with this architecture for now.
Context
Develop a CPU-based CloudFormation deployment.
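A minimal sketch of what the CPU-only CloudFormation stack could look like, assuming a single always-on EC2 instance serving both encode and decode. The resource name, instance type, and AMI parameter here are placeholders, not the actual template from this repo:

```yaml
# Hypothetical CPU-only CloudFormation sketch -- resource names,
# instance type, and AMI parameter are illustrative assumptions.
AWSTemplateFormatVersion: '2010-09-09'
Description: Always-on CPU instance for encode/decode serving (sketch)
Parameters:
  AmiId:
    Type: AWS::EC2::Image::Id
    Description: AMI with the model server preinstalled
Resources:
  CpuServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: c5.2xlarge   # compute-optimized CPU class (assumption)
      ImageId: !Ref AmiId
```

A larger or spot-priced instance type could be swapped in later, per the discussion above, without changing the template's shape.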