So I have an application for this, but I'd rather not spend $150+/month on an AWS instance to run the CPU version of neuraltalk2.
I understand it's a very memory-intensive program, but I honestly have quite a bit of CPU time to spare (I can wait minutes per run, even).
Is there a way (or could there be a way) to trade processing power for memory usage, i.e. use much less memory at the cost of longer processing times? My monthly costs scale with memory usage, not with processing time.
I would imagine I'm not the only one with this issue.
It's worth mentioning I'm using the CPU version of the COCO pre-trained model, and there's no GPU available on the AWS instances I'm using.
If the memory cost comes from reading the entire trained model into memory, could it be mmap()'d instead? Virtual address space isn't an issue at all (it's a 64-bit machine), and I'd be willing to take the performance hit.
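To illustrate what I mean at the C level, here's a minimal sketch of mapping a checkpoint read-only so the kernel demand-pages the weights and can drop clean pages under memory pressure, trading extra I/O for a smaller resident set. The file path is hypothetical, and since neuraltalk2 loads its .t7 checkpoint through Torch, this would need to be wired in on the Torch/Lua side rather than being a drop-in call:

```c
/* Sketch: map a large pre-trained checkpoint read-only instead of
 * read()-ing it all into heap memory. Pages are faulted in on first
 * touch and can be evicted without touching swap, so resident memory
 * stays low at the cost of extra page faults (i.e. more time).
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "model_cpu.t7";   /* hypothetical checkpoint path */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* PROT_READ + MAP_PRIVATE: pages are backed by the file on disk,
     * loaded lazily, and reclaimable by the kernel when memory is tight. */
    void *weights = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (weights == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hint that access will be scattered, so aggressive readahead
     * doesn't inflate the resident set.                                 */
    madvise(weights, st.st_size, MADV_RANDOM);

    printf("mapped %lld bytes at %p\n", (long long)st.st_size, weights);

    /* ... hand the mapping to whatever consumes the raw tensor data ... */

    munmap(weights, st.st_size);
    close(fd);
    return 0;
}
```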