karpathy / neuraltalk2

Efficient Image Captioning code in Torch, runs on GPU
5.51k stars 1.26k forks

Insane memory usage - is there a tradeoff? #120

Open Qix- opened 8 years ago

Qix- commented 8 years ago

So I have an application for this, but I'd rather not spend $150+/month on an AWS instance to run the CPU version of neuraltalk2.

I understand it's a very memory-intensive program, but I honestly have quite a bit of CPU time to spare (minutes per caption, even).

Is there a way, or could there be one, to sacrifice processing power for memory usage (i.e. use much less memory at the cost of longer processing times)? My monthly costs are linear in my memory usage, not my processing time/usage.

I would imagine I'm not the only one with this issue.

It's worth mentioning I'm using the COCO pre-trained model for CPU. There is no GPU available on my AWS instance.


If this is a matter of reading the entire trained model into memory, could it be mmap()'d instead? Virtual address space isn't an issue at all (64-bit machine) and I'd be willing to take the performance hit.