jcjohnson / neural-style

Torch implementation of neural style algorithm
MIT License

Possible to Swap VRAM out into system RAM to simulate Multi-GPU? #472

Open NaLG opened 5 years ago

NaLG commented 5 years ago

As similarly noted in jcjohnson/neural-style/issues/410 - I've noticed that for multi-GPU runs, only one GPU seems to be processing at a time, while the memory is split between all of them. In my runs (a 3000x2000 pixel image split across 4 GPUs), the active GPU changes only a handful of times.

How feasible would it be to use a swap space in system memory (or even the pagefile) to back multiple 'virtual' GPUs? The single active 'virtual GPU' would have its VRAM loaded in from swap while its layers are being updated, then swapped back out to make room for the next 'virtual GPU's' memory.
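As a rough illustration of what that scheme would involve, here is a minimal sketch (in Python, not neural-style's Torch/Lua, and entirely hypothetical - the class and names below are made up for illustration) of the paging logic: per-layer buffers live in host memory, and a size-limited "device" pool pages them in one at a time, evicting the oldest resident buffer, so only the active 'virtual GPU' ever holds VRAM.

```python
# Hypothetical sketch of the proposed 'virtual GPU' paging scheme.
# Buffers are plain bytearrays standing in for VRAM allocations;
# a real implementation would move tensors between GPU and host.

class VirtualGpuPager:
    def __init__(self, vram_budget):
        self.vram_budget = vram_budget   # max bytes resident "on device"
        self.device = {}                 # layer name -> resident buffer
        self.host = {}                   # layer name -> swapped-out buffer
        self.swaps = 0                   # count of page-in/page-out transfers

    def _resident_bytes(self):
        return sum(len(b) for b in self.device.values())

    def add_layer(self, name, buf):
        # All layers start swapped out in host memory.
        self.host[name] = buf

    def page_in(self, name):
        # Bring a layer's buffer into the device pool, evicting the
        # oldest resident buffers until the new one fits under budget.
        if name in self.device:
            return
        buf = self.host.pop(name)
        while self.device and self._resident_bytes() + len(buf) > self.vram_budget:
            oldest = next(iter(self.device))
            self.host[oldest] = self.device.pop(oldest)
            self.swaps += 1
        self.device[name] = buf
        self.swaps += 1

    def forward(self, order):
        # Process layers sequentially: only the active layer's buffer
        # needs to be resident, mimicking one 'virtual GPU' at a time.
        for name in order:
            self.page_in(name)


# Demo: four 60-byte layers under a 100-byte budget, so only one
# layer fits on the "device" at a time and each step forces a swap.
pager = VirtualGpuPager(vram_budget=100)
for i in range(4):
    pager.add_layer(f"conv{i}", bytearray(60))
pager.forward([f"conv{i}" for i in range(4)])
```

The per-layer swap traffic this generates is the main cost: every layer update pays a host-to-device transfer, which is where fast Optane/NVMe/RAM backing would matter.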

A single high-VRAM GPU plus additional Optane/NVMe/RAM is much easier to get than multiple high-VRAM cards. Is there anything about neural-style, technically speaking, that would prevent this?