ghost opened this issue 8 years ago
I don't think the memory bottleneck will be a problem, since there isn't much CPU / GPU communication in the hot loop.
However, I'm not totally sure the C870 will work with Torch at all, since it came out in 2007 and only has CUDA compute capability 1.0. It's worth a try, though!
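For what it's worth, the compatibility question comes down to a simple version comparison. A rough sketch below — the C870's compute capability is from NVIDIA's published specs, and the assumption that modern CUDA toolkits require at least compute capability 2.0 is hedged (CUDA 7.0 dropped 1.x support; check the release notes for whatever toolkit Torch was built against):

```python
# Decide whether a GPU's CUDA compute capability meets a toolkit's minimum.
# Compute capabilities are (major, minor) pairs and compare lexicographically.

def meets_minimum(cc, minimum):
    """Return True if compute capability `cc` is at least `minimum`."""
    return cc >= minimum

c870 = (1, 0)        # Tesla C870 (G80 chip), per NVIDIA's spec sheet
modern_min = (2, 0)  # assumed minimum for CUDA 7.x-era toolkits

print(meets_minimum(c870, modern_min))  # whether the C870 clears the bar
```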
I was actually referring to the PCI-e bottleneck I may be introducing by running each card over a single lane.
That's what I was referring to: with PCI-e x1 you'll have lower bandwidth (and maybe higher latency?) between the GPU and CPU. However, this isn't a big issue, because neural-style doesn't use much CPU / GPU communication.
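To put rough numbers on it — a back-of-envelope estimate, assuming PCIe 1.x delivers about 250 MB/s of usable bandwidth per lane per direction, and that the VGG-19 weights are on the order of 550 MB (both figures are ballpark, not measured):

```python
# Back-of-envelope PCI-e transfer-time estimate for shipping model weights
# to the GPU. Assumed numbers: ~250 MB/s usable per PCIe 1.x lane,
# ~550 MB for the VGG-19 caffemodel.

def transfer_seconds(megabytes, lanes, mb_per_sec_per_lane=250):
    """Time to move `megabytes` over `lanes` PCIe 1.x lanes."""
    return megabytes / (lanes * mb_per_sec_per_lane)

model_mb = 550  # approximate VGG-19 weight size (assumption)
print(f"x1:  {transfer_seconds(model_mb, 1):.2f} s")   # single lane
print(f"x16: {transfer_seconds(model_mb, 16):.2f} s")  # full-width slot
```

So even over x1 you mostly pay that cost once at startup; the optimization loop itself stays on the GPU.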
The fact that each card only has 1.5GB of memory is also an issue, but you may be able to work around that by using a smaller model or a smaller image size.
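A quick sketch of why image size dominates memory here — VGG's early conv layers keep 64 float32 feature maps at full image resolution (the channel count is from the published VGG architecture; this ignores all the other layers and framework overhead, so real usage is considerably higher):

```python
# Rough memory estimate for a single early-VGG activation tensor:
# image_size x image_size pixels, 64 channels, 4 bytes per float32.

def conv1_activation_mb(image_size, channels=64, bytes_per_float=4):
    """Megabytes for one conv1-sized feature-map tensor."""
    return image_size * image_size * channels * bytes_per_float / 1024**2

for size in (256, 512, 1024):
    print(f"{size}px -> ~{conv1_activation_mb(size):.0f} MB per tensor")
```

Halving the image size cuts each such tensor by 4x, which is why shrinking the output image is usually the easiest way to fit in 1.5GB.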
Ah, got it. Hopefully it'll run!
I scored 4 Nvidia Tesla C870s for a total of $20 on eBay. I was hoping to mount them in a milk crate and connect them over PCI-e x1. Will this cause bottlenecking? The cards only have 1.5GB of VRAM each.