Just saw the other post about multi-GPU -- closing as a dup. - Phil
In the original neural-style code there is a form of model parallelism that allows the VGG net to be split across multiple GPUs.
Is there any reason to think that adding similar code to this implementation is fundamentally bound to fail? In other words, is there something about applying optical flow to video frames that requires the model to reside on a single GPU?
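For context, the kind of model parallelism in question partitions the network's layers into contiguous groups, pins each group to one GPU, and copies activations between devices at the split points. The sketch below is a hypothetical illustration of that idea only, not the actual neural-style Torch code; NumPy arrays stand in for GPU tensors, and the layer shapes and device names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    """A linear + ReLU stand-in for one conv/fc layer of the network."""
    W = rng.standard_normal((n_in, n_out)) * 0.1
    return lambda x: np.maximum(x @ W, 0.0)

# Partition a stack of layers across two hypothetical devices.
# In real model parallelism each group's weights live on its own GPU.
partition = {
    "gpu:0": [make_layer(64, 128), make_layer(128, 128)],
    "gpu:1": [make_layer(128, 256), make_layer(256, 10)],
}

def forward(x):
    for device, layers in partition.items():
        # In a real implementation, x would be copied to `device` here
        # before running this group's layers; the copy below stands in
        # for that device-to-device transfer.
        x = x.copy()
        for layer in layers:
            x = layer(x)
    return x

out = forward(rng.standard_normal((1, 64)))
print(out.shape)  # (1, 10)
```

Nothing in this structure depends on what the input frames are, which is why the question of whether optical-flow-based video processing specifically prevents such a split is a reasonable one.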