carpedm20 / ENAS-pytorch

PyTorch implementation of "Efficient Neural Architecture Search via Parameters Sharing"
Apache License 2.0

gpu_nums> 1 #16

Open lianqing11 opened 6 years ago

lianqing11 commented 6 years ago

If you want to run on more than one GPU, the weights used inside self.shared's forward pass have to be accessed through a registered container such as a ModuleList (e.g. self._w_h, which is already a ModuleList). Otherwise you get `RuntimeError: tensors are on different GPUs`, because parameters that are only stored in a plain Python list are not registered on the module, so they are not replicated to the other GPUs when self.forward(xx) is called under data parallelism.
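For reference, here is a minimal, self-contained sketch of the difference. The class name `SharedToy` and the `registered` flag are made up for illustration and are not from this repo; only the `self._w_h` attribute name mirrors the code being discussed.

```python
import torch
import torch.nn as nn


class SharedToy(nn.Module):
    """Toy module (hypothetical, for illustration only).

    Parameters kept in a plain Python list are not registered on the
    module, so nn.DataParallel cannot replicate them to the other GPUs,
    and the forward pass fails with "tensors are on different GPUs".
    """

    def __init__(self, num_blocks=4, hidden=8, registered=True):
        super().__init__()
        if registered:
            # Registered container: replicated to every GPU by DataParallel.
            self._w_h = nn.ModuleList(
                [nn.Linear(hidden, hidden) for _ in range(num_blocks)]
            )
        else:
            # Plain Python list: the weights stay on the device they were
            # created on and are never replicated.
            self._w_h = [nn.Linear(hidden, hidden).cuda() for _ in range(num_blocks)]

    def forward(self, x):
        for layer in self._w_h:
            x = torch.tanh(layer(x))
        return x


if torch.cuda.device_count() > 1:
    ok = nn.DataParallel(SharedToy(registered=True).cuda())
    ok(torch.randn(16, 8).cuda())        # runs on all GPUs

    broken = nn.DataParallel(SharedToy(registered=False).cuda())
    # broken(torch.randn(16, 8).cuda())  # raises: tensors are on different GPUs
```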