Hi @NTNguyen13, you can specify this by passing a context param during compile (this is only for the mxnet backend). It's a list of strings, and you can specify any GPUs available for the model. For example, use 2 GPUs for model1 and 2 GPUs for model2. With this, you don't need to use the Keras multi_gpu_model API:
model1.compile(loss=keras.losses.categorical_crossentropy,
               optimizer=keras.optimizers.Adadelta(),
               metrics=['accuracy'],
               context=["gpu(0)", "gpu(1)"])
model2.compile(loss=keras.losses.categorical_crossentropy,
               optimizer=keras.optimizers.Adadelta(),
               metrics=['accuracy'],
               context=["gpu(2)", "gpu(3)"])
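For training one network per GPU, the same context param can pin each model to a single device. Below is a minimal sketch, assuming keras-mxnet with 4 visible GPUs; build_model() is a hypothetical placeholder for your own network definition, not part of the API discussed here:

import keras

def compile_on_gpu(model, gpu_id):
    # pin this model's training to a single GPU via the mxnet-backend context param
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'],
                  context=["gpu(%d)" % gpu_id])
    return model

# e.g. launch 4 separate training scripts/processes, each doing:
#   model = compile_on_gpu(build_model(), 0)   # gpu(0) here, gpu(1) in the next script, etc.
#   model.fit(x_train, y_train, ...)

One way to run the 4 trainings simultaneously is to start each script in its own process, with each script passing a different gpu_id.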
Hi @roywei, it works wonders, thank you very much!
I really like the low memory usage of MXNet. I have 4 GPUs on the same computer and I want to use each of them to train a different network, so I can train 4 networks simultaneously. How can I do this in Keras with the MXNet backend? Currently, when I start another training run, it still runs on the same GPU, which hurts performance.