
Problem in using GPU with tflearn #862

Open hossein-amirkhani opened 7 years ago

hossein-amirkhani commented 7 years ago

I am running the alexnet example. I found that training is really slow (about 1 second per batch, even though I have a 1080 GPU). After changing my code to the following, I got an InvalidArgumentError.

with tf.device('/gpu:0'):
    model = tflearn.DNN(network, checkpoint_path='model_alexnet',
                        max_checkpoints=1, tensorboard_verbose=2)
    model.fit(X, Y, n_epoch=1000, validation_set=0.1, shuffle=True,
              show_metric=True, batch_size=64, snapshot_step=200,
              snapshot_epoch=False, run_id='alexnet_oxflowers17')

I have the GPU-compatible version of TensorFlow installed, and there is no problem using pure TensorFlow with the GPU. Thanks in advance for your help.
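
For reference, a minimal way to verify device placement in plain TensorFlow 1.x (the Session API that tflearn builds on) is to enable log_device_placement; this sketch just multiplies two constants and prints where the ops land:

    import tensorflow as tf

    # Log where each op is placed; the placements are printed to the
    # console when the session runs (TF 1.x API).
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        a = tf.constant([1.0, 2.0, 3.0], name='a')
        b = tf.constant([4.0, 5.0, 6.0], name='b')
        print(sess.run(a * b))  # look for "/device:GPU:0" in the log output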

BBarbosa commented 7 years ago

Hi @h-amirkhani! I think I have faced this before. Do you have a GPU monitoring program? If so, run it and check whether the computation is being done on the CPU or the GPU. However, TFLearn has a primitive to control GPU usage. Try adding the following line somewhere above the lines you presented: tflearn.init_graph(num_cores=4, gpu_memory_fraction=0.5)
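
For example, a minimal sketch of where that call could go (the num_cores and gpu_memory_fraction values are just the ones suggested above, not tuned; soft_placement, which init_graph enables by default, lets ops without a GPU kernel fall back to the CPU, which may also be why the hard /gpu:0 pin raised InvalidArgumentError):

    import tflearn

    # Configure the graph before building the network and DNN model.
    # soft_placement=True (the default) allows ops without GPU kernels
    # to fall back to the CPU instead of raising InvalidArgumentError.
    tflearn.init_graph(num_cores=4, gpu_memory_fraction=0.5,
                       soft_placement=True)

    model = tflearn.DNN(network, checkpoint_path='model_alexnet',
                        max_checkpoints=1, tensorboard_verbose=2)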

Hope it helps