LoSealL / VideoSuperResolution

A collection of state-of-the-art video and single-image super-resolution architectures, reimplemented in TensorFlow.
MIT License

GPU question #39

Closed. wanghai111 closed this issue 5 years ago.

wanghai111 commented 5 years ago

Why can't I use the GPU to train the network?

LoSealL commented 5 years ago

What exactly is the command you are running? And what is your environment?

ruyiluo commented 5 years ago

Hello, how can I use the GPU to train my model? When I run `python run.py --model=srcnn --dataset=div2k`, the terminal shows:

```
2019-05-31 16:15:27.635084: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
INFO:tensorflow:Fitting: SRCNN | 2019-05-31 16:18:11 | Epoch: 1/50 | LR: 0.01 |
 10%|##########6 | 21/200 [02:54<22:52, 7.67s/batch, loss=17403.28125]
```

When I run `nvidia-smi` in the terminal, it shows:

```
Fri May 31 16:18:36 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   41C    P0    N/A /  N/A |      0MiB /  4040MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

So, how can I use the GPU to train my model? Thanks.
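As a quick way to diagnose this kind of symptom, TensorFlow 1.x can log which device each op is placed on. The snippet below is a generic sketch using plain TensorFlow 1.x calls, not a command from this repository:

```python
import tensorflow as tf

# With a working tensorflow-gpu install, the matmul below is placed on
# /device:GPU:0 and the placement decision is printed to the console.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    print(sess.run(tf.matmul(a, b)))
```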

LoSealL commented 5 years ago

First, make sure you have installed tensorflow-gpu correctly. On my machine:

```
>>> tf.test.is_built_with_cuda()
True

>>> tf.InteractiveSession()
2019-05-31 16:29:32.297519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX TITAN X major: 5 minor: 2 memoryClockRate(GHz): 1.076
pciBusID: 0000:65:00.0
totalMemory: 11.92GiB freeMemory: 11.80GiB
2019-05-31 16:29:32.297551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-31 16:29:32.298484: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-31 16:29:32.298496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-05-31 16:29:32.298503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-05-31 16:29:32.298831: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11479 MB memory) -> physical GPU (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:65:00.0, compute capability: 5.2)
2019-05-31 16:29:32.301932: I tensorflow/core/common_runtime/process_util.cc:71] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
<tensorflow.python.client.session.InteractiveSession object at 0x7fac0c6b4e80>
```
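A minimal sketch of the same check as a standalone script, assuming TensorFlow 1.x (`device_lib` is an internal module, used here only for illustration):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# True only if this TensorFlow binary was built with CUDA support.
print(tf.test.is_built_with_cuda())

# A working tensorflow-gpu install lists a "/device:GPU:0" entry here
# alongside the usual "/device:CPU:0".
print([d.name for d in device_lib.list_local_devices()])
```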

ruyiluo commented 5 years ago

You're right, it works now!

Thanks.