USTC-Video-Understanding / I3D_Finetune

TensorFlow code for finetuning I3D model on UCF101.
144 stars · 43 forks

Running with the default configuration on a 1080 GPU, is this low-GPU-memory warning normal? #8

Open taojian1989 opened 6 years ago

taojian1989 commented 6 years ago

```
(base) junyi@junyi-all-series:/sda1/github/I3D_Finetune$ CUDA_VISIBLE_DEVICES=0 python finetune.py ucf101 rgb 1
2018-07-18 10:12:48.698756: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:05:00.0
totalMemory: 10.91GiB freeMemory: 10.35GiB
2018-07-18 10:12:48.698786: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
INFO:tensorflow:Restoring parameters from ./data/checkpoints/rgb_imagenet/model.ckpt
----Here we start!----
Output wirtes to output/finetune-ucf101-rgb-1
2018-07-18 10:13:08.888794: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.61GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-07-18 10:13:08.901823: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.82GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-07-18 10:13:08.974460: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.89GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-07-18 10:13:09.062963: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.54GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-07-18 10:13:09.117757: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.23GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
Epoch1, train accuracy: 0.042
Epoch2, train accuracy: 0.500
```
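For scale: the warnings above come from TensorFlow's BFC allocator and, as the message itself says, are not failures. A quick back-of-envelope sketch (the tensor shapes below are hypothetical illustrations, not values taken from `finetune.py`) shows how a single float32 activation tensor in a 3D CNN like I3D can reach the GiB range, which is why multi-GiB allocation requests appear in the log:

```python
# Rough size of a dense float32 tensor; the shape used below is a
# hypothetical example, not read out of the I3D_Finetune code.
def tensor_gib(shape, dtype_bytes=4):
    """Return the size in GiB of a dense tensor with the given shape."""
    n = 1
    for dim in shape:
        n *= dim
    return n * dtype_bytes / 2**30

# e.g. a batch of 6 clips x 64 frames x 112x112 spatial x 64 channels:
print(round(tensor_gib((6, 64, 112, 112, 64)), 2))  # ~1.15 GiB
```

A handful of intermediate tensors of this size, plus cuDNN workspace, easily explains requests of 1-3 GiB on an 11 GiB card.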

panna19951227 commented 5 years ago

Hi, did you happen to watch the GPU utilization while running this program? In my experiments the GPU utilization stays at 0%, even though GPU memory usage is very high.
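One way to check this systematically is to poll `nvidia-smi` while training runs. A minimal sketch (it assumes `nvidia-smi` is on `PATH`; the `parse_util` helper and the sample CSV format are illustrative):

```python
import subprocess

def parse_util(csv_text):
    """Parse nvidia-smi CSV output like 'utilization.gpu [%]\n0 %\n95 %\n'
    into a list of per-GPU integer percentages."""
    lines = [line.strip() for line in csv_text.strip().splitlines()[1:]]
    return [int(line.split()[0]) for line in lines]

def gpu_utilization():
    """Query current GPU utilization for all visible GPUs."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_util(out)
```

If utilization really sits at 0% while memory is full, the GPU is likely starved by the input pipeline (video decoding/IO on the CPU), not by the model itself.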