soubhiksanyal / RingNet

Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
https://ringnet.is.tue.mpg.de
MIT License
821 stars 170 forks

Minimum gpu memory requirement #54

Open Babylonehy opened 3 years ago

Babylonehy commented 3 years ago

Hi, I tried to run the demo on an RTX 2060 (6 GB), but it unfortunately ran out of GPU memory. What is the minimum GPU memory requirement?

GPU:0 with 591 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2060, pci bus id: 0000:09:00.0, compute capability: 7.5)
Restoring checkpoint ./model/ring_6_68641..
218
Resizing so the max image size is 224..
2020-11-13 22:02:41.101218: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 530.16MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.101256: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 530.16MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.117759: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 536.34MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.117785: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 536.34MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.123718: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.06GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.123733: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.06GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.125294: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 274.25MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.125308: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 274.25MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.135688: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 545.31MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-13 22:02:41.135703: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 545.31MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
GungnirASHTTTTT commented 3 years ago

Please refer to this:

gpu = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(gpu[0], True)
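
For reference, a minimal sketch of how that memory-growth setting could be applied before the model is built, assuming a TensorFlow 2.x runtime (the exact place to put it in this repo's demo script may differ):

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving (almost) all of it
# up front, which can help on smaller cards such as a 6 GB RTX 2060.
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

If the demo is run under TensorFlow 1.x (which RingNet's original code targets), the roughly equivalent setting is allow_growth on the session config:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow GPU memory usage as needed
sess = tf.Session(config=config)

Note that memory growth only avoids pre-allocating the whole card; if the network itself needs more memory than the GPU has, the run can still go out of memory.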