I got the demo working now. I had to include

```python
import tensorflow as tf

# Create a session that allocates GPU memory on demand
# instead of grabbing it all up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
# Silence TensorFlow log output below ERROR level.
tf.logging.set_verbosity(tf.logging.ERROR)
```

in run_dgp_demo.py (line 158) to avoid another error message, and then, importantly, decreased `--batch_size` to 4 (the default is 10). Interestingly, after it has run through once, I can increase the batch size back to 10 or even 40 without any error messages.
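For anyone on a newer TensorFlow install (1.14+/2.x) where the bare `tf.ConfigProto` path is deprecated or removed, the same setup should work through the `tf.compat.v1` shim — a minimal sketch, not tested against DGP itself:

```python
import tensorflow as tf

# Same session setup via the TF1 compatibility shim.
config = tf.compat.v1.ConfigProto()
# Allocate GPU memory on demand instead of reserving it all at startup.
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
# Keep the demo output readable by hiding everything below ERROR level.
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
```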
Hi, I encountered the same error, but even after adding the code above, it still wasn't resolved. What could be the reason? Could you help me? Thanks!
Hi Ziyi,
How large are your frames? If the input frames are large, it's possible that they won't fit in memory no matter how small the batch size is.
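If you want a quick way to check, something like this prints the frame dimensions of a video — just a sketch assuming OpenCV (`cv2`) is installed; the path is a placeholder for one of your own videos:

```python
import cv2

# Placeholder path -- point this at one of your own videos.
cap = cv2.VideoCapture("videos/my_session.avi")
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()

print(f"frame size: {width}x{height}")
```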
Hi, I'm very curious how DGP performs on our existing DLC data, so I installed DGP following the instructions on Ubuntu 20.04 with a GeForce RTX 2080 (8 GB), CUDA Toolkit 10.0.130, and driver version 450.102.04. On this machine, DLC (2.0.8) works without problems, but I'm running into memory problems when trying the test run `python demo/run_dgp_demo.py --dlcpath data/Reaching-Mackenzie-2018-08-30 --test`. Memory monitoring shows about 5.5 GB of memory in use when it tries to allocate an additional 2.53 GB. Is there a way to circumvent this error? With DLC, I used to solve this by allowing GPU growth, but I can see in the code that this has already been included... Maybe this part of the error message is key:
```
UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory
```
Below is the full output. Thanks a lot! Oliver