nateraw / Lda2vec-Tensorflow

Tensorflow 1.5 implementation of Chris Moody's Lda2vec, adapted from @meereeum
MIT License

OOM with GPU computing #19

Closed · cyberyu closed this issue 5 years ago

cyberyu commented 5 years ago

Great package!

I tried to speed up my computation on a cloud-based P2 instance (K80 GPU) using an ancient Tensorflow-GPU package (1.2.1). I don't think it supports the run option report_tensor_allocations_upon_oom=True, so I set my batch size to 10.
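For reference, on a newer TF 1.x build the option would be passed through tf.RunOptions at each sess.run call. This is only a minimal standalone sketch with a toy graph and optimizer, not the actual lda2vec training loop:

```python
import tensorflow as tf

# Toy graph standing in for the real model; the point is only how the run
# option is wired in. report_tensor_allocations_upon_oom is not available
# in TF 1.2.1, which is why I dropped the batch size instead.
x = tf.Variable(tf.random_normal([10967, 20]))
loss = tf.reduce_sum(tf.square(x))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # If this step runs out of GPU memory, the error message lists the
    # tensors that were allocated at the time of the failure.
    sess.run(train_op, options=run_options)
```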

I found it still went OOM when trying to build a graph of shape (10967, 20). I attached a screenshot here. Maybe the batch size is not actually applied somewhere in the code? Just curious.
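To clarify what I mean by the batch size being applied: I would expect per-step memory to scale with the minibatch that is fed in, not with the full (10967, 20) matrix from the error. A rough sketch of that pattern, with made-up names rather than the actual code in this repo:

```python
import numpy as np
import tensorflow as tf

batch_size = 10
num_docs, num_topics = 10967, 20  # the shape reported in the OOM message

# The full matrix lives once as a variable, but each training step only
# gathers the rows for the current minibatch of ids, so per-step
# activations scale with batch_size rather than num_docs.
doc_topic = tf.Variable(tf.random_normal([num_docs, num_topics]))
doc_ids = tf.placeholder(tf.int32, [None])
batch_rows = tf.gather(doc_topic, doc_ids)  # shape (batch_size, num_topics)
loss = tf.reduce_sum(tf.square(batch_rows))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ids = np.random.randint(0, num_docs, size=batch_size)
    sess.run(train_op, feed_dict={doc_ids: ids})
```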

I haven't tried this on my own GPU yet, so I'm not sure whether the same error occurs on a more recent TF package with the correct run option set.

(Update: I just ran the code on an EC2 instance with TF-GPU 1.9, and there is no error. I guess the issue is an incompatibility with the old TF package. However, fixing that OOM may still require some changes in the code.)

[screenshot: 2019-02-17_1131, OOM error output]
nateraw commented 5 years ago

Thanks for your feedback! I have no idea how to fix this. Do you have any insight?

nateraw commented 5 years ago

Closing because I don't think this is in scope. Definitely still open to suggestions on this, though. Thanks!