ericbolo opened this issue 6 years ago

**ericbolo:** On an Amazon g2.2xlarge instance, running train_net.py fails with an out-of-memory error. Allocator stats at the crash:

Limit:        3868721152  (~3.6 GiB)
InUse:        3824706816
MaxInUse:     3825321984
NumAllocs:    35475
MaxAllocSize: 370720768

The allocator limit is almost fully in use. What are the memory requirements for training the system with the default 40-dimensional embedding?
**Reply:** I didn't monitor the memory usage in detail. I used a Titan X GPU (12 GB) and 32 GB of RAM, and it worked fine. If you run out of memory, you can either make your training set smaller (only for testing the model), or dump the data to your hard disk in advance and then read it back with a TensorFlow reading pipeline during training (a sketch follows below).
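The reply doesn't spell out what that pipeline looks like, so here is a minimal sketch of the idea using the TFRecord + tf.data API from TensorFlow 2.x (this repo predates TF 2, so the queue-based TF 1.x equivalents may be needed instead). The file name, feature layout, and `feature_dim` below are hypothetical, not taken from this repo:

```python
import numpy as np
import tensorflow as tf

# Hypothetical file name; pick whatever fits your setup.
RECORD_FILE = "train_features.tfrecord"

def write_records(features, labels):
    # One-time pass: serialize each (feature, label) pair to disk so the
    # whole training set never has to sit in RAM at once.
    with tf.io.TFRecordWriter(RECORD_FILE) as writer:
        for x, y in zip(features, labels):
            ex = tf.train.Example(features=tf.train.Features(feature={
                "x": tf.train.Feature(
                    float_list=tf.train.FloatList(value=np.ravel(x))),
                "y": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[int(y)])),
            }))
            writer.write(ex.SerializeToString())

def make_dataset(feature_dim, batch_size=32):
    # Streams examples from disk during training instead of loading
    # everything up front; only a few batches are resident at a time.
    def parse(raw):
        spec = {
            "x": tf.io.FixedLenFeature([feature_dim], tf.float32),
            "y": tf.io.FixedLenFeature([], tf.int64),
        }
        ex = tf.io.parse_single_example(raw, spec)
        return ex["x"], ex["y"]

    return (tf.data.TFRecordDataset(RECORD_FILE)
            .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
            .shuffle(10_000)
            .batch(batch_size)
            .prefetch(tf.data.AUTOTUNE))
```

A dataset built this way can be fed directly to a training loop (e.g., a Keras `model.fit` call), so host memory stays bounded regardless of training-set size.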
**ericbolo:** To play with the model, I reduced the embedding dimension to 10, and it runs. I'll also try a smaller dataset.

I'll do some memory-usage monitoring at some point and post the stats (a minimal polling sketch follows below). Thanks for the quick answer!
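For that kind of monitoring, a small polling script is usually enough; this sketch just shells out to the standard `nvidia-smi` query flags (it assumes the NVIDIA driver tools are installed, as they are on a GPU EC2 instance, and is not part of this repo):

```python
import subprocess
import time

def log_gpu_memory(interval_s=5.0, samples=12):
    """Print used/total GPU memory for each GPU every interval_s seconds."""
    cmd = ["nvidia-smi",
           "--query-gpu=memory.used,memory.total",
           "--format=csv,noheader,nounits"]
    for _ in range(samples):
        out = subprocess.check_output(cmd).decode()
        for gpu_id, line in enumerate(out.strip().splitlines()):
            used_mib, total_mib = (int(v) for v in line.split(","))
            print("GPU %d: %5d / %5d MiB in use" % (gpu_id, used_mib, total_mib))
        time.sleep(interval_s)

if __name__ == "__main__":
    log_gpu_memory()
```

Running this in a second terminal while train_net.py trains gives a rough peak-usage figure to post back here.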