zhr1201 / deep-clustering

A tensorflow implementation for Deep clustering: Discriminative embeddings for segmentation and separation

Memory requirements #4

Open ericbolo opened 6 years ago

ericbolo commented 6 years ago

On an Amazon g2.2xlarge instance, running train_net.py gives me an out-of-memory error.

Stats:

Limit: 3868721152 InUse: 3824706816 MaxInUse: 3825321984 NumAllocs: 35475 MaxAllocSize: 370720768

What are the memory requirements for training the system with the default 40-dimensional embedding?

zhr1201 commented 6 years ago

I didn't monitor the memory usage in detail. I used a Titan X GPU (12 GB) and 32 GB of RAM, and it worked fine. If you run out of memory, you can either make your training set smaller (only for testing the model), or dump the data to your hard disk in advance and then read it back through a TensorFlow input pipeline during training.
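The "dump to disk, read back during training" suggestion can be sketched with a memory-mapped array: batches are paged in from disk on demand, so resident memory stays small regardless of dataset size. The file name, shapes, and batch size below are hypothetical, and the actual repo would feed these batches through a TensorFlow reading pipeline rather than plain NumPy.

```python
import numpy as np

N_SAMPLES, FEAT_DIM = 1000, 129  # illustrative sizes, not from the repo

# One-time preprocessing: write features to disk instead of keeping them in RAM.
feats = np.memmap("features.dat", dtype=np.float32, mode="w+",
                  shape=(N_SAMPLES, FEAT_DIM))
feats[:] = np.random.rand(N_SAMPLES, FEAT_DIM).astype(np.float32)
feats.flush()
del feats

# During training: map the file read-only; slices are paged in lazily.
feats = np.memmap("features.dat", dtype=np.float32, mode="r",
                  shape=(N_SAMPLES, FEAT_DIM))

def batches(batch_size=32):
    for start in range(0, N_SAMPLES, batch_size):
        yield np.asarray(feats[start:start + batch_size])

first = next(batches())
print(first.shape)  # (32, 129)
```

The same pattern carries over to tf.data by wrapping the generator, which keeps only one batch in memory at a time.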

ericbolo commented 6 years ago

To play with the model, I reduced the embedding dimension to 10, and it runs.
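Why shrinking the embedding dimension helps: the network emits a D-dimensional embedding for every time-frequency bin, so the embedding tensor scales linearly in D. A rough back-of-the-envelope estimate (batch size, frame count, and frequency-bin count below are illustrative guesses, not values from the repo):

```python
# Hypothetical activation-size estimate for the per-bin embedding tensor.
def embedding_bytes(batch=128, frames=100, freq_bins=129,
                    embed_dim=40, bytes_per_float=4):
    # Output shape: batch x frames x freq_bins x embed_dim (float32).
    return batch * frames * freq_bins * embed_dim * bytes_per_float

full = embedding_bytes(embed_dim=40)
small = embedding_bytes(embed_dim=10)
print(full, small, full // small)  # cutting D from 40 to 10 shrinks this tensor 4x
```

This only accounts for one activation tensor; gradients and LSTM states add more, but the linear scaling in D is the same.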

I'll also try with a smaller dataset.

I'll do some memory usage monitoring at some point and post the stats.

Thanks for the quick answer!