openai / jukebox

Code for the paper "Jukebox: A Generative Model for Music"
https://openai.com/blog/jukebox/

GPU out-of-memory on second iteration of co-composing 1B #32

Open RobertRankinTR opened 4 years ago

RobertRankinTR commented 4 years ago

The GPU keeps hitting out-of-memory errors on the second iteration of co-composing with the 1B model. Are there memory-management steps (clearing caches, deleting unwanted samples) that can be run between co-composing iterations to free up memory without breaking the chain of desired conditioning?
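One pattern that sometimes helps between iterations is a sketch like the following. This is not from the Jukebox codebase: the function name and arguments here are hypothetical, and it assumes PyTorch (which Jukebox uses). The idea is to drop Python references to tensors you no longer need, force garbage collection, then ask CUDA to release its cached blocks.

```python
import gc


def free_memory_between_iterations(keep, discard):
    """Hypothetical helper: drop references to unwanted sample tensors,
    then release cached GPU memory. `keep`/`discard` are dicts of tensors;
    only the tensors in `keep` survive to condition the next iteration."""
    for name in list(discard):
        del discard[name]              # drop Python references first
    gc.collect()                       # collect cycles still holding tensors

    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()   # return cached blocks to the driver
    except ImportError:
        pass                           # no torch: nothing GPU-side to free

    return keep
```

Note that `torch.cuda.empty_cache()` only releases memory PyTorch has cached but is not using; tensors that are still referenced anywhere in Python (including in a notebook's output history) cannot be freed, which is why deleting the references has to come first.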

GregLouis commented 4 years ago

Got the same problem, any clue?

stevedipaola commented 4 years ago

Same here.

xandramax commented 4 years ago

Related #51

Working with 8 GB of VRAM, I'm managing to get 1B co-composition through ~64 seconds of generated audio (~15 iterations) before it chokes. These are the settings I'm currently using:

```python
hps.n_samples = 3
hps.sample_length = 786432
chunk_size = 32
max_batch_size = 3
```
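For context, `sample_length` counts raw audio samples, so assuming Jukebox's default 44.1 kHz sample rate (an assumption; check `hps.sr` in your setup), the setting above works out to roughly 17.8 seconds of audio per generation window:

```python
# Rough arithmetic, assuming the default 44.1 kHz sample rate (hps.sr).
sample_length = 786432                  # raw audio samples, as set above
sample_rate = 44100                     # Hz; assumed default
seconds = sample_length / sample_rate
print(round(seconds, 1))                # ~17.8 seconds per window
```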
xandramax commented 4 years ago

Something curious I've noticed, which is perhaps unrelated: level 2 wav files are written at a higher bit depth than the level 1 and level 0 upsampled wav files. The level 2 files store 32 bits per sample, whereas levels 1 and 0 store 16 bits per sample. The result is that level 2 files take up twice as much space on disk.