kcrosley-leisurelabs opened this issue 4 years ago
Same problem here. I think this is because of the progress output, which "rewrites itself" but still adds to the output byte count. You can see it replay the whole thing if your connection is interrupted and the tab reloads.
Is there a way to disable the progress output? I had a look in sample.py but couldn't see how the output is generated.
It looks like the output is provided by tqdm, in logger.py:
```python
def def_tqdm(x):
    return tqdm(x, leave=True, file=sys.stdout,
                bar_format="{n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]")
```
Looking around for nice interfaces to disable it...
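One option, untested on my end: tqdm takes a `disable` kwarg, so def_tqdm in logger.py could be patched to suppress the bar entirely (assuming nothing downstream depends on the progress text). A rough sketch:

```python
# Hedged sketch: switch off the tqdm bar entirely via its `disable` kwarg.
# Based on the def_tqdm shown above; not the project's official fix.
import sys
from tqdm import tqdm

def def_tqdm(x):
    return tqdm(
        x,
        leave=True,
        file=sys.stdout,
        disable=True,  # emit no progress output at all
        bar_format="{n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]",
    )
```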
OK, I forked the repo and changed the update interval from 0.1 seconds to 10 seconds. I'm running it now in Colab Pro and am seeing far fewer status updates. (Side benefit: much lower browser CPU usage.)
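Roughly, the change amounts to passing a larger `mininterval` to tqdm in def_tqdm (tqdm refreshes the bar every `mininterval` seconds, 0.1 by default). Something along these lines, as a sketch rather than the exact diff in the fork:

```python
# Sketch of throttling tqdm's refresh rate; the fork's actual diff may differ slightly.
import sys
from tqdm import tqdm

def def_tqdm(x):
    return tqdm(
        x,
        leave=True,
        file=sys.stdout,
        mininterval=10,  # refresh at most every 10 seconds instead of the 0.1 s default
        bar_format="{n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]",
    )
```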
You can try it out too by factory resetting your runtime and changing the pip install line to this:
```
!pip install git+https://github.com/combs/jukebox.git
```
Hey @combs, thanks for looking into this! I've not checked out your clone yet, but appreciate the info!
(BTW, you've probably seen this, but lots of discussion of mods to the Colab notebook over in #40. Also, while there's not much there just yet, I'm posting interesting jukebox output over in my repo here: https://github.com/kcrosley-leisurelabs/jukebox-renders)
Cheers, Keith
You guys got it to work in Colab? Can you send me a link to your working notebooks?
@Tylersuard, I think the issue here is merely one of logging output... it seems the process is still running, if I understand correctly. Let it rip... (I think). I currently have a session like this running; rather than interrupt it, I think I'll just let it go and see what happens.
But see #40 for nifty mods to the original Colab notebook that enable picking up where you left off and a lot more...
Update: Yeah, when you hit this "output size limit" issue, upsampling gets stuck and does not complete. Trying @combs's solution of reducing the log frequency on my own copy of the repo. (Thanks again for the tip!)
I'm seeing conflicting info about whether or not the process is still running in the background, is it?
Yes, I've had more than one upsampling job that continued running to completion long after receiving the "Buffered data was truncated..." message.
So, what is this? (I mean, I know what it is, because it says so.) But is there a solution? Is the problem here that one has simply run out of (GPU?) memory? I get this sometimes during Level 0 rendering on Colab (and I use Colab Pro).
Would limiting the batch size (which I have set to 3) help? It's a bummer when this happens.
Anybody with any insights, do let me know!
Thanks!