browhattheheck1 opened 4 years ago
I've been hammering away at getting this running on macOS and Windows and can't get it going. It seems to be Linux-only based on the code's coupling to distributed PyTorch, but I could be very wrong; huge noob in Python / ML.
Correction: I am now much closer than I was before to getting this running. If I succeed I'll post the steps.
I've managed to run inference on Windows.
The main problem is the coupling with torch.distributed, which is not needed for inference, so:
- replace get_rank and get_world_size calls with stubs; since you are running on a single node, your rank is always 0 and the world size is 1
- remove barrier calls along the execution path, for the same reason
- in logger.py, because of the reduce logic in there, change op=dist.ReduceOp.SUM to op=None
- in artist_genre_processor.py, you have to provide an explicit encoding="utf-8" when opening the labels file

The code looks super clumsy now, but it works.
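The stubs described above can be sketched as follows. The first three function names follow the torch.distributed calls mentioned in the thread; load_labels and labels_path are hypothetical names standing in for the actual code in artist_genre_processor.py:

```python
# Single-node replacements for the torch.distributed calls.

def get_rank():
    # Single node: this process is always rank 0.
    return 0

def get_world_size():
    # Single node: exactly one process.
    return 1

def barrier():
    # No other processes to synchronize with, so this becomes a no-op.
    pass

# artist_genre_processor.py: open the labels file with an explicit encoding.
# Without it, Windows falls back to the locale codec (often cp1252) and
# non-ASCII artist/genre names fail to decode.
def load_labels(labels_path):
    with open(labels_path, encoding="utf-8") as f:
        return f.read().splitlines()
```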
I've arrived at the same place haha. Are you able to run samples and get audio out? My computer is suddenly having a RAM issue and now won't start so this whole experiment is on pause for me :(
Yep, I've managed to get the audio with 1b_lyrics and n_samples=1. PC configuration is an RTX 2070 + i7-4790, 24 GB RAM.
It took significantly more time than stated in the article: about 4 hours for 10 seconds of audio.
Understandable, since I believe the times in the article were generated using a Titan X. Do you have a fork I could try?
Not at the moment, unfortunately; I need to tidy up the code a bit. I think I'll have the Windows version ready by the end of the week.
I've added a Windows-compatible version that I have up and running with sampling. Updated the readme: https://github.com/peterlazzarino/jukebox
@peterlazzarino I had to manually download the models, but this is working for me. Thanks!
@btrude good catch, updated the readme
How do you manually download the models? (sorry, not very technical here lol)
Hey ocuvox. When you get it running it will print messages in the console; you should see "Downloading from gce" and "using cuda True". It will then hit an error if you don't have the models. It will look something like this:
['wget', '-q', '-O', 'C:\\Users\\peter/.cache\\jukebox-assets/models/5b/vqvae.pth.tar', 'https://storage.googleapis.com/jukebox-assets/models/5b/vqvae.pth.tar'] Traceback (most recent call last): File "jukebox/sample.py", line 220, in <module>
The storage.googleapis.com URL is the link you paste into your browser to download the file it expects, and the local machine path is where you put it. The folder, like C:\\Users\\peter/.cache\\jukebox-assets/models/5b/, has already been created; you just need to go there and drop the file in once the download finishes.
These are large files (2-10 GB each), and four of them are needed to run the 1b_lyrics model.
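If you'd rather not click through each URL, the standard library can do the download instead of wget. A minimal sketch, assuming the cache layout shown in the error message above (fetch_model is a hypothetical helper, not part of the repo):

```python
import os
import urllib.request

def fetch_model(url, cache_root="~/.cache/jukebox-assets"):
    """Download a checkpoint into the local cache if it is not already there."""
    # Derive the local path from the URL tail, e.g. "models/5b/vqvae.pth.tar".
    rel = url.split("jukebox-assets/")[-1]
    dest = os.path.join(os.path.expanduser(cache_root), *rel.split("/"))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if not os.path.exists(dest):
        # Beware: the real checkpoints are 2-10 GB each, so this takes a while.
        urllib.request.urlretrieve(url, dest)
    return dest
```

Calling it once per checkpoint URL (vqvae plus the prior levels for 1b_lyrics) should fill the cache the sampler expects.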
Does this support the latest Python version?
Thanks @peterlazzarino! I had a hard time setting it up, but I got it running on Windows with 1b_lyrics. I'm a total beginner, so I hope you people with good knowledge will go on and adapt the rest! Best
@peterlazzarino Would it be possible to enable issues on the fork?
On Windows you can avoid downloading the models manually if you have wget.exe in your PATH. The model download fails because it shells out to wget, which is mostly used on Linux, but Windows binaries do exist.
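One way to check this up front, using only the standard library (wget_available is a hypothetical helper): shutil.which resolves a command name against PATH the same way a subprocess call to wget would, including wget.exe on Windows.

```python
import shutil

def wget_available():
    # shutil.which returns the full path to the executable, or None if the
    # command is not on PATH; on Windows it also tries PATHEXT suffixes
    # such as .exe, so a wget.exe on PATH is found.
    return shutil.which("wget") is not None
```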
@camjac251 enabled, thanks
So the main repo should support inference on Windows now, except for downloading the models; I'll fix that by the end of this week.
@gnhdnb great to hear!
I wanted to install this but I can't figure out how to do it at all, and a lot of the commands seem to be Linux-only, so idk. If anyone could tell me how to install it like I'm a 5-year-old child, that would help lmao