openai / jukebox

Code for the paper "Jukebox: A Generative Model for Music"
https://openai.com/blog/jukebox/

Error importing torch #80

Open tobeyond opened 4 years ago

tobeyond commented 4 years ago

When I try:

C:\Users\...\jukebox>python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --sample_length_in_seconds=20 --total_sample_length_in_seconds=180 --sr=44100 --n_samples=1 --hop_fraction=0.5,0.5,0.125

I get this:

Traceback (most recent call last):
  File "jukebox/sample.py", line 3, in <module>
    import torch as t
  File "C:\Users\...\.conda\envs\jukebox\lib\site-packages\torch\__init__.py", line 81, in <module>
    ctypes.CDLL(dll)
  File "C:\Users\...\.conda\envs\jukebox\lib\ctypes\__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found

Any ideas?

johndpope commented 4 years ago

Do you have an NVIDIA card? CUDA? It looks like PyTorch didn't install properly. UPDATE: https://stackoverflow.com/questions/61488902/cannot-import-pytorch-winerror-126-the-specified-module-could-not-be-found
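
A quick way to check from Python (a minimal sketch; it assumes the torch import itself succeeds, which the traceback above shows it currently does not):

```python
# Minimal check of whether this PyTorch build can see an NVIDIA GPU.
# If "import torch" itself still fails with WinError 126, fix the install
# first (e.g. reinstall PyTorch per the Stack Overflow link above).
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True only with a CUDA build + NVIDIA driver
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "GeForce GTX 1070"
```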

tobeyond commented 4 years ago

Not sure. How do I know that?

I proceeded with conda install pytorch torchvision cpuonly -c pytorch

Now I get this:

Using cuda False
Traceback (most recent call last):
  File "jukebox/sample.py", line 275, in <module>
    fire.Fire(run)
  File "C:\Users\...\.conda\envs\jukebox\lib\site-packages\fire\core.py", line 127, in Fire
    component_trace = _Fire(component, args, context, name)
  File "C:\Users\...\.conda\envs\jukebox\lib\site-packages\fire\core.py", line 366, in _Fire
    component, remaining_args)
  File "C:\Users\...\.conda\envs\jukebox\lib\site-packages\fire\core.py", line 542, in _CallCallable
    result = fn(*varargs, **kwargs)
  File "jukebox/sample.py", line 267, in run
    rank, local_rank, device = setup_dist_from_mpi(port=port)
  File "c:\users\...\jukebox\jukebox\utils\dist_utils.py", line 55, in setup_dist_from_mpi
    torch.cuda.set_device(local_rank)
  File "C:\Users\...\.conda\envs\jukebox\lib\site-packages\torch\cuda\__init__.py", line 245, in set_device
    torch._C._cuda_setDevice(device)
AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'

tobeyond commented 4 years ago

I also tried adding --device cpu and --gpu_ids -1 to my command line, but that didn't work. Neither did changing torch._C._cuda_setDevice(device) to torch._C._cuda_setDevice(-1).
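
For context: the AttributeError comes from jukebox/utils/dist_utils.py calling torch.cuda.set_device() unconditionally, and a cpuonly build of torch has no CUDA bindings at all, so no command-line flag reaches that code path in time. A minimal sketch of a guard (hypothetical and untested; even with it, CPU sampling would be impractically slow):

```python
# Hypothetical guard (untested) around the failing call in
# jukebox/utils/dist_utils.py: only touch CUDA when it is actually usable.
import torch

def pick_device(local_rank):
    if torch.cuda.is_available():
        torch.cuda.set_device(local_rank)  # the line that crashes on cpuonly builds
        return torch.device("cuda", local_rank)
    # cpuonly builds lack torch._C._cuda_setDevice, hence the AttributeError;
    # fall back to the CPU instead of crashing.
    return torch.device("cpu")
```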

johndpope commented 4 years ago

Frankly, if you have an AMD/Radeon card or are using a Mac, then this project isn't for you unless you get an NVIDIA card. People are struggling to run this on cards with 8 GB of GPU RAM. Running on a CPU would take days to create seconds of audio. Use the Google Colab / TPU route if you want to get results today.

tobeyond commented 4 years ago

I'm on Windows 10 with 16 GB of RAM and an Intel Core i7. Do you think it will work?

johndpope commented 4 years ago

I'm not talking about desktop RAM; it's GPU RAM. You need a high-end graphics card, the highest end even: a V100 (Volta) or an RTX 5000. It works if you run it on Google Colab; they use Google's TPUs.

tobeyond commented 4 years ago

I don't have a dedicated graphics card. Is it mandatory?

combs commented 4 years ago

Yes. They say it requires a graphics card (GPU) with 16 GB of RAM (around USD $8,000). Or you can play with it in Google Colab for $0 and a lot of patience.

xandramax commented 4 years ago

they say it requires a graphics card (GPU) with 16gb of ram

The stated 16 GB requirement is for the 5b model. That model generates with better quality, but it was also trained on fewer artists and genres (the v2 artist/genre IDs are for 5b and the v3 IDs are for 1b).

The 1b model might not satisfy you if you're after something like one-click generation of Kanye performing Eminem, but I find it's very exciting for co-composition, given a little patience. I run batches of three four-second samples at a time when co-composing, and I almost always find something interesting within two or three batches.

Speaking from that experience, it's very much possible to explore the 1b model with an RTX 2080 Super, a card with only 8 GB of VRAM. Having watched the GPU memory usage and run into out-of-memory errors when attempting larger batch sizes, I believe 8 GB is likely the minimum possible.

That means it is probably possible to run the 1b model on a card as old as a GTX 1070 (also 8 GB of VRAM).

With a 1080 Ti or 2080 Ti (11 GB of VRAM), or a Titan X or Titan V (12 GB of VRAM), it might even be possible to run the 5b model with a small batch size, but that's a total guess; it would have to be tested.

The Titan RTX has 24 GB of VRAM and costs about $2.5k, much less than the $8k for a V100.

Also worth considering is NVIDIA's stated "compute capability" for its various cards (viewable on NVIDIA's site). The V100 is listed with a compute capability of 7.0, whereas the RTX line is 7.5. For comparison, the K80 that Colab sometimes hands out has a compute capability of only 3.7 (still enough for exploring 1b), and a GTX 1070 has compute capability 6.1.

All that to say: yes, a dedicated graphics card is mandatory, but you may be able to spend as little as ~$300 to get going at home with the 1b model on a 1070.
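
If you want to see where your own card falls, here is a short sketch (the 8 GB and 16 GB thresholds are this thread's estimates, not official numbers):

```python
# Report the local GPU's VRAM and compute capability, then compare them
# against the rough requirements discussed in this thread.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM, "
          f"compute capability {props.major}.{props.minor}")
    print("1b likely feasible:", vram_gb >= 8)    # ~8 GB per this thread
    print("5b likely feasible:", vram_gb >= 16)   # ~16 GB per this thread
else:
    print("No CUDA GPU visible to PyTorch.")
```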

combs commented 4 years ago

thank you, that's good to know!

tobeyond commented 4 years ago

Thank you very much for the clarification. I've never used Google Colab and I'm a little confused here. Does anyone have a step-by-step explanation of how to run it there?

johndpope commented 4 years ago

https://www.youtube.com/results?search_query=google+colab&page=&utm_source=opensearch

cicinwad commented 2 years ago

I'm on Windows 10 with 16 GB of RAM and an Intel Core i7. Do you think it will work?

You need to be on Windows 8; it will work better.