mir-aidj / all-in-one

All-In-One Music Structure Analyzer
http://arxiv.org/abs/2307.16425
MIT License
370 stars 35 forks

Specifying device for inference gives error #18

Open uu95 opened 2 weeks ago

uu95 commented 2 weeks ago

First of all, great work!

I can't understand why, but when I try to specify a device like:

result = allin1.analyze(
    'Antonio_Vivaldi_Concerto.wav',
    device="cuda:2",
    overwrite=True,
    out_dir=output_dir,
)

the predictions are all weird and I get errors like "ValueError: zero-size array to reduction operation minimum which has no identity", because the model output looks like this:

[AnalysisResult(path=PosixPath('/home/ubaid/Music_Image_CM/MuIm_model/Music2Image/music2visual_story/output/Antonio_Vivaldi_Concerto.wav'), bpm=None, beats=[], downbeats=[], beat_positions=[], segments=[Segment(start=0.0, end=18.1, label='chorus'), Segment(start=18.1, end=53.94, label='chorus'), Segment(start=53.94, end=69.3, label='chorus'), Segment(start=69.3, end=84.66, label='chorus'), Segment(start=84.66, end=105.14, label='chorus'), Segment(start=105.14, end=135.86, label='chorus'), Segment(start=135.86, end=151.22, label='chorus'), Segment(start=151.22, end=181.94, label='chorus'), Segment(start=181.94, end=197.3, label='chorus'), Segment(start=197.3, end=217.78, label='chorus'), Segment(start=217.78, end=233.14, label='chorus'), Segment(start=233.14, end=248.5, label='chorus'), Segment(start=248.5, end=274.1, label='chorus'), Segment(start=274.1, end=304.82, label='chorus'), Segment(start=304.82, end=325.3, label='chorus'), Segment(start=325.3, end=373.3, label='chorus'), Segment(start=373.3, end=396.98, label='chorus'), Segment(start=396.98, end=417.46, label='chorus'), Segment(start=417.46, end=448.38, label='chorus')], activations=None, embeddings=None)]

However, when no device is given, it works normally. Can you explain why this is happening?

tae-jun commented 2 weeks ago

Hi, sorry, but I can't think of any reason for the issue.

Could you try specifying the device with an environment variable? For example: CUDA_VISIBLE_DEVICES=2
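If it helps, the variable can also be set from inside Python, as long as it happens before torch (or allin1) is imported; this is a sketch, and the device index 2 is just the one from your example:

```python
import os

# CUDA_VISIBLE_DEVICES must be set before torch/allin1 is imported,
# otherwise CUDA has already enumerated the devices and the setting
# has no effect on this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# From here on, the process sees only physical GPU 2, exposed as cuda:0,
# so allin1.analyze(...) with no device argument (or device="cuda:0")
# would target the intended card.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```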

uu95 commented 2 weeks ago

Thanks for your reply. Yes, that works. However, I need to use this in a pipeline where specific GPUs are assigned to different tasks. This model needs to run on one GPU, while other tasks use other GPUs. I've tried debugging but can't figure out the issue. It would be great if you could help debug and identify the cause.

uu95 commented 2 weeks ago

I was able to solve this by editing the run_inference() function in the helpers.py script to use the torch device context manager.

I replaced logits = model(spec) with:

# Perform inference within the context of the specified device
with torch.no_grad(), torch.cuda.device(device):
    logits = model(spec)
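A plausible cause (my assumption; the thread doesn't confirm it): some tensor inside the inference path is allocated with a bare "cuda" device and therefore lands on the current device, cuda:0 by default, rather than on cuda:2 where the model lives. torch.cuda.device changes that default for the enclosed block. A minimal sketch of the mechanism, guarded so it is a no-op without enough GPUs:

```python
try:
    import torch
    # The demonstration needs at least three GPUs (indices 0..2).
    HAVE_GPUS = torch.cuda.is_available() and torch.cuda.device_count() > 2
except ImportError:
    HAVE_GPUS = False

if HAVE_GPUS:
    # A bare "cuda" allocation goes to the current device, index 0 by default...
    assert torch.empty(1, device="cuda").device.index == 0
    # ...but inside torch.cuda.device(2) the current device is index 2,
    # matching a model that was moved to cuda:2.
    with torch.cuda.device(2):
        assert torch.empty(1, device="cuda").device.index == 2
```

If that is what's going on, the context manager makes any device-less allocations inside model(spec) land on the same GPU as the model, which would explain why the degenerate predictions disappear.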

tae-jun commented 2 weeks ago

Oh, wow, thanks for the fix!

Would you like to contribute to this repo by opening a PR?

Otherwise, I can fix it myself.