jishengpeng / WavTokenizer

SOTA discrete acoustic codec models with 40 tokens per second for audio language modeling
MIT License

About inference on GPU #13

Open JohnFengNeumann opened 2 months ago

JohnFengNeumann commented 2 months ago

Hello, @jishengpeng. I'm testing your work, but I found that wavtokenizer.decode can't run inference on the GPU. Could you tell me how to fix this problem?

jishengpeng commented 2 months ago

To run inference on the GPU, simply change 'cpu' to 'cuda:0' in the device settings. Make sure that all inputs to the decoder, such as the features and the bandwidth_id tensor, are also placed on the GPU.
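As a minimal sketch, assuming the inference API shown in the README (`WavTokenizer.from_pretrained0802`, `encode_infer`, and `decode`) and with placeholder file paths, the CPU example can be adapted to the GPU like this:

```python
import torch
import torchaudio

from encoder.utils import convert_audio
from decoder.pretrained import WavTokenizer

# Use the GPU instead of the CPU for both the model and all decoder inputs.
device = torch.device("cuda:0")

# Placeholder paths -- substitute your own config, checkpoint, and audio files.
config_path = "./configs/wavtokenizer_config.yaml"
model_path = "./wavtokenizer_checkpoint.ckpt"
input_path = "./input.wav"
output_path = "./output.wav"

# Load the model and move it to the GPU.
wavtokenizer = WavTokenizer.from_pretrained0802(config_path, model_path)
wavtokenizer = wavtokenizer.to(device)

# Load and resample the audio, then move it to the same device as the model.
wav, sr = torchaudio.load(input_path)
wav = convert_audio(wav, sr, 24000, 1).to(device)

# bandwidth_id must also be on the GPU, otherwise decode will hit a device mismatch.
bandwidth_id = torch.tensor([0]).to(device)

# Encode to discrete tokens, then decode back to a waveform, all on the GPU.
features, discrete_code = wavtokenizer.encode_infer(wav, bandwidth_id=bandwidth_id)
audio_out = wavtokenizer.decode(features, bandwidth_id=bandwidth_id)

# torchaudio.save expects a CPU tensor, so move the output back before writing.
torchaudio.save(output_path, audio_out.cpu(), sample_rate=24000,
                encoding="PCM_S", bits_per_sample=16)
```

The key point is that the model, the input waveform, and bandwidth_id all end up on the same CUDA device before encode/decode are called.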