The following error occurs when I change the default decoder to DVAE (inferring with `use_decoder=False`). Could it be caused by an incompatible version of `vector_quantize_pytorch==1.17.3`? However, I have also tried `vector-quantize-pytorch==1.16.1`, `vector-quantize-pytorch==1.15.5`, and `vector-quantize-pytorch==1.14.24`.
```
File "/workspace/ChatTTS/ChatTTS/model/dvae.py", line 95, in _embed
  feat = self.quantizer.get_output_from_indices(x)
File "/usr/local/lib/python3.10/dist-packages/vector_quantize_pytorch/residual_fsq.py", line 248, in get_output_from_indices
  outputs = tuple(rvq.get_output_from_indices(chunk_indices) for rvq, chunk_indices in zip(self.rvqs, indices))
File "/usr/local/lib/python3.10/dist-packages/vector_quantize_pytorch/residual_fsq.py", line 248, in <genexpr>
  outputs = tuple(rvq.get_output_from_indices(chunk_indices) for rvq, chunk_indices in zip(self.rvqs, indices))
File "/usr/local/lib/python3.10/dist-packages/vector_quantize_pytorch/residual_fsq.py", line 134, in get_output_from_indices
  codes = self.get_codes_from_indices(indices)
File "/usr/local/lib/python3.10/dist-packages/vector_quantize_pytorch/residual_fsq.py", line 120, in get_codes_from_indices
  all_codes = all_codes.masked_fill(rearrange(mask, 'b n q -> q b n 1'), 0.)
RuntimeError: expected self and mask to be on the same device, but got mask on cpu and self on cuda:0
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [2,0,0], thread: [36,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
```
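The `RuntimeError` itself points at a device mismatch: `masked_fill` requires the mask and the tensor to live on the same device, but here the mask is on the CPU while the codes are on `cuda:0`. Below is a minimal sketch of the failure mode and a guard; the helper name is mine, not part of the library, and this is only an illustration, not a confirmed fix for ChatTTS:

```python
import torch

def safe_masked_fill(x: torch.Tensor, mask: torch.Tensor, value: float = 0.0) -> torch.Tensor:
    # masked_fill raises "expected self and mask to be on the same device"
    # when, e.g., x sits on cuda:0 but mask was built on the CPU.
    # Moving the mask onto x's device first avoids that.
    return x.masked_fill(mask.to(x.device), value)

x = torch.randn(2, 3)                      # stand-in for the quantizer codes
mask = torch.tensor([[True, False, True],
                     [False, True, False]])
out = safe_masked_fill(x, mask)            # masked positions are set to 0.0
```

If this is the cause, the equivalent workaround on the ChatTTS side would presumably be to move the index tensor `x` onto the quantizer's device before calling `get_output_from_indices` in `dvae.py`'s `_embed`.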
Would you be able to offer me some suggestions, please?
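For reference, the CUDA assertion in the log (`index out of bounds`) suggests some quantizer indices fall outside the codebook range. A minimal diagnostic I could run, sketched with a placeholder codebook size (4096 here is only an assumption, not ChatTTS's actual value):

```python
import torch

def indices_in_range(indices: torch.Tensor, codebook_size: int) -> bool:
    # True when every index is a valid row of the codebook; out-of-range
    # values are what trip the CUDA device-side "index out of bounds" assert.
    return bool((indices >= 0).all()) and bool((indices < codebook_size).all())

# Example with a placeholder codebook size of 4096.
good = torch.randint(0, 4096, (1, 512, 2))
bad = good.clone()
bad[0, 0, 0] = 4096                        # one out-of-range index

print(indices_in_range(good, 4096))        # True
print(indices_in_range(bad, 4096))         # False
```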