This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also check our discussion channels.
Describe the bug
Using the model directly raises an error:
use_deepspeed = True
config = XttsConfig()
config.load_json(tts_path + "config_v1.json")
model = Xtts1.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir=tts_path, use_deepspeed=use_deepspeed, eval=True)
model.cuda(1)
To Reproduce
Traceback (most recent call last):
  File "/data/work/tts/TTS-dev-231007/test1.py", line 67, in <module>
    xtts_test()
  File "/data/work/tts/TTS-dev-231007/test1.py", line 42, in xtts_test
    outputs = model.synthesize(
  File "/data/work/tts/TTS-dev-231007/TTS/tts/models/xtts1.py", line 451, in synthesize
    return self.inference_with_config(text, config, ref_audio_path=speaker_wav, language=language, **kwargs)
  File "/data/work/tts/TTS-dev-231007/TTS/tts/models/xtts1.py", line 473, in inference_with_config
    return self.full_inference(text, ref_audio_path, language, **settings)
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/work/tts/TTS-dev-231007/TTS/tts/models/xtts1.py", line 558, in full_inference
    return self.inference(
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/work/tts/TTS-dev-231007/TTS/tts/models/xtts1.py", line 620, in inference
    gpt_codes = self.gpt.generate(
  File "/data/work/tts/TTS-dev-231007/TTS/tts/layers/xtts/gpt.py", line 546, in generate
    gen = self.gpt_inference.generate(
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 1648, in generate
    return self.sample(
  File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 2730, in sample
    outputs = self(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/work/tts/TTS-dev-231007/TTS/tts/layers/xtts/gpt_inference.py", line 97, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 900, in forward
    outputs = block(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py", line 171, in forward
    self.attention(input,
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/deepspeed/ops/transformer/inference/ds_attention.py", line 154, in forward
    qkv_out = self.qkv_func(input=input,
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/deepspeed/ops/transformer/inference/op_binding/qkv_gemm.py", line 82, in forward
    output, norm = self.qkv_gemm_func(input, weight, q_scale, bias, gamma, beta, self.config.epsilon, add_bias,
ValueError: Specified device cuda:1 does not match device of data cuda:0
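For context: the failure happens inside DeepSpeed's fused QKV GEMM, where the weights were moved to cuda:1 via model.cuda(1) but DeepSpeed's inference kernels appear to have been set up on the default device cuda:0. A minimal workaround sketch, assuming the mismatch comes from DeepSpeed using the current default CUDA device, and using the stock XttsConfig/Xtts classes rather than the local xtts1 module (tts_path below is a hypothetical placeholder):

import torch
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

tts_path = "/path/to/xtts/checkpoint/"  # hypothetical; point at your checkpoint dir

# Select the target GPU *before* the checkpoint (and DeepSpeed's kernels) are loaded,
# so the weights and DeepSpeed's workspace end up on the same device.
torch.cuda.set_device(1)

config = XttsConfig()
config.load_json(tts_path + "config_v1.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir=tts_path, use_deepspeed=True, eval=True)
model.cuda()  # no explicit index: the default device is now cuda:1

If this still allocates on cuda:0, an alternative is to restrict visibility with CUDA_VISIBLE_DEVICES=1 before starting Python, so that the only visible GPU is the one you want and everything lands on the same physical device.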
Expected behavior
No response
Logs
No response
Environment
Additional context
No response