When I try to compute d-vectors before training a YourTTS model, I get the following error:
RuntimeError: CUDA error: out of memory
Has anyone run into this and found a fix?
Full traceback:
/home/vlasova/anaconda3/envs/train_yourtts_env_38/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
File "TTS/bin/compute_embeddings.py", line 46, in <module>
speaker_manager = SpeakerManager(
File "/home/vlasova/projects/speechdetox_eng/train_yourtts_ru/Coqui-TTS/TTS/tts/utils/speakers.py", line 84, in __init__
self.init_speaker_encoder(encoder_model_path, encoder_config_path)
File "/home/vlasova/projects/speechdetox_eng/train_yourtts_ru/Coqui-TTS/TTS/tts/utils/speakers.py", line 274, in init_speaker_encoder
self.speaker_encoder.load_checkpoint(config_path, model_path, eval=True, use_cuda=self.use_cuda)
File "/home/vlasova/projects/speechdetox_eng/train_yourtts_ru/Coqui-TTS/TTS/speaker_encoder/models/resnet.py", line 244, in load_checkpoint
self.cuda()
File "/home/vlasova/anaconda3/envs/train_yourtts_env_38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 491, in cuda
return self._apply(lambda t: t.cuda(device))
File "/home/vlasova/anaconda3/envs/train_yourtts_env_38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/vlasova/anaconda3/envs/train_yourtts_env_38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 409, in _apply
param_applied = fn(param)
File "/home/vlasova/anaconda3/envs/train_yourtts_env_38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 491, in <lambda>
return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
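For context on why an sm_86 card trips this: the UserWarning above is PyTorch reporting that the installed wheel only ships kernels for sm_37 through sm_70, while an RTX 3090 is compute capability sm_86, so the "out of memory" is likely a symptom of the architecture mismatch rather than actual memory pressure. A minimal stand-alone sketch of that compatibility check (the function name and the simplified same-major rule are mine, not torch's actual API):

```python
# Minimal stand-alone sketch of the compatibility check behind PyTorch's
# UserWarning (the function name and the simplified same-major rule are
# mine, not torch's actual API).

def is_supported(capability, arch_list):
    """Return True if a wheel compiled for `arch_list` can run on a GPU
    with compute `capability`.

    capability: (major, minor) of the GPU, e.g. (8, 6) for an RTX 3090.
    arch_list: architectures baked into the wheel, e.g. ["sm_37", "sm_70"].
    """
    compiled = [(int(a[3]), int(a[4:])) for a in arch_list]
    # CUDA binaries are only compatible within the same major capability,
    # so a wheel whose newest kernels are sm_70 has nothing for sm_86.
    return any(major == capability[0] for major, _ in compiled)

# The wheel from the traceback vs. an RTX 3090 (sm_86):
print(is_supported((8, 6), ["sm_37", "sm_50", "sm_60", "sm_70"]))  # → False
```

If that is the cause here, reinstalling a PyTorch build compiled against CUDA 11.x (which includes sm_86 kernels), following the pytorch.org link in the warning, should resolve it.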