yl4579 / AuxiliaryASR

Joint CTC-S2S Phoneme-level ASR for Voice Conversion and TTS (Text-Mel Alignment)
MIT License

Multiple GPU training and changing to librosa mel spec? #9

Closed crypticsymmetry closed 1 year ago

crypticsymmetry commented 1 year ago

Hello again! Is there multi-GPU training support for this repo? Also, do you have any training logs I can compare against? Thanks!

I'll also post this question here since it applies to this repo:

Should I convert from torchaudio to librosa in AuxiliaryASR and PitchExtractor, or just leave it with torchaudio? Something like this (ChatGPT-converted)?

```python
import os.path as osp

import librosa
import numpy as np
import torch

DEFAULT_DICT_PATH = osp.join(osp.dirname(__file__), 'word_index_dict.txt')
SPECT_PARAMS = {
    "n_fft": 1024,
    "win_length": 1024,
    "hop_length": 256
}
MEL_PARAMS = {
    "n_mels": 80,
    "n_fft": 1024,
    "win_length": 1024,
    "hop_length": 256
}

class MelDataset(torch.utils.data.Dataset):
    def __init__(self, data_list, dict_path=DEFAULT_DICT_PATH, sr=24000):
        spect_params = SPECT_PARAMS
        mel_params = MEL_PARAMS

        _data_list = [l[:-1].split('|') for l in data_list]
        self.data_list = [data if len(data) == 3 else (*data, 0) for data in _data_list]
        self.text_cleaner = TextCleaner(dict_path)
        self.sr = sr

        # librosa.feature.melspectrogram is a plain function, not a transform
        # object like torchaudio.transforms.MelSpectrogram, so wrap it in a
        # callable instead of invoking it here with no audio
        self.to_melspec = lambda wave: librosa.feature.melspectrogram(
            y=wave, sr=self.sr, **MEL_PARAMS)
        self.mean, self.std = -4, 4
```
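One thing worth sanity-checking after a swap like this: with the default centered STFT, both torchaudio and librosa produce `1 + num_samples // hop_length` frames, so the output shapes should match exactly. A minimal, library-free sketch of that shape math (the helper name is my own, not from the repo):

```python
def expected_melspec_shape(num_samples, n_mels=80, n_fft=1024,
                           hop_length=256, center=True):
    """Shape (n_mels, n_frames) a mel spectrogram should have.

    Both torchaudio.transforms.MelSpectrogram and
    librosa.feature.melspectrogram use a centered STFT by default.
    """
    if center:
        n_frames = 1 + num_samples // hop_length
    else:
        n_frames = 1 + (num_samples - n_fft) // hop_length
    return (n_mels, n_frames)

# one second of 24 kHz audio with the MEL_PARAMS above
print(expected_melspec_shape(24000))  # (80, 94)
```

Matching shapes does not guarantee matching values, though: torchaudio defaults to HTK-style mel filters with no filter normalization, while librosa defaults to Slaney-style filters with `norm='slaney'`, so the two backends can differ numerically unless those options are aligned.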
crypticsymmetry commented 1 year ago

Got multi-GPU training (2x NVIDIA T4 GPUs) working with the Colossal-AI library (https://colossalai.org/). Not sure if it's doing much, but iters/s is a bit faster: it went from roughly 1-1.35 it/s to 1.5-2 it/s. Does that seem right? Just curious. Edit: it probably didn't do much.

```python
import logging
import os
import os.path as osp
import shutil
from logging import StreamHandler

import click
import colossalai
import torch
import yaml

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = StreamHandler()
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)

torch.backends.cudnn.benchmark = True

@click.command()
@click.option('-p', '--config_path', default='Configs/config.yml', type=str)
def main(config_path):
    colossalai.launch_from_torch(config="Configs/config.py")
    config = yaml.safe_load(open(config_path))
    log_dir = config['log_dir']
    if not osp.exists(log_dir):
        os.mkdir(log_dir)
    shutil.copy(config_path, osp.join(log_dir, osp.basename(config_path)))
    ...
```
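For what it's worth, `launch_from_torch` takes its rank, world size, and rendezvous address from the environment variables that `torchrun` (or `torch.distributed.launch`) exports, rather than from the scalars at the bottom of `config.py`. A small sketch of that convention (the helper is hypothetical, just to show which variables matter):

```python
import os

def rendezvous_from_env(env=None):
    # the torch launcher exports these; colossalai.launch_from_torch
    # reads them instead of rank/world_size/host/port in config.py
    env = os.environ if env is None else env
    return {
        "rank": int(env.get("RANK", 0)),
        "world_size": int(env.get("WORLD_SIZE", 1)),
        "host": env.get("MASTER_ADDR", "localhost"),
        "port": int(env.get("MASTER_PORT", 29500)),
    }

# what a 2-GPU torchrun launch would export for rank 1
print(rendezvous_from_env({"RANK": "1", "WORLD_SIZE": "2",
                           "MASTER_ADDR": "localhost", "MASTER_PORT": "29500"}))
```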

config.py

```python
from colossalai.amp import AMP_TYPE

fp16 = dict(
    mode=AMP_TYPE.TORCH
    # below are default values for grad scaler
)

parallel = dict(
    tensor=dict(size=2, mode='1d')
)

gradient_accumulation = 4
clip_grad_norm = 1.0

rank = 0
world_size = 1
host = "localhost"
port = 29500
```
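One note on the settings above: with 2 GPUs and `tensor=dict(size=2)`, all of the parallelism goes to tensor parallelism, so the data parallel size stays 1 (matching the "data parallel size: 1" line in the logs), and `gradient_accumulation = 4` quadruples the effective batch size instead. A small sketch of that arithmetic (the per-GPU batch of 32 is a made-up example, not from the repo's config):

```python
def effective_batch_size(per_gpu_batch, grad_accum, world_size, tensor_parallel):
    # tensor parallelism splits the model, not the batch, so only the
    # data-parallel ranks multiply the batch size
    data_parallel = world_size // tensor_parallel
    return per_gpu_batch * grad_accum * data_parallel

# 2 GPUs, tensor parallel size 2, gradient_accumulation 4,
# hypothetical per-GPU batch of 32
print(effective_batch_size(32, 4, world_size=2, tensor_parallel=2))  # 128
```

This is one reason the 2-GPU run may not look much faster: the batch is not being split across the GPUs at all with this config.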

Logs:

/bin/bash: /opt/conda/lib/libtinfo.so.6: no version information available (required by /bin/bash)
[02/22/23 04:27:13] INFO     colossalai - colossalai - INFO:                    
                             /opt/conda/lib/python3.7/site-packages/colossalai/c
                             ontext/parallel_context.py:521 set_device          
[02/22/23 04:27:13] INFO     colossalai - colossalai - INFO:                    
                             /opt/conda/lib/python3.7/site-packages/colossalai/c
                             ontext/parallel_context.py:521 set_device          
                    INFO     colossalai - colossalai - INFO: process rank 1 is  
                             bound to device 1                                  
                    INFO     colossalai - colossalai - INFO: process rank 0 is  
                             bound to device 0                                  
[02/22/23 04:27:16] INFO     colossalai - colossalai - INFO:                    
                             /opt/conda/lib/python3.7/site-packages/colossalai/c
                             ontext/parallel_context.py:557 set_seed            
[02/22/23 04:27:16] INFO     colossalai - colossalai - INFO:                    
                             /opt/conda/lib/python3.7/site-packages/colossalai/c
                             ontext/parallel_context.py:557 set_seed            
                    INFO     colossalai - colossalai - INFO: initialized seed on
                             rank 1, numpy: 1024, python random: 1024,          
                             ParallelMode.DATA: 1024, ParallelMode.TENSOR:      
                             1025,the default parallel seed is                  
                             ParallelMode.DATA.                                 
                    INFO     colossalai - colossalai - INFO: initialized seed on
                             rank 0, numpy: 1024, python random: 1024,          
                             ParallelMode.DATA: 1024, ParallelMode.TENSOR:      
                             1024,the default parallel seed is                  
                             ParallelMode.DATA.                                 
                    INFO     colossalai - colossalai - INFO:                    
                             /opt/conda/lib/python3.7/site-packages/colossalai/i
                             nitialize.py:120 launch                            
                    INFO     colossalai - colossalai - INFO: Distributed        
                             environment is initialized, data parallel size: 1, 
                             pipeline parallel size: 1, tensor parallel size: 2 
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  cpuset_checked))
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  cpuset_checked))
{'max_lr': 0.0001, 'pct_start': 0.0, 'epochs': 200, 'steps_per_epoch': 15502}
[train]:   0%|                                        | 0/15502 [00:00<?, ?it/s]
/kaggle/AuxiliaryASR/AuxiliaryASR/trainer.py:158: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  mel_input_length = mel_input_length // (2 ** self.model.n_down)
[train]:   0%|                             | 1/15502 [00:13<56:25:46, 13.11s/it]
[train]:   0%|                             | 4/15502 [00:18<14:02:20,  3.26s/it]
[train]:   0%|                             | 4/15502 [00:18<14:09:01,  3.29s/it]
[train]:   0%|                             | 5/15502 [00:20<12:39:14,  2.94s/it]
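The `__floordiv__` warning in the logs above is harmless here: the mel lengths are non-negative, and truncating and floor division agree for non-negative operands. Replacing the flagged line with `torch.div(mel_input_length, 2 ** self.model.n_down, rounding_mode='floor')` silences it. A plain-Python sketch of the distinction the warning is about:

```python
import math

def trunc_div(a, b):
    # what torch's deprecated tensor // currently does: round toward zero
    return int(a / b)

def floor_div(a, b):
    # torch.div(a, b, rounding_mode='floor') semantics: round toward -inf
    return math.floor(a / b)

# identical for the non-negative lengths in trainer.py...
print(trunc_div(300, 4), floor_div(300, 4))   # 75 75
# ...but they diverge for negative values, hence the warning
print(trunc_div(-7, 2), floor_div(-7, 2))     # -3 -4
```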
yl4579 commented 1 year ago

Not sure if it works, but the simplest way is https://github.com/yl4579/StarGANv2-VC/issues/4