zhwongbo closed this issue 6 months ago.
me too.
When I train via the web UI, I also get the following error and don't know how to fix it:
File "/usr/lib/python3.10/multiprocessing/connection.py", line 448, in __init__
self._listener = SocketListener(address, family, backlog)
File "/usr/lib/python3.10/multiprocessing/connection.py", line 591, in __init__
self._socket.bind(address)
OSError: [Errno 38] Function not implemented
Did anyone find a workaround?
+1. All dependencies are installed correctly. After running go-web, the first two steps (dataset preprocessing and feature extraction) work fine; only training fails with the errno 38 error. I later tried switching Python to 3.8.10 and got the same error, so it doesn't seem to be a Python version problem. From the error log it looks like things go wrong at the "Creating converter" step? (I haven't found a solution; just adding this as extra information.)
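For anyone hitting this: judging by the tracebacks in this thread, both Errno 38 and Errno 95 are raised when `multiprocessing` tries to bind a Unix domain socket on a filesystem that doesn't support sockets (a mounted Google Drive, some Docker/WSL2 volumes). Here is a minimal diagnostic sketch, not part of RVC and with example paths only, to check whether a given directory can host Unix sockets:

```python
# Diagnostic sketch (not RVC code): check whether a directory's filesystem can
# host Unix domain sockets, which multiprocessing's Listener needs.
import os
import socket
import tempfile


def supports_unix_sockets(directory: str) -> bool:
    path = os.path.join(directory, ".sock_probe")
    try:
        os.unlink(path)
    except FileNotFoundError:
        pass
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind(path)  # may raise OSError 38/95 on unsupported filesystems
        return True
    except OSError as e:
        print(f"{directory}: bind failed with errno {e.errno} ({e.strerror})")
        return False
    finally:
        s.close()
        try:
            os.unlink(path)
        except FileNotFoundError:
            pass


if __name__ == "__main__":
    # Compare the process temp dir with the mounted Drive path from this thread.
    supports_unix_sockets(tempfile.gettempdir())
    supports_unix_sockets("/content/drive/MyDrive")
```

If the probe fails for the directory holding the repo (or for whatever `TMPDIR` points at), moving training off that filesystem is the most reliable fix, as described further down in this thread.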
I have a similar problem in WSL2 and Docker.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 244, in _feed
obj = _ForkingPickler.dumps(obj)
File "/usr/local/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/usr/local/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 370, in reduce_storage
df = multiprocessing.reduction.DupFd(fd)
File "/usr/local/lib/python3.10/multiprocessing/reduction.py", line 198, in DupFd
return resource_sharer.DupFd(fd)
File "/usr/local/lib/python3.10/multiprocessing/resource_sharer.py", line 53, in __init__
self._id = _resource_sharer.register(send, close)
File "/usr/local/lib/python3.10/multiprocessing/resource_sharer.py", line 76, in register
self._start()
File "/usr/local/lib/python3.10/multiprocessing/resource_sharer.py", line 126, in _start
self._listener = Listener(authkey=process.current_process().authkey)
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 448, in __init__
self._listener = SocketListener(address, family, backlog)
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 591, in __init__
self._socket.bind(address)
OSError: [Errno 95] Operation not supported
Traceback (most recent call last):
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/app/train_nsf_sim_cache_sid_load_pretrain.py", line 228, in run
train_and_evaluate(
File "/app/train_nsf_sim_cache_sid_load_pretrain.py", line 358, in train_and_evaluate
for batch_idx, info in data_iterator:
File "/usr/local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/usr/local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1328, in _next_data
idx, data = self._get_data()
File "/usr/local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1284, in _get_data
success, data = self._try_get_data()
File "/usr/local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1145, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 149) exited unexpectedly
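If you can't move the project off the mounted drive, a hedged workaround (my reading of the traceback, not an official fix) is to avoid the IPC path that needs those sockets: switch PyTorch's tensor-sharing strategy to `file_system`, or run the DataLoader with `num_workers=0`. A minimal sketch with a dummy dataset; wiring either option into RVC's training script is left to you:

```python
# Hedged workaround sketch (dummy dataset, not RVC's own code): avoid the
# multiprocessing machinery that fails with errno 38/95 on filesystems
# without Unix-socket support.
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, TensorDataset

# Option 1: share tensors through files instead of duplicated file descriptors,
# so DataLoader workers never start the resource_sharer's socket Listener.
mp.set_sharing_strategy("file_system")

# Option 2: num_workers=0 loads batches in the main process. Slower, but no
# worker IPC at all, so none of the failing code paths are reached.
dataset = TensorDataset(torch.randn(16, 8), torch.arange(16))
loader = DataLoader(dataset, batch_size=4, num_workers=0)

for x, y in loader:
    pass  # a real training step would go here
```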
Related issues: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/155 https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/165 https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/286 https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/635 https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/890
I bought a Google Colab subscription to be able to use RVC, and yet I get this annoying error: OSError: [Errno 38] Function not implemented. I don't have any background in coding, I'm just a frustrated musician. How can I solve this?
I stumbled upon this as well. For me it turned out that cloning the repo onto a mounted network drive (Google Drive) did not work. When I cloned the repo to the Colab native filesystem, it worked as expected.
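To spell that workaround out for other Colab users, here is a sketch based on the paths in this thread (not an official recipe): copy or clone the repo onto the local /content filesystem, which does support Unix sockets, and launch the WebUI from there; your dataset and weights can stay on Drive. The `infer-web.py` launcher name is the standard entry point; adjust if your copy differs.

```python
# Hedged Colab sketch: run RVC from the local filesystem instead of the Drive
# mount. Paths come from the logs in this thread; adjust them to your setup.
# A fresh `git clone` works just as well as the copy below.
import os
import shutil
import subprocess

DRIVE_REPO = "/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI"
LOCAL_REPO = "/content/Retrieval-based-Voice-Conversion-WebUI"

if not os.path.isdir(LOCAL_REPO):
    shutil.copytree(DRIVE_REPO, LOCAL_REPO)  # copy the repo off the Drive mount

os.chdir(LOCAL_REPO)
subprocess.run(["python", "infer-web.py"], check=True)  # launch the WebUI
```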
This issue was closed because it has been inactive for 15 days since being marked as stale.
/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI
Use Language: en_US
2023-06-05 08:45:52.699775: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-05 08:45:54.602041: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://c75a05a3a9ed8fa300.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run
gradio deploy
from Terminal to deploy to Spaces (https://huggingface.co/spaces)
start preprocess
['trainset_preprocess_pipeline_print.py', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/dataset/sunyanzi', '40000', '2', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi', 'False']
end preprocess
start preprocess
['trainset_preprocess_pipeline_print.py', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/dataset/sunyanzi', '40000', '2', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi', 'False']
end preprocess
['extract_f0_print.py', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi', '2', 'harvest']
todo-f0-1261
todo-f0-1260
f0ing,now-0,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/0_0.wav
f0ing,now-0,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/0_1.wav
f0ing,now-252,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/15_17.wav
f0ing,now-252,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/15_18.wav
f0ing,now-504,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/20_58.wav
f0ing,now-504,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/20_57.wav
f0ing,now-756,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/27_50.wav
f0ing,now-756,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/27_5.wav
f0ing,now-1008,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/3_21.wav
f0ing,now-1008,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/3_20.wav
f0ing,now-1260,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/9_9.wav
['extract_f0_print.py', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi', '2', 'harvest']
todo-f0-1261
f0ing,now-0,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/0_0.wav
todo-f0-1260
f0ing,now-0,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/0_1.wav
f0ing,now-252,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/15_17.wav
f0ing,now-252,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/15_18.wav
f0ing,now-504,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/20_58.wav
f0ing,now-504,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/20_57.wav
f0ing,now-756,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/27_50.wav
f0ing,now-756,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/27_5.wav
f0ing,now-1008,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/3_21.wav
f0ing,now-1008,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/3_20.wav
f0ing,now-1260,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/9_9.wav
2023-06-05 08:51:52.421884: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
['extract_feature_print.py', 'cuda:0', '1', '0', '0', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi', 'v1']
/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi
load model(s) from hubert_base.pt
2023-06-05 08:51:56 | INFO | fairseq.tasks.hubert_pretraining | current directory is /content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI
2023-06-05 08:51:56 | INFO | fairseq.tasks.hubert_pretraining | HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': 'metadata', 'fine_tuning': False, 'labels': ['km'], 'label_dir': 'label', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
2023-06-05 08:51:56 | INFO | fairseq.models.hubert.hubert | HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': default, 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.1, 'attention_dropout': 0.1, 'activation_dropout': 0.0, 'encoder_layerdrop': 0.05, 'dropout_input': 0.1, 'dropout_features': 0.1, 'final_dim': 256, 'untie_final_proj': True, 'layer_norm_first': False, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.1, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': False, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
move model to cuda
all-feature-2521
all-feature-done
['extract_f0_print.py', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi', '2', 'harvest']
todo-f0-1261
f0ing,now-0,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/0_0.wav
todo-f0-1260
f0ing,now-0,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/0_1.wav
f0ing,now-252,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/15_17.wav
f0ing,now-252,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/15_18.wav
f0ing,now-504,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/20_58.wav
f0ing,now-504,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/20_57.wav
f0ing,now-756,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/27_50.wav
f0ing,now-756,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/27_5.wav
f0ing,now-1008,all-1260,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/3_21.wav
f0ing,now-1008,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/3_20.wav
f0ing,now-1260,all-1261,-/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi/1_16k_wavs/9_9.wav
['extract_feature_print.py', 'cuda:0', '1', '0', '0', '/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi', 'v1']
/content/drive/MyDrive/Retrieval-based-Voice-Conversion-WebUI/logs/sunyanzi
load model(s) from hubert_base.pt
move model to cuda
all-feature-2521
all-feature-done
2023-06-05 08:52:04.399134: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:jaxlib.mlir._mlir_libs:Initializing MLIR with module: _site_initialize_0
DEBUG:jaxlib.mlir._mlir_libs:Registering dialects from initializer <module 'jaxlib.mlir._mlir_libs._site_initialize_0' from '/usr/local/lib/python3.10/dist-packages/jaxlib/mlir/_mlir_libs/_site_initialize_0.so'>
DEBUG:jax._src.path:etils.epath found. Using etils.epath for file I/O.
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
2023-06-05 08:52:11.796595: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:jaxlib.mlir._mlir_libs:Initializing MLIR with module: _site_initialize_0
DEBUG:jaxlib.mlir._mlir_libs:Registering dialects from initializer <module 'jaxlib.mlir._mlir_libs._site_initialize_0' from '/usr/local/lib/python3.10/dist-packages/jaxlib/mlir/_mlir_libs/_site_initialize_0.so'>
DEBUG:jax._src.path:etils.epath found. Using etils.epath for file I/O.
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
INFO:sunyanzi:{'train': {'log_interval': 200, 'seed': 1234, 'epochs': 20000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 5, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 12800, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0}, 'data': {'max_wav_value': 32768.0, 'sampling_rate': 40000, 'filter_length': 2048, 'hop_length': 400, 'win_length': 2048, 'n_mel_channels': 125, 'mel_fmin': 0.0, 'mel_fmax': None, 'training_files': './logs/sunyanzi/filelist.txt'}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [10, 10, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4], 'use_spectral_norm': False, 'gin_channels': 256, 'spk_embed_dim': 109}, 'model_dir': './logs/sunyanzi', 'experiment_dir': './logs/sunyanzi', 'save_every_epoch': 5, 'name': 'sunyanzi', 'total_epoch': 20, 'pretrainG': 'pretrained/f0G40k.pth', 'pretrainD': 'pretrained/f0D40k.pth', 'version': 'v1', 'gpus': '0', 'sample_rate': '40k', 'if_f0': 1, 'if_latest': 0, 'save_every_weights': '0', 'if_cache_data_in_gpu': 0}
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
gin_channels: 256 self.spk_embed_dim: 109
INFO:sunyanzi:loaded pretrained pretrained/f0G40k.pth pretrained/f0D40k.pth
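Separately from the errno failures, the UserWarning near the end of this log (4 DataLoader workers on a 2-CPU Colab runtime) can also stall training. A small hedged sketch for capping the worker count at the CPUs actually available; where RVC sets its own worker count is something you would have to locate in the training script yourself:

```python
# Hedged sketch (dummy dataset): keep DataLoader workers <= available CPUs so
# the "excessive worker creation" warning above doesn't turn into a freeze.
import os

import torch
from torch.utils.data import DataLoader, TensorDataset

cpu_count = os.cpu_count() or 1
num_workers = min(4, cpu_count)  # 4 is the value the warning complains about

dataset = TensorDataset(torch.randn(16, 8), torch.arange(16))
loader = DataLoader(dataset, batch_size=4, num_workers=num_workers)
print(f"Using {num_workers} DataLoader workers on a {cpu_count}-CPU runtime")
```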