Streaming output was truncated; only the last 5000 lines are shown.
...(several hundred "Suc" lines omitted)
/content/dataset/998810017,004.wav->Suc.
end preprocess
2023-07-15 05:20:43 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 05:20:43 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
['extract_f0_print.py', '/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬', '2', 'harvest']
todo-f0-2735
f0ing,now-0,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/0_0.wav
/content/Retrieval-based-Voice-Conversion-WebUI/extract_f0_print.py:38: FutureWarning: Pass sr=16000 as keyword args. From version 0.10 passing these as positional arguments will result in an error
x, sr = librosa.load(path, self.fs) # , res_type='soxr_vhq'
todo-f0-2734
f0ing,now-0,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/0_2.wav
/content/Retrieval-based-Voice-Conversion-WebUI/extract_f0_print.py:38: FutureWarning: Pass sr=16000 as keyword args. From version 0.10 passing these as positional arguments will result in an error
x, sr = librosa.load(path, self.fs) # , res_type='soxr_vhq'
/content/Retrieval-based-Voice-Conversion-WebUI/extract_f0_print.py:89: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
f0_coarse = np.rint(f0_mel).astype(np.int)
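The two warnings above point at small source fixes in extract_f0_print.py. A sketch, assuming librosa >= 0.10 and NumPy >= 1.24; `path`, `self.fs`, and `f0_mel` are the script's own variables, and the sample values below are stand-ins:

```python
import numpy as np

# extract_f0_print.py line 38 -- librosa >= 0.10 requires sr as a keyword:
#   x, sr = librosa.load(path, sr=self.fs)   # instead of librosa.load(path, self.fs)

# extract_f0_print.py line 89 -- np.int was removed in NumPy 1.24; the
# builtin int (or an explicit np.int64) keeps the same behavior:
f0_mel = np.array([100.4, 200.6, 300.1])      # stand-in values
f0_coarse = np.rint(f0_mel).astype(int)       # was: .astype(np.int)
print(f0_coarse.tolist())                     # [100, 201, 300]
```

Both warnings are harmless at these library versions but become hard errors once librosa 0.10 / NumPy 1.24 land, so they are worth fixing.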
f0ing,now-547,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/1641_0.wav
f0ing,now-546,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/1640_2.wav
f0ing,now-1094,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2250_4.wav
f0ing,now-1092,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2250_1.wav
f0ing,now-1641,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2864_1.wav
f0ing,now-1638,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2861_1.wav
f0ing,now-2188,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/3499_1.wav
f0ing,now-2184,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/3492_1.wav
f0ing,now-2730,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/997_1.wav
['extract_f0_print.py', '/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬', '2', 'harvest']
todo-f0-2735
f0ing,now-0,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/0_0.wav
todo-f0-2734
f0ing,now-0,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/0_2.wav
f0ing,now-547,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/1641_0.wav
f0ing,now-546,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/1640_2.wav
f0ing,now-1094,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2250_4.wav
f0ing,now-1092,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2250_1.wav
f0ing,now-1641,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2864_1.wav
f0ing,now-1638,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2861_1.wav
f0ing,now-2730,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/997_1.wav
2023-07-15 06:37:20 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:37:20 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:37:23.568598: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-07-15 06:37:25 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
['extract_feature_print.py', 'cuda:0', '1', '0', '0', '/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬']
/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬
load model(s) from hubert_base.pt
2023-07-15 06:37:26 | INFO | fairseq.tasks.hubert_pretraining | current directory is /content/Retrieval-based-Voice-Conversion-WebUI
2023-07-15 06:37:26 | INFO | fairseq.tasks.hubert_pretraining | HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': 'metadata', 'fine_tuning': False, 'labels': ['km'], 'label_dir': 'label', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
2023-07-15 06:37:26 | INFO | fairseq.models.hubert.hubert | HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': default, 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.1, 'attention_dropout': 0.1, 'activation_dropout': 0.0, 'encoder_layerdrop': 0.05, 'dropout_input': 0.1, 'dropout_features': 0.1, 'final_dim': 256, 'untie_final_proj': True, 'layer_norm_first': False, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.1, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': False, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
move model to cuda
all-feature-5469
now-5469,all-0,0_0.wav,(184, 256)
now-5469,all-546,1312_1.wav,(55, 256)
now-5469,all-1092,1640_1.wav,(50, 256)
now-5469,all-1638,1933_1.wav,(67, 256)
now-5469,all-2184,224_1.wav,(175, 256)
now-5469,all-2730,2521_2.wav,(63, 256)
now-5469,all-3276,2860_1.wav,(132, 256)
now-5469,all-3822,3199_2.wav,(98, 256)
now-5469,all-4368,3491_1.wav,(126, 256)
now-5469,all-4914,647_2.wav,(63, 256)
now-5469,all-5460,996_1.wav,(160, 256)
all-feature-done
['extract_f0_print.py', '/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬', '2', 'harvest']
todo-f0-2735
f0ing,now-0,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/0_0.wav
todo-f0-2734
f0ing,now-0,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/0_2.wav
f0ing,now-547,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/1641_0.wav
f0ing,now-546,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/1640_2.wav
f0ing,now-1094,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2250_4.wav
f0ing,now-1092,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2250_1.wav
f0ing,now-1641,all-2735,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2864_1.wav
f0ing,now-1638,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/2861_1.wav
f0ing,now-2730,all-2734,-/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬/1_16k_wavs/997_1.wav
['extract_feature_print.py', 'cuda:0', '1', '0', '0', '/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬']
/content/Retrieval-based-Voice-Conversion-WebUI/logs/紬
load model(s) from hubert_base.pt
move model to cuda
all-feature-5469
now-5469,all-0,0_0.wav,(184, 256)
now-5469,all-546,1312_1.wav,(55, 256)
now-5469,all-1092,1640_1.wav,(50, 256)
now-5469,all-1638,1933_1.wav,(67, 256)
now-5469,all-2184,224_1.wav,(175, 256)
now-5469,all-2730,2521_2.wav,(63, 256)
now-5469,all-3276,2860_1.wav,(132, 256)
now-5469,all-3822,3199_2.wav,(98, 256)
now-5469,all-4368,3491_1.wav,(126, 256)
now-5469,all-4914,647_2.wav,(63, 256)
now-5469,all-5460,996_1.wav,(160, 256)
all-feature-done
2023-07-15 06:39:20 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:39:20 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:39:20 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
2023-07-15 06:39:25.715063: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:jaxlib.mlir._mlir_libs:Initializing MLIR with module: _site_initialize_0
DEBUG:jaxlib.mlir._mlir_libs:Registering dialects from initializer <module 'jaxlib.mlir._mlir_libs._site_initialize_0' from '/usr/local/lib/python3.10/dist-packages/jaxlib/mlir/_mlir_libs/_site_initialize_0.so'>
DEBUG:jax._src.xla_bridge:No jax_plugins namespace packages available
DEBUG:jax._src.path:etils.epath found. Using etils.epath for file I/O.
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
2023-07-15 06:39:31.732074: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:jaxlib.mlir._mlir_libs:Initializing MLIR with module: _site_initialize_0
DEBUG:jaxlib.mlir._mlir_libs:Registering dialects from initializer <module 'jaxlib.mlir._mlir_libs._site_initialize_0' from '/usr/local/lib/python3.10/dist-packages/jaxlib/mlir/_mlir_libs/_site_initialize_0.so'>
DEBUG:jax._src.xla_bridge:No jax_plugins namespace packages available
DEBUG:jax._src.path:etils.epath found. Using etils.epath for file I/O.
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
INFO:紬:{'train': {'log_interval': 200, 'seed': 1234, 'epochs': 20000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 12, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 12800, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0}, 'data': {'max_wav_value': 32768.0, 'sampling_rate': 40000, 'filter_length': 2048, 'hop_length': 400, 'win_length': 2048, 'n_mel_channels': 125, 'mel_fmin': 0.0, 'mel_fmax': None, 'training_files': './logs/紬/filelist.txt'}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [10, 10, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4], 'use_spectral_norm': False, 'gin_channels': 256, 'spk_embed_dim': 109}, 'model_dir': './logs/紬', 'experiment_dir': './logs/紬', 'save_every_epoch': 50, 'name': '紬', 'total_epoch': 501, 'pretrainG': 'pretrained/G40k.pth', 'pretrainD': 'pretrained/D40k.pth', 'gpus': '0', 'sample_rate': '40k', 'if_f0': 1, 'if_latest': 0, 'if_cache_data_in_gpu': 1}
WARNING:紬:/content/Retrieval-based-Voice-Conversion-WebUI/train is not a git repository, therefore hash value comparison will be ignored.
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
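The DataLoader warning above is benign but can slow or stall training on Colab, which typically exposes only 2 CPUs while the trainer requests 4 workers. A minimal sketch of the usual remedy (clamping the worker count; `requested_workers=4` mirrors what the trainer passes, not the repo's own code):

```python
import os

# Clamp the requested 4 DataLoader workers to what the machine actually has.
requested_workers = 4
num_workers = min(requested_workers, os.cpu_count() or 1)
print(num_workers)
```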
gin_channels: 256 self.spk_embed_dim: 109
INFO:紬:loaded pretrained pretrained/G40k.pth pretrained/D40k.pth
Traceback (most recent call last):
File "/content/Retrieval-based-Voice-Conversion-WebUI/train_nsf_sim_cache_sid_load_pretrain.py", line 534, in <module>
main()
File "/content/Retrieval-based-Voice-Conversion-WebUI/train_nsf_sim_cache_sid_load_pretrain.py", line 50, in main
mp.spawn(
File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
while not context.join():
File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/content/Retrieval-based-Voice-Conversion-WebUI/train_nsf_sim_cache_sid_load_pretrain.py", line 148, in run
utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d
File "/content/Retrieval-based-Voice-Conversion-WebUI/train/utils.py", line 206, in latest_checkpoint_path
x = f_list[-1]
IndexError: list index out of range
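This first IndexError is expected on a fresh experiment folder: `train/utils.py` indexes `f_list[-1]` unconditionally, and there is no `D_*.pth` checkpoint yet, so the trainer's except branch falls through to loading the pretrained weights. A defensive sketch (the digit-based sort key is a simplification of the original's):

```python
import glob
import os

def latest_checkpoint_path(dir_path, pattern="D_*.pth"):
    """Return the newest checkpoint, or None when the folder has none yet
    (the original does f_list[-1] unconditionally, hence the IndexError)."""
    f_list = glob.glob(os.path.join(dir_path, pattern))
    # Sort numerically by the digits in the filename, e.g. D_2333.pth -> 2333.
    f_list.sort(key=lambda f: int("".join(filter(str.isdigit, os.path.basename(f))) or 0))
    return f_list[-1] if f_list else None
```

With this, a fresh run returns None and the caller can decide to start from the pretrained weights instead of relying on an exception.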
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/content/Retrieval-based-Voice-Conversion-WebUI/train_nsf_sim_cache_sid_load_pretrain.py", line 166, in run
net_g.module.load_state_dict(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SynthesizerTrnMs256NSFsid:
Missing key(s) in state_dict: "enc_p.emb_pitch.weight", "dec.m_source.l_linear.weight", "dec.m_source.l_linear.bias", "dec.noise_convs.0.weight", "dec.noise_convs.0.bias", "dec.noise_convs.1.weight", "dec.noise_convs.1.bias", "dec.noise_convs.2.weight", "dec.noise_convs.2.bias", "dec.noise_convs.3.weight", "dec.noise_convs.3.bias".
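Every missing key in the RuntimeError above belongs to a pitch module (`emb_pitch`, `m_source`, `noise_convs`), which is the signature of loading non-f0 pretrained weights (`pretrained/G40k.pth`) into a pitch-enabled NSF synthesizer; a pitch-enabled run normally points `-pg`/`-pd` at `pretrained/f0G40k.pth` and `pretrained/f0D40k.pth` instead. A quick check over a subset of the reported keys:

```python
# Subset of the missing keys from the RuntimeError above:
missing = [
    "enc_p.emb_pitch.weight",
    "dec.m_source.l_linear.weight",
    "dec.noise_convs.0.weight",
    "dec.noise_convs.3.bias",
]
pitch_modules = ("emb_pitch", "m_source", "noise_convs")
only_pitch = all(any(m in key for m in pitch_modules) for key in missing)
print(only_pitch)  # True -> the checkpoint simply lacks the f0 branch
```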
2023-07-15 06:39:38 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:39:43 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:39:43 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:50:01 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:51:05 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:51:05 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:51:05 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-07-15 06:51:06 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1006, in process_api
result = await self.call_function(fn_index, inputs, iterator, request)
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 847, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/content/Retrieval-based-Voice-Conversion-WebUI/infer-web.py", line 957, in export_onnx
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
KeyError: 'weight'
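The KeyError above comes from pointing the ONNX exporter at a raw training checkpoint (`G_*.pth` under `logs/`, which stores its tensors under a `"model"` key) rather than an extracted weights checkpoint (saved into `weights/`, which uses `"weight"`). A defensive guard around the failing line can be sketched as follows; the key names are taken from the traceback, and `FakeTensor` is a stand-in for a torch tensor:

```python
def n_spk_from_checkpoint(cpt):
    """Read the speaker count the exporter needs, failing with a clear
    message when handed a training checkpoint instead of an extracted one."""
    if "weight" not in cpt:
        raise ValueError(
            "not an extracted weights checkpoint (keys: %s); pick a .pth "
            "from the weights/ folder" % sorted(cpt)
        )
    return cpt["weight"]["emb_g.weight"].shape[0]  # n_spk, as in infer-web.py

class FakeTensor:  # stand-in for a torch tensor
    shape = (109, 256)

good = {"weight": {"emb_g.weight": FakeTensor()}, "config": []}
print(n_spk_from_checkpoint(good))  # 109
```

In this run no extracted checkpoint exists yet (training aborted earlier), so any selectable .pth was necessarily the wrong kind.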
2023-07-15 07:15:47 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
2023-07-15 07:15:48 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
2023-07-15 07:16:17 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
2023-07-15 07:16:18 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
2023-07-15 07:16:33 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
2023-07-15 07:16:34 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
Introduction
Pitch-enabled model; one-click training was started.
Training window log:
step 1: processing data
python3 trainset_preprocess_pipeline_print.py /content/dataset 40000 2 /content/Retrieval-based-Voice-Conversion-WebUI/logs/紬 False
step 2a: extracting pitch
python3 extract_f0_print.py /content/Retrieval-based-Voice-Conversion-WebUI/logs/紬 2 harvest
step 2b: extracting features
python3 extract_feature_print.py cuda:0 1 0 0 /content/Retrieval-based-Voice-Conversion-WebUI/logs/紬
step 3a: training the model
write filelist done
python3 train_nsf_sim_cache_sid_load_pretrain.py -e 紬 -sr 40k -f0 1 -bs 12 -g 0 -te 501 -se 50 -pg pretrained/G40k.pth -pd pretrained/D40k.pth -l 0 -c 1
Training completed. You can view the training logs in the console or in train.log inside the experiment folder.
(498352, 256),11295
training index
adding index
Index built successfully: added_IVF11295_Flat_nprobe_1.index
all processes have been completed!
Training log
No model file was found in the weights directory.