RVC-Project / Retrieval-based-Voice-Conversion-WebUI

Easily train a good VC model with voice data <= 10 mins!
MIT License

Crash with AttributeError: 'tuple' object has no attribute 'split' when trying to convert to onnx #2173

Open cushycrux opened 2 weeks ago

cushycrux commented 2 weeks ago

Hi guys, I hope someone can help. I installed the large all-in-one Windows repo. go_web.bat starts fine, but when I try to convert a .pth model to ONNX on the Export Onnx tab of the WebUI, the app throws a traceback. My input (example):

RVC Model Path: F:\pth\Shadowheart_250E.pth
Onnx Export Path: F:\pth\Shadowheart_250E..onnx

What happens:

```
Running on local URL:  http://0.0.0.0:7897
============= Diagnostic Run torch.onnx.export version 2.0.0+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Traceback (most recent call last):
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\gradio\blocks.py", line 1006, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\gradio\blocks.py", line 847, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "F:\RVC1006Nvidia\infer-web.py", line 171, in export_onnx
    eo(ModelPath, ExportedPath)
  File "F:\RVC1006Nvidia\infer\modules\onnx\export.py", line 29, in export_onnx
    torch.onnx.export(
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\onnx\utils.py", line 506, in export
    _export(
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\onnx\utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\onnx\utils.py", line 1113, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\onnx\utils.py", line 989, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\onnx\utils.py", line 893, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\jit\_trace.py", line 1268, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\jit\_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\jit\_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\nn\modules\module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "F:\RVC1006Nvidia\infer\lib\infer_pack\models_onnx.py", line 650, in forward
    z = self.flow(z_p, x_mask, g=g, reverse=True)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\nn\modules\module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "F:\RVC1006Nvidia\infer\lib\infer_pack\models_onnx.py", line 152, in forward
    x = flow(x, x_mask, g=g, reverse=reverse)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\nn\modules\module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "F:\RVC1006Nvidia\infer\lib\infer_pack\modules.py", line 519, in forward
    x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
  File "F:\RVC1006Nvidia\runtime\lib\site-packages\torch\functional.py", line 189, in split
    return tensor.split(split_size_or_sections, dim)
AttributeError: 'tuple' object has no attribute 'split'
```

Help, please?
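
What the trace shows: `ResidualCouplingBlock.forward` in `models_onnx.py` (line 152) feeds the output of each flow straight into the next one, and one of those flows is apparently returning a `(tensor, logdet)` tuple instead of a bare tensor. The next flow then hands that tuple to `torch.split` in `modules.py` (line 519), and `torch.functional.split` calls `.split()` on it, which is exactly the AttributeError above. A minimal sketch of that pattern, using a hypothetical `coupling_layer` stand-in rather than the real RVC module:

```python
import torch

HALF_CHANNELS = 2  # stand-in for self.half_channels


def coupling_layer(x, reverse=False):
    # Hypothetical stand-in for a residual coupling layer: the split below is
    # the same call that fails in modules.py when x is a tuple.
    x0, x1 = torch.split(x, [HALF_CHANNELS] * 2, 1)
    x = torch.cat([x0, x1], 1)
    if reverse:
        return x                  # reverse mode: tensor only
    return x, torch.zeros(1)      # forward mode: (x, logdet) tuple


x = torch.randn(1, 2 * HALF_CHANNELS, 10)
out = coupling_layer(x)            # tuple: (tensor, logdet)
coupling_layer(out, reverse=True)  # AttributeError: 'tuple' object has no attribute 'split'
```

So the mismatch sits between what the coupling layer returns and what the block's reverse loop expects to pass along.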

cushycrux commented 2 weeks ago

Fixed it myself with the hint from https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/1519. Please implement it!
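
For anyone hitting this before a patch lands, the shape of the fix implied by the trace is to unwrap the tuple before passing it on to the next flow in the coupling block's reverse branch. The sketch below only illustrates that idea with made-up `DummyCouplingLayer` and `PatchedCouplingBlock` classes, assuming the layers now always return `(x, logdet)`; the authoritative change is the one discussed in issue #1519:

```python
import torch
from torch import nn


class DummyCouplingLayer(nn.Module):
    """Stand-in for the real coupling layer; always returns (x, logdet)."""

    def forward(self, x, x_mask, g=None, reverse=False):
        return x * x_mask, torch.zeros(1)


class PatchedCouplingBlock(nn.Module):
    def __init__(self, n_flows=2):
        super().__init__()
        self.flows = nn.ModuleList(DummyCouplingLayer() for _ in range(n_flows))

    def forward(self, x, x_mask, g=None, reverse=False):
        flows = self.flows if not reverse else reversed(self.flows)
        for flow in flows:
            out = flow(x, x_mask, g=g, reverse=reverse)
            # Unwrap an (x, logdet)-style tuple so the next flow gets a tensor.
            x = out[0] if isinstance(out, tuple) else out
        return x


block = PatchedCouplingBlock()
z = block(torch.randn(1, 4, 10), torch.ones(1, 1, 10), reverse=True)
print(z.shape)  # torch.Size([1, 4, 10])
```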