Error when running inference through the WebUI:
Traceback (most recent call last):
  File "/data/workpace/fish-speech/fish_speech/webui/app.py", line 241, in inference
    resp.raise_for_status()
  File "/data/workpace/fish-speech/.conda/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: for url: http://192.168.1.16:8000/v1/models/default/invoke
Traceback (most recent call last):
  File "/data/workpace/fish-speech/.conda/lib/python3.10/site-packages/gradio/queueing.py", line 522, in process_events
    response = await route_utils.call_process_api(
  File "/data/workpace/fish-speech/.conda/lib/python3.10/site-packages/gradio/route_utils.py", line 260, in call_process_api
    output = await app.get_blocks().process_api(
  File "/data/workpace/fish-speech/.conda/lib/python3.10/site-packages/gradio/blocks.py", line 1698, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "/data/workpace/fish-speech/.conda/lib/python3.10/site-packages/gradio/blocks.py", line 1540, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "/data/workpace/fish-speech/.conda/lib/python3.10/site-packages/gradio/components/audio.py", line 269, in postprocess
    raise ValueError(f"Cannot process {value} as Audio")
ValueError: Cannot process [] as Audio
Error on the API server side:
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
SyntaxError: unterminated string literal (detected at line 1) (<string>, line 1)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
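The WebUI's "Cannot process [] as Audio" is only the downstream symptom: the API server returned 500 because the Inductor backend failed to compile, so no audio came back. As the error message itself suggests, one possible workaround (a sketch, not a verified fix; it disables the compile speed-up and falls back to eager execution) is to set the suppress flag before the model is loaded, near the top of the API server's entry script:

```python
# Sketch of the fallback suggested by the error message: when Dynamo/Inductor
# compilation fails, suppress the exception and run the model in eager mode.
# This must execute before the first call into any torch.compile'd model.
import torch._dynamo

torch._dynamo.config.suppress_errors = True
```

Alternatively, setting TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 in the server's environment (also suggested by the message) produces a fuller log that may reveal which generated kernel triggers the SyntaxError.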
2024-04-08 03:05:56,689 ERROR "POST /v1/models/default/invoke HTTP/1.1" 500 NoneType: None