FunAudioLLM / CosyVoice

Multi-lingual large voice generation model, providing full-stack capabilities for inference, training, and deployment.
https://funaudiollm.github.io/
Apache License 2.0

Streaming inference service with FastAPI frequently hangs, with GPU utilization stuck at 100% #472

Open Jerry-Kon opened 3 days ago

Jerry-Kon commented 3 days ago

Code:

import base64
import json
import os
import uuid

import torchaudio

# Body of the streaming generator: `model` is a CosyVoice instance, and
# `text`, `prompt_text`, `prompt_speech_16k` are provided by the caller.
for chunk in model.inference_zero_shot(text, prompt_text, prompt_speech_16k, stream=True):
    # Write each streamed chunk to a temporary WAV file.
    wav_path = "./audio/" + str(uuid.uuid4()) + ".wav"
    torchaudio.save(
        wav_path,
        chunk["tts_speech"],
        22050,
        encoding="PCM_S",
        bits_per_sample=16,
    )
    # Read the file back, base64-encode it, and yield it as one JSON line.
    with open(wav_path, "rb") as f:
        base64_str = base64.b64encode(f.read()).decode("utf-8")
    yield json.dumps({"audiourl": base64_str}) + "\n\n"
    os.remove(wav_path)
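
Since the issue concerns serving this generator through FastAPI, below is a minimal, self-contained sketch of how such a generator might be plugged into a streaming endpoint. The endpoint path, request fields, and the placeholder stream_chunks generator are illustrative assumptions, not the project's actual serving code; in a real service, the generator above would take the placeholder's place.

import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class TTSRequest(BaseModel):
    # Hypothetical request schema for illustration only.
    text: str
    prompt_text: str

def stream_chunks(text: str, prompt_text: str):
    # Placeholder generator: in the real service this would iterate over
    # model.inference_zero_shot(..., stream=True) as in the snippet above,
    # yielding one JSON line per synthesized audio chunk.
    for i in range(3):
        yield json.dumps({"audiourl": f"chunk-{i}"}) + "\n\n"

@app.post("/tts/stream")
async def tts_stream(req: TTSRequest):
    # StreamingResponse consumes the synchronous generator lazily and sends
    # each yielded JSON line to the client as soon as it is produced.
    return StreamingResponse(
        stream_chunks(req.text, req.prompt_text),
        media_type="text/event-stream",  # placeholder media type
    )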
Jerry-Kon commented 3 days ago

The machine has 4 × RTX 4090 GPUs, and the virtual environment uses torch 2.0.1+cu118. Four instance replicas were started in total, and one of them hangs. The GPU utilization monitor is shown below:
[screenshot: GPU utilization graph]
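
For reference, a minimal sketch of running one replica per GPU. The issue does not state how the four replicas were actually launched, so the CUDA_VISIBLE_DEVICES-based launcher below, along with the port numbers and the "server:app" module path, is an assumption for illustration only.

import os
import subprocess

processes = []
for gpu_id in range(4):
    env = os.environ.copy()
    # Each replica sees only its own GPU; set before the worker imports torch.
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    processes.append(subprocess.Popen(
        ["uvicorn", "server:app", "--port", str(8000 + gpu_id)],
        env=env,
    ))

for p in processes:
    p.wait()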