modelscope / FunASR

A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity Detection, Text Post-processing etc.
https://www.funasr.com

Memory is not released after offline audio recognition completes #1808

Open ryancurry-mz opened 1 month ago

ryancurry-mz commented 1 month ago

🐛 Bug

To Reproduce

Steps to reproduce the behavior (always include the command you ran):

Would appreciate it if someone could take a look when they have time. Many thanks.

I made a few simple modifications based on /runtime/python/http/server.py (the full code is pasted below). The problem: after recognizing multiple offline audio files, memory is not released, and it eventually fills up completely.

(screenshot of memory usage omitted)

Code sample

import argparse
import logging
import os
import uuid
import gc

import aiofiles
import ffmpeg
import uvicorn
from fastapi import FastAPI, File, UploadFile
from modelscope.utils.logger import get_logger

from funasr import AutoModel
from itn.chinese.inverse_normalizer import InverseNormalizer

logger = get_logger(log_level=logging.INFO)
logger.setLevel(logging.INFO)

parser = argparse.ArgumentParser()
parser.add_argument(
    "--host", type=str, default="0.0.0.0", required=False, help="host ip, localhost, 0.0.0.0"
)
parser.add_argument("--port", type=int, default=8000, required=False, help="server port")
parser.add_argument(
    "--asr_model",
    type=str,
    # default="paraformer-zh",
    default="/soft/FunASR/model/speech_paraformer-large-vad-punc-spk_asr_nat-zh-cn",
    help="asr model from https://github.com/alibaba-damo-academy/FunASR?tab=readme-ov-file#model-zoo",
)
parser.add_argument("--asr_model_revision", type=str, default="v2.0.4", help="")
parser.add_argument(
    "--vad_model",
    type=str,
    # default="fsmn-vad",
    default="/soft/FunASR/model/speech_fsmn_vad_zh-cn-16k-common-pytorch/",
    help="vad model from https://github.com/alibaba-damo-academy/FunASR?tab=readme-ov-file#model-zoo",
)
parser.add_argument("--vad_model_revision", type=str, default="v2.0.4", help="")
parser.add_argument(
    "--punc_model",
    type=str,
    # default="ct-punc-c",
    default="/soft/FunASR/model/punc_ct-transformer_cn-en-common-vocab471067-large/",
    help="model from https://github.com/alibaba-damo-academy/FunASR?tab=readme-ov-file#model-zoo",
)
parser.add_argument("--punc_model_revision", type=str, default="v2.0.4", help="")

# Speaker identification / diarization
parser.add_argument("--spk_model_revision", type=str, default="v2.0.4", help="")
parser.add_argument(
    "--spk_model",
    type=str,
    # default="cam++",
    default="/soft/FunASR/model/speech_campplus_sv_zh-cn_16k-common/",
    help="model from https://github.com/alibaba-damo-academy/FunASR?tab=readme-ov-file#model-zoo",
)

parser.add_argument("--ngpu", type=int, default=0, help="0 for cpu, 1 for gpu")
parser.add_argument("--device", type=str, default="cpu", help="cuda, cpu")
parser.add_argument("--ncpu", type=int, default=4, help="cpu cores")
parser.add_argument(
    "--hotword_path",
    type=str,
    default="hotwords.txt",
    help="hot word txt path, only the hot word model works",
)
parser.add_argument("--certfile", type=str, default=None, required=False, help="certfile for ssl")
parser.add_argument("--keyfile", type=str, default=None, required=False, help="keyfile for ssl")
parser.add_argument("--temp_dir", type=str, default="temp_dir/", required=False, help="temp dir")
args = parser.parse_args()
logger.info("-----------  Configuration Arguments -----------")
for arg, value in vars(args).items():
    logger.info("%s: %s" % (arg, value))
logger.info("------------------------------------------------")

os.makedirs(args.temp_dir, exist_ok=True)

logger.info("model loading")
# load funasr model
model = AutoModel(
    model=args.asr_model,
    model_revision=args.asr_model_revision,
    vad_model=args.vad_model,
    vad_model_revision=args.vad_model_revision,
    punc_model=args.punc_model,
    punc_model_revision=args.punc_model_revision,
    spk_model=args.spk_model,
    spk_model_revision=args.spk_model_revision,
    ngpu=args.ngpu,
    ncpu=args.ncpu,
    device=args.device,
    disable_pbar=True,
    disable_log=True,
)
logger.info("loaded models!")

app = FastAPI(title="FunASR")

param_dict = {"sentence_timestamp": False, "batch_size_s": 50}
if args.hotword_path is not None and os.path.exists(args.hotword_path):
    with open(args.hotword_path, "r", encoding="utf-8") as f:
        lines = f.readlines()
        lines = [line.strip() for line in lines]
    hotword = " ".join(lines)
    logger.info(f"hotwords: {hotword}")
    param_dict["hotword"] = hotword

@app.post("/recognition")
async def api_recognition(audio: UploadFile = File(..., description="audio file")):
    suffix = audio.filename.split(".")[-1]
    audio_path = f"{args.temp_dir}/{str(uuid.uuid1())}.{suffix}"
    async with aiofiles.open(audio_path, "wb") as out_file:
        content = await audio.read()
        await out_file.write(content)
    try:
        audio_bytes, _ = (
            ffmpeg.input(audio_path, threads=0)
            .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=16000)
            .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
        )
    except Exception as e:
        logger.error(f"error while reading the audio file: {e}")
        return {"msg": "error while reading the audio file", "code": 1}
    rec_results = model.generate(input=audio_bytes, is_final=True, **param_dict)
    logger.info(f"recognition results rec_results: {rec_results}")
    # empty result
    if len(rec_results) == 0:
        return {"text": "", "sentences": [], "code": 0}
    elif len(rec_results) == 1:
        # parse the recognition result
        rec_result = rec_results[0]
        # text = rec_result["text"]
        # inverse text normalization
        invnormalizer = InverseNormalizer(cache_dir="/soft/FunASR/model/fst_itn_zh/")
        text = invnormalizer.normalize(rec_result["text"])
        sentences = []
        for sentence in rec_result["sentence_info"]:
            # timestamps for each sentence
            sentences.append(
                {"spk": sentence["spk"],
                 "text": invnormalizer.normalize(sentence["text"]),
                 "start": sentence["start"],
                 "end": sentence["end"]}
            )
        ret = {"text": text, "sentences": sentences, "code": 0}
        logger.info(f"recognition result: {ret}")
        # force garbage collection
        gc.collect()
        return ret
    else:
        logger.info(f"recognition results: {rec_results}")
        return {"msg": "unknown error", "code": -1}

if __name__ == "__main__":
    uvicorn.run(
        app, host=args.host, port=args.port, ssl_keyfile=args.keyfile, ssl_certfile=args.certfile
    )

Expected behavior

After each recognition completes, memory should be released rather than growing continuously.

Environment

Additional context

I am using the CPU version on a 4-core / 8 GB virtual machine. Starting the server via the server.py above takes 20-30 minutes, and recognizing a 1-minute two-speaker offline audio file takes 3 minutes. I am not sure whether this is because the machine is underpowered.
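One way to narrow down where the growth comes from is to compare Python-level allocations around a single request with the standard library's tracemalloc. Note that tracemalloc only sees memory obtained through Python's allocator; native buffers held by libtorch or the FST runtime will not show up here. A minimal diagnostic sketch — `leaky` below is a deliberately leaking placeholder standing in for one call to `model.generate`:

```python
import tracemalloc

def measure_python_growth(fn, *args, **kwargs):
    """Run fn once and report net Python-level allocation growth in bytes.

    Only sees memory allocated through Python's allocator; native buffers
    (e.g. libtorch tensors) are invisible to tracemalloc.
    """
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    fn(*args, **kwargs)
    after = tracemalloc.take_snapshot()
    stats = after.compare_to(before, "lineno")
    growth = sum(s.size_diff for s in stats)
    tracemalloc.stop()
    return growth, stats[:5]  # net growth and the top five allocation sites

# Placeholder workload: leaks by keeping 1 MiB alive per call
_leak = []
def leaky():
    _leak.append(bytearray(1024 * 1024))

growth, top = measure_python_growth(leaky)
print(f"net Python-level growth: {growth} bytes")
```

If the growth reported here stays near zero while the process RSS keeps climbing, the leak is on the native side rather than in Python objects, and `gc.collect()` cannot release it.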

yeyupiaoling commented 2 weeks ago

@ryancurry-mz Hi, I tested this on my side using the project's original code. The test audio is 1 minute 42 seconds: inference takes 1.5 s on GPU and 7.6 s on CPU. Those inference times are normal, so the slowness is likely a problem with your machine.

Starting server.py takes less than 10 seconds on my side.

I also repeated inference 100 times, and memory stayed flat whether using GPU or CPU. There is no continuous memory growth like you describe; you should rule out the influence of your other code.
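A repeat-and-watch check like the one described above can be reproduced with only the standard library by tracking the process's peak RSS across many calls; since much of FunASR's memory lives in native allocations, a process-level measure catches growth that gc and tracemalloc miss. A hedged sketch — `recognize_once` is a placeholder for one `model.generate` call (Linux/macOS only, as `resource` is unavailable on Windows):

```python
import resource
import sys

def peak_rss_kb():
    """Peak resident set size of this process, in KiB.

    ru_maxrss is reported in KiB on Linux and in bytes on macOS.
    """
    usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return usage // 1024 if sys.platform == "darwin" else usage

def check_for_growth(fn, warmup=3, repeats=100):
    """Call fn repeatedly and return (baseline, final) peak RSS in KiB.

    Warm-up runs let caches and allocator pools stabilize; a real leak
    shows up as `final` climbing well past `baseline` over `repeats`.
    """
    for _ in range(warmup):
        fn()
    baseline = peak_rss_kb()
    for _ in range(repeats):
        fn()
    return baseline, peak_rss_kb()

# Placeholder workload standing in for model.generate(...)
def recognize_once():
    buf = bytes(256 * 1024)  # transient allocation, freed on return
    return len(buf)

base, final = check_for_growth(recognize_once)
print(f"peak RSS: {base} KiB -> {final} KiB")
```

For a transient workload like the placeholder, `final` should stay close to `base`; with the real handler, steadily climbing numbers across 100 requests would confirm the leak independently of any external monitoring tool.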

yeyupiaoling commented 2 weeks ago

@ryancurry-mz You made additional modifications to the code, right? Check whether the code you added is causing this.

ryancurry-mz commented 2 weeks ago

@ryancurry-mz You made additional modifications to the code, right? Check whether the code you added is causing this.

I made a few changes on top of the original server.py, adding speaker identification and inverse text normalization. I am not sure whether that is the cause; I will check again. Thanks for the reply!

# Speaker identification / diarization
parser.add_argument("--spk_model_revision", type=str, default="v2.0.4", help="")
parser.add_argument(
    "--spk_model",
    type=str,
    # default="cam++",
    default="/soft/FunASR/model/speech_campplus_sv_zh-cn_16k-common/",
    help="model from https://github.com/alibaba-damo-academy/FunASR?tab=readme-ov-file#model-zoo",
)
        # inverse text normalization
        invnormalizer = InverseNormalizer(cache_dir="/soft/FunASR/model/fst_itn_zh/")
        text = invnormalizer.normalize(rec_result["text"])
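One thing worth ruling out in the snippet above: `InverseNormalizer` is constructed inside the request handler, so every request rebuilds it (and whatever FST state it loads from `cache_dir`). If those instances are what accumulates, constructing the normalizer once at startup and reusing it should flatten the memory curve. A minimal sketch of the load-once pattern — the stand-in `object()` replaces the real `InverseNormalizer` so this example runs without the itn package:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_normalizer():
    """Build the inverse normalizer once; every later call reuses it.

    In the real server this body would be (mirroring the snippet above):
        from itn.chinese.inverse_normalizer import InverseNormalizer
        return InverseNormalizer(cache_dir="/soft/FunASR/model/fst_itn_zh/")
    A stand-in object keeps this sketch runnable without the itn package.
    """
    return object()

n1 = get_normalizer()
n2 = get_normalizer()
print(n1 is n2)  # the same instance is reused across calls
```

The handler would then call `get_normalizer().normalize(...)` instead of constructing a new instance per request; module-level assignment at startup works just as well as `lru_cache` here.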