intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0

Fix speech_paraformer issue with unexpected changes #12416

Closed — sgwhat closed this pull request 6 days ago

sgwhat commented 6 days ago

Description

Hotfix for a Paraformer model issue caused by unexpected changes introduced in version 2.2.0b20241118.


jason-dai commented 6 days ago

Do we need to specify which Paraformer version we support?

sgwhat commented 6 days ago

> Do we need to specify which Paraformer version we support?

Sure, I've added the related version requirement to our documentation.
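
Beyond documenting the requirement, such a pin can also be checked at runtime. A minimal sketch of that idea, assuming a dotted-version scheme; the helper names and the pinned version `"1.0.0"` are illustrative assumptions, not the actual requirement added in this PR:

```python
# Illustrative runtime check for a documented version requirement.
# The pinned version below is a hypothetical placeholder; the real
# supported Paraformer version is stated in the ipex-llm documentation.

SUPPORTED_PARAFORMER_VERSION = "1.0.0"  # hypothetical pinned version

def parse_version(version: str) -> tuple:
    """Split a dotted version string into a tuple of integer parts,
    ignoring non-numeric segments (e.g. beta suffixes)."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def is_supported(installed: str,
                 supported: str = SUPPORTED_PARAFORMER_VERSION) -> bool:
    """Return True when the installed version matches the documented one."""
    return parse_version(installed) == parse_version(supported)

def check_paraformer_version(installed: str) -> None:
    """Raise early with a clear message instead of failing mid-inference."""
    if not is_supported(installed):
        raise RuntimeError(
            f"Paraformer package version {installed} is untested; "
            f"please install version {SUPPORTED_PARAFORMER_VERSION}."
        )
```

Failing fast with an explicit message avoids the harder-to-diagnose breakage that motivated this hotfix, where an unexpected upstream change surfaced only at model run time.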