intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

GLM-4-9B-Chat missing 'import math' #11277

Open luoxi-github opened 2 months ago

luoxi-github commented 2 months ago

Version: 2.1.0b20240610

Error:
File "ipex_llm/transformers/models/chatglm4.py", line 342, in core_attn_forward
NameError: name 'math' is not defined
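
For context, a hedged illustration only (not the actual ipex-llm source): attention code typically scales the QK^T scores by `math.sqrt(head_dim)`, so a missing module-level `import math` surfaces as exactly this NameError on the first forward pass. The function name `core_attn_forward_sketch` below is hypothetical.

```python
import math   # the affected 2.1.0b20240610 build was reportedly missing this import in chatglm4.py
import torch

def core_attn_forward_sketch(query: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
    """Hypothetical reduction of the failing code path: scaled dot-product scores.

    Without `import math` at module level, the division by math.sqrt(head_dim)
    raises: NameError: name 'math' is not defined.
    """
    head_dim = query.shape[-1]
    return torch.matmul(query, key.transpose(-1, -2)) / math.sqrt(head_dim)

# Example usage with dummy tensors shaped (batch, heads, seq, head_dim)
q = torch.randn(1, 2, 4, 8)
k = torch.randn(1, 2, 4, 8)
print(core_attn_forward_sketch(q, k).shape)   # torch.Size([1, 2, 4, 4])
```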

qiuxin2012 commented 2 months ago

It's fixed now. You can update your ipex-llm to the latest version and try again.
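
A quick way to confirm the upgrade took effect, using only the standard library (the distribution name `ipex-llm` is taken from the reply above; expect a build newer than 2.1.0b20240610):

```python
from importlib.metadata import version

# Prints the installed ipex-llm build string, e.g. a nightly newer than 2.1.0b20240610.
print(version("ipex-llm"))
```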