(chatglm) n:\github\GLM-4>python openai_api_lby.py
2024-06-12 15:24:16,061 - Start initialize model...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "n:\github\GLM-4\openai_api_lby.py", line 86, in <module>
    bot = ChatGLM()
  File "n:\github\GLM-4\openai_api_lby.py", line 45, in __init__
    self.model = self._load_model(model_name)
  File "n:\github\GLM-4\openai_api_lby.py", line 50, in _load_model
    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map='auto', quantization_config=BitsAndBytesConfig(load_in_4bit=True))
  File "n:\miniconda3\envs\chatglm\lib\site-packages\transformers\models\auto\auto_factory.py", line 558, in from_pretrained
    return model_class.from_pretrained(
  File "n:\miniconda3\envs\chatglm\lib\site-packages\transformers\modeling_utils.py", line 3165, in from_pretrained
    hf_quantizer.validate_environment(
  File "n:\miniconda3\envs\chatglm\lib\site-packages\transformers\quantizers\quantizer_bnb_4bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes
transformers is already installed, but the error still occurs.
(chatglm) n:\github\GLM-4>pip show transformers
Name: transformers
Version: 4.40.0
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: transformers@huggingface.co
License: Apache 2.0 License
Location: n:\miniconda3\envs\chatglm\lib\site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by: peft, sentence-transformers