Is there an existing issue for this?
Current Behavior
The error is:

load_model_config modle\chatglm-6b-int4...
Loading modle\chatglm-6b-int4...
No compiled kernel found.
Compiling kernels : C:\Users\24507\.cache\huggingface\modules\transformers_modules\chatglm-6b-int4\quantization_kernels_parallel.c
Compiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 C:\Users\24507\.cache\huggingface\modules\transformers_modules\chatglm-6b-int4\quantization_kernels_parallel.c -shared -o C:\Users\24507\.cache\huggingface\modules\transformers_modules\chatglm-6b-int4\quantization_kernels_parallel.so
Load kernel : C:\Users\24507\.cache\huggingface\modules\transformers_modules\chatglm-6b-int4\quantization_kernels_parallel.so
Setting CPU quantization kernel threads to 8
Using quantization cache
Applying quantization to glm layers
Loaded the model in 5.40 seconds.
Backend TkAgg is interactive backend. Turning interactive mode on.
module 'models' has no attribute 'ChatGLM'
File "E:\astrochat\langchain-ChatGLM-master\models\shared.py", line 41, in loaderLLM
provides_class = getattr(sys.modules['models'], llm_model_info['provides'])
File "E:\astrochat\langchain-ChatGLM-master\webui.py", line 106, in init_model
llm_model_ins = shared.loaderLLM()
File "E:\astrochat\langchain-ChatGLM-master\webui.py", line 333, in <module>
model_status = init_model()
AttributeError: module 'models' has no attribute 'ChatGLM'
How should this be fixed?
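For context, a minimal sketch of the failing pattern from models/shared.py (the stand-in module and dict below are hypothetical, not the project's real setup): getattr raises AttributeError when the `models` package does not export the requested class, which typically happens when models/__init__.py failed to import ChatGLM (for example due to a missing dependency). Passing a default to getattr makes the failure inspectable instead of fatal:

```python
# Hypothetical minimal reproduction: a bare `models` module that, like in
# the report, does not export a ChatGLM attribute.
import sys
import types

models = types.ModuleType("models")
sys.modules["models"] = models

llm_model_info = {"provides": "ChatGLM"}  # mirrors llm_model_info['provides']

# Defensive variant of the line at shared.py:41 — pass a default to getattr
# instead of letting it raise, then report which names the module exports.
provides_class = getattr(sys.modules["models"], llm_model_info["provides"], None)
if provides_class is None:
    exported = [n for n in dir(models) if not n.startswith("_")]
    print(f"module 'models' has no attribute {llm_model_info['provides']!r}; "
          f"exported names: {exported}")
```

If the real `models` package prints an empty or incomplete export list this way, the underlying import error in models/__init__.py is the thing to chase.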
Expected Behavior
No response
Steps To Reproduce
*
Environment
Anything else?
No response