THUDM / GLM-4

GLM-4 series: Open Multilingual Multimodal Chat LMs | 开源多语言多模态对话模型
Apache License 2.0

Can an INT4-quantized version be provided? #15

Closed: mjzcng closed this issue 3 months ago

mjzcng commented 3 months ago

I noticed that the README under base reports GPU memory usage and generation speed for both BF16 and INT4 precision, but only the BF16 model is currently provided. Will an official INT4 version of the model be released in the future?

zRzRzRzRzRzRzR commented 3 months ago

I noticed that the README under base reports GPU memory usage and generation speed for both BF16 and INT4 precision, but only the BF16 model is currently provided. Will an official INT4 version of the model be released in the future?

Just load in 4bit; that is how we ran those tests.
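
For context, a minimal sketch (not an official script) of what "load in 4bit" means here, assuming the bitsandbytes backend is installed; the compute dtype and device_map choices below are assumptions, not from the thread:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Spelled out via BitsAndBytesConfig; equivalent to passing load_in_4bit=True directly.
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.bfloat16)  # compute dtype is an assumption

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4-9b-chat",
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    quantization_config=bnb_config,
    device_map="auto"  # place the 4-bit weights on GPU
).eval()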

triumph commented 3 months ago

How can basic_demo/openai_api_server.py support load in 4bit?

zRzRzRzRzRzRzR commented 3 months ago

Ah, not with vLLM: openai_api_server already uses vLLM as its backend by default, and vLLM currently cannot load this kind of 4-bit model. See this issue for details: https://github.com/vllm-project/vllm/issues/4033

triumph commented 3 months ago

How can trans_web_demo.py support load in 4bit?

M1saka10010 commented 3 months ago

Does 4-bit quantization still have the same garbled-output problem as GLM-3?

shams2023 commented 3 months ago

Does 4-bit quantization still have the same garbled-output problem as GLM-3?

How do you apply 4-bit quantization? (Do you have a code screenshot?)

M1saka10010 commented 3 months ago

Does 4-bit quantization still have the same garbled-output problem as GLM-3?

How do you apply 4-bit quantization? (Do you have a code screenshot?)

https://github.com/THUDM/GLM-4/issues/15#issuecomment-2148975639

shams2023 commented 3 months ago

Does 4-bit quantization still have the same garbled-output problem as GLM-3?

How do you apply 4-bit quantization? (Do you have a code screenshot?)

#15 (comment)

Thanks.

galena01 commented 3 months ago

I tested it: bitsandbytes quantization works. You can export a 4-bit version with this code:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cpu"

tokenizer = AutoTokenizer.from_pretrained("./glm-4-9b-chat", trust_remote_code=True)

# Build a sample prompt with the chat template (only needed for a quick sanity check).
query = "你好"

inputs = tokenizer.apply_chat_template([{"role": "user", "content": query}],
                                       add_generation_prompt=True,
                                       tokenize=True,
                                       return_tensors="pt",
                                       return_dict=True
                                       )

inputs = inputs.to(device)

# Load the model with bitsandbytes 4-bit quantization, then save the quantized weights.
model = AutoModelForCausalLM.from_pretrained(
    "./glm-4-9b-chat",
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    load_in_4bit=True
).eval()
model.save_pretrained("glm-4-9b-chat-int4")
tokenizer.save_pretrained("glm-4-9b-chat-int4")

shams2023 commented 3 months ago

I tested it: bitsandbytes quantization works. You can export a 4-bit version with the code above.

Are the exported weights actually INT4 weights? (Below are the model files exported with your code.) [screenshot] After getting the weights I used the following code and still got an error: [screenshot]

swordfar commented 3 months ago

I tested it: bitsandbytes quantization works. You can export a 4-bit version with the code above.

Nice. I tested it: simply adding "load_in_4bit=True" to the hf.py file gets it running.
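
For illustration only (the exact contents of hf.py are not shown in the thread, and MODEL_PATH is a placeholder): the change described above amounts to adding one keyword argument to the existing from_pretrained call, roughly:

# Inside hf.py, wherever the model is loaded (names are illustrative):
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    load_in_4bit=True  # the one-line addition described above
).eval()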

galena01 commented 3 months ago

@shams2023 Hi, yes, the exported weights are INT4. In my tests it takes roughly 8 GB of GPU memory just to load the model, and inference uses somewhat more on top of that (depending on context length).
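
For reference, a minimal sketch (not from the thread) of loading the exported glm-4-9b-chat-int4 checkpoint on GPU and generating; it assumes the quantization config was saved alongside the weights, and the generation settings are illustrative:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

tokenizer = AutoTokenizer.from_pretrained("glm-4-9b-chat-int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "glm-4-9b-chat-int4",
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto"  # 4-bit bitsandbytes weights must sit on GPU; do not call .to(device) on the model
).eval()

inputs = tokenizer.apply_chat_template([{"role": "user", "content": "你好"}],
                                       add_generation_prompt=True,
                                       tokenize=True,
                                       return_tensors="pt",
                                       return_dict=True).to(device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the newly generated text.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))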

maxin9966 commented 3 months ago

@zRzRzRzRzRzRzR Can glm-4-9b be exported to a quantized model with AutoGPTQ or AutoAWQ?

wh336699 commented 3 months ago

@shams2023

model = AutoModelForCausalLM.from_pretrained(
    "/home/wanhao/project/ChatGLM-9B/GLM-4/GLM-4-INT8/glm-4-9b-chat-GPTQ-Int8",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    load_in_4bit=True
).eval()

Remove the to(device) call.

zs001122 commented 3 months ago

I tested it: bitsandbytes quantization works. You can export a 4-bit version with the code above.

Nice. I tested it: simply adding "load_in_4bit=True" to the hf.py file gets it running.

Which version of transformers are you using?

swordfar commented 3 months ago

I tested it: bitsandbytes quantization works. You can export a 4-bit version with the code above.

Nice. I tested it: simply adding "load_in_4bit=True" to the hf.py file gets it running.

Which version of transformers are you using?

transformers 4.41.2

Qubitium commented 3 months ago

We've got you covered with AutoGPTQ-based 4-bit quants.

https://github.com/AutoGPTQ/AutoGPTQ/pull/683
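
For context, a rough sketch of the usual AutoGPTQ quantization flow, assuming an AutoGPTQ build that includes the GLM-4 support from the PR above; paths, calibration text, and quantization settings are placeholders, and a real run needs a few hundred calibration samples:

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_dir = "./glm-4-9b-chat"           # placeholder: path to the BF16 checkpoint
out_dir = "./glm-4-9b-chat-gptq-int4"   # placeholder: output directory

tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

# One sample only to show the expected format; use a real calibration set in practice.
examples = [tokenizer("你好,请介绍一下GLM-4。")]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(model_dir, quantize_config, trust_remote_code=True)
model.quantize(examples)                 # run the GPTQ calibration pass
model.save_quantized(out_dir, use_safetensors=True)
tokenizer.save_pretrained(out_dir)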

thomashooo commented 3 months ago

Is quantization supported on Mac? Running it gives:

(chatglm) thomas@bogon basic_demo % python3 trans_to_4bit.py
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/Users/thomas/Documents/Projects/AI/GLM-4/basic_demo/trans_to_bit.py", line 27, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/Users/thomas/miniconda3/envs/chatglm/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "/Users/thomas/miniconda3/envs/chatglm/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3030, in from_pretrained
    raise RuntimeError("No GPU found. A GPU is needed for quantization.")
RuntimeError: No GPU found. A GPU is needed for quantization.

Qubitium commented 3 months ago

@thomashooo AutoGPTQ inference does not support GPTQ quants on Mac. You would need to check llama.cpp to see whether they have a GPTQ kernel written for Metal (Apple).

shudct commented 1 month ago

Ah, not with vLLM: openai_api_server already uses vLLM as its backend by default, and vLLM currently cannot load this kind of 4-bit model. See this issue for details: vllm-project/vllm#4033

vLLM currently supports AWQ and GPTQ. Could GLM4-9B be provided in these two quantized formats?

Qubitium commented 1 month ago

vLLM currently supports AWQ and GPTQ. Could GLM4-9B be provided in these two quantized formats?

https://huggingface.co/ModelCloud
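
For reference (not from the thread), a minimal sketch of running a GPTQ quant with vLLM's offline API; the model path is a placeholder for whichever GPTQ checkpoint you download, and in practice you would format the prompt with the chat template first:

from vllm import LLM, SamplingParams

llm = LLM(model="path/to/glm-4-9b-chat-gptq-int4",  # placeholder checkpoint path
          quantization="gptq",
          trust_remote_code=True,
          max_model_len=8192)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["你好,请介绍一下你自己。"], sampling)
print(outputs[0].outputs[0].text)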

xiny0008 commented 1 month ago

I noticed that the README under base reports GPU memory usage and generation speed for both BF16 and INT4 precision, but only the BF16 model is currently provided. Will an official INT4 version of the model be released in the future?

Just load in 4bit; that is how we ran those tests.

Why does the quantized model end up on the CPU? Is it because my CUDA version is wrong? I'm using 12.0.

alexw994 commented 1 month ago

You can take a look at my PR: https://github.com/vllm-project/vllm/pull/7672