[Issue]
When loading Baichuan2-13B with dtype='bf16_int4', the total load time is about 30 seconds.
[Proposal]
Cache the quantized INT4 model to disk on the first load, and reload it without re-quantizing on subsequent loads.
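A minimal sketch of the first-load / cached-load split, using plain PyTorch tensors as a stand-in for the real weights. The INT4 packing and the cache path below are illustrative assumptions, not xFasterTransformer's internal format or API; a real implementation would serialize xFT's own quantized buffers next to the converted model dir.

import os
import time
import torch

CACHE_FILE = "/tmp/int4_weight_cache.pt"  # hypothetical cache location

def quantize_int4(w: torch.Tensor):
    # Toy symmetric per-tensor INT4 quantization; stands in for the expensive load-time step.
    scale = w.abs().max() / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale

def load_or_quantize(w: torch.Tensor):
    if os.path.exists(CACHE_FILE):
        # Warm path: read the already-quantized tensors back from disk, no quantization.
        blob = torch.load(CACHE_FILE)
        return blob["q"], blob["scale"]
    # Cold path (first load): quantize once, then persist the result for the next run.
    q, scale = quantize_int4(w)
    torch.save({"q": q, "scale": scale}, CACHE_FILE)
    return q, scale

w = torch.randn(4096, 4096)  # stand-in for one weight matrix
start = time.time()
q, scale = load_or_quantize(w)
print("load/quantize time:", time.time() - start)

On the second run only the warm path executes (a plain disk read), which is the behavior the proposal wants for the real INT4 weights.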
[Observation]
condition:
on 1S-SPR9468 (Quadrant, HBM Flat, use HBM node only, 48C), Model = Baichuan2-13B (ModelScope v2.0)
result:
load time: 31.70633363723755 sec
script:
import xfastertransformer
from transformers import AutoTokenizer, TextStreamer
# MODEL_PATH is the converted xFasterTransformer model dir; TOKEN_PATH is the original ModelScope model dir (used for the tokenizer).
MODEL_PATH="/mnt/data5/modelscope-xft_Baichuan2-13B-Chat/"
TOKEN_PATH="/mnt/data5/modelscope_Baichuan2-13B-Chat/"
f_prompt = '/mnt/data5/datasets/prompt_1k.txt'
#INPUT_PROMPT = "Once upon a time, there existed a little girl who liked to have adventures."
prompt = ""
with open(f_prompt, 'r') as h:
    for l in h.readlines():
        prompt += l.rstrip('\r\t\n ')
from typing import Tuple, List
import torch
def build_inputs_baichuan(tokenizer, query: List[str], padding, history: List[Tuple[str, str]] = []):
    inputs = tokenizer(query, return_tensors="pt", padding=padding).input_ids
    print(inputs, inputs.shape)
    # Baichuan2 chat markers: 195 is the user token id, 196 the assistant token id.
    suffix = torch.tensor([[196]])
    prefix = torch.tensor([[195]])
    inputs = torch.cat((prefix.expand((inputs.shape[0], 1)), inputs, suffix.expand(inputs.shape[0], 1)), dim=1)
    return inputs
import time
start = time.time()
tokenizer = AutoTokenizer.from_pretrained(TOKEN_PATH, use_fast=False, padding_side="left", trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True, skip_prompt=False)
input_ids = build_inputs_baichuan(tokenizer, prompt, padding=True)
print(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>")
print(f'input prompt = {input_ids.shape[-1]} tokens')
#input_ids = tokenizer(INPUT_PROMPT, return_tensors="pt", padding=False).input_ids
# INT4 quantization happens during this load; this is the step the proposal wants to cache.
model = xfastertransformer.AutoModel.from_pretrained(MODEL_PATH, dtype='bf16_int4')
end = time.time()
print("load time: ",end-start)
model.config(max_length=1024)
model.input(input_ids)
start = time.time()
output = ""
count = 0
while not model.is_done():
    next_tokens = model.forward()
    if count == 0:
        first_dt = time.time() - start
    res = tokenizer.decode(next_tokens[0])
    output += res
    count += 1
    #print(res)
print(output)
end = time.time()
print(f'inference time: {end-start} sec')
print(f'1st token latency: {first_dt} sec')
avg_dt = (end-start - first_dt)/(count-1)
print(f'average next-token latency: {avg_dt} sec')
print(f'output tokens = {count}')
generated_ids = model.finalize()
prompt: prompt_medical_1k.txt