THUDM / CodeGeeX2

CodeGeeX2: A More Powerful Multilingual Code Generation Model
https://codegeex.cn
Apache License 2.0

Can multi-GPU, fastllm, and quantization not be used at the same time? #78


wozwdaqian commented 11 months ago

Judging from this code, is it the case that multi-GPU, fastllm, and quantization cannot be used at the same time?

import torch
from transformers import AutoModel, AutoTokenizer

# The surrounding demo script also defines (via guarded imports of the
# optional backends): enable_chatglm_cpp / chatglm_cpp, enable_fastllm /
# llm (fastllm_pytools), and enable_multiple_gpus / load_model_on_gpus.

def get_model(args):
    # Pick a single device for the non-multi-GPU paths.
    if not args.cpu:
        if torch.cuda.is_available():
            device = f"cuda:{args.gpu}"
        elif torch.backends.mps.is_built():
            device = "mps"
        else:
            device = "cpu"
    else:
        device = "cpu"

    tokenizer = AutoTokenizer.from_pretrained(args.model_path, trust_remote_code=True)

    if args.n_gpus > 1 and enable_multiple_gpus:
        # Path 1: to enable multi-GPU model loading, set "n_gpus" to the
        # desired number of GPUs. fastllm and quantize are ignored here.
        print(f"Running on {args.n_gpus} GPUs.")
        model = load_model_on_gpus(args.model_path, num_gpus=args.n_gpus)
        model = model.eval()
    elif enable_chatglm_cpp and args.chatglm_cpp:
        # Path 2: chatglm-cpp backend; quantization maps to its q4_0/q5_0/q8_0 dtypes.
        print("Using chatglm-cpp to improve performance")
        dtype = "f16"
        if args.quantize in [4, 5, 8]:
            dtype = f"q{args.quantize}_0"
        model = chatglm_cpp.Pipeline(args.model_path, dtype=dtype)
    else:
        # Path 3: fastllm conversion, or plain transformers (optionally quantized).
        model = AutoModel.from_pretrained(args.model_path, trust_remote_code=True)
        model = model.eval()

        if enable_fastllm and args.fastllm:
            print("fastllm enabled.")
            model = model.half()
            # Only the single device chosen above is passed to fastllm here.
            llm.set_device_map(device)
            if args.quantize in [4, 8]:
                model = llm.from_hf(model, dtype=f"int{args.quantize}")
            else:
                model = llm.from_hf(model, dtype="float16")
        else:
            print("chatglm-cpp and fastllm not installed, using transformers.")
            if args.quantize in [4, 8]:
                print(f"Model is quantized to INT{args.quantize} format.")
                model = model.half().quantize(args.quantize)
            model = model.to(device)

    return tokenizer, model
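
For example, with a hypothetical invocation like the one below (the script name and flag spellings are guesses based on the args fields above), the n_gpus > 1 branch wins the if/elif chain, so the fastllm and quantize settings never take effect:

# Hypothetical command line: the multi-GPU branch is taken first,
# so --fastllm and --quantize are silently ignored.
python run_demo.py --model-path THUDM/codegeex2-6b --n-gpus 2 --fastllm --quantize 4
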
Stanislas0 commented 11 months ago

fastllm supports multi-GPU inference, but it hasn't been tested here yet. You can refer to: https://github.com/ztxz16/fastllm#fastllm_pytools%E4%B8%AD%E4%BD%BF%E7%94%A8%E5%A4%9A%E5%8D%A1%E9%83%A8%E7%BD%B2
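
For what it's worth, here is a minimal untested sketch of combining the two (the device list and model path are assumptions; llm is fastllm_pytools.llm as in the snippet above). Per the linked README section, set_device_map must be called before the model is converted and accepts a single device, a list, or a ratio dict, while quantization is requested through the dtype passed to from_hf:

from fastllm_pytools import llm
from transformers import AutoModel

model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
model = model.eval().half()

# Spread the converted model across two GPUs; a ratio dict such as
# {"cuda:0": 10, "cuda:1": 5} is also accepted per the fastllm README.
llm.set_device_map(["cuda:0", "cuda:1"])

# INT4 quantization is requested at conversion time, mirroring the
# llm.from_hf(model, dtype=f"int{args.quantize}") call in the demo code.
model = llm.from_hf(model, dtype="int4")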