airsxue opened this issue 4 months ago
Which packages do I need to install to run V2 on Linux with this demo? Thanks.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/DeepSeek-V2-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# `max_memory` should be set based on your devices
max_memory = {i: "75GB" for i in range(8)}
# `device_map` cannot be set to "auto"
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="sequential", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation="eager")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "Write a piece of quicksort code in C++"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
Same question here.
The Python libraries you need to install are: torch, transformers, and accelerate.

We haven't tested compatibility across different versions of these libraries in detail. For reference, here is the environment I used for testing; you don't need to match it exactly:

```
torch == 2.1.0
transformers == 4.39.3
accelerate == 0.29.3
```
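For reference, assuming a standard pip environment, the pinned versions above can be installed with `pip install torch==2.1.0 transformers==4.39.3 accelerate==0.29.3`. A minimal sketch to confirm the installed versions match the tested environment:

```python
# Print the installed versions of the three libraries the demo depends on,
# so they can be compared against the tested environment above.
import torch
import transformers
import accelerate

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)
```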