baaivision / Emu3

Next-Token Prediction is All You Need
Apache License 2.0

pos_inputs is undefined in the sample code for "Use 🤗Transformers to run Emu3-Chat/Stage1 for vision-language understanding" #37

Closed qingchen177 closed 3 weeks ago

qingchen177 commented 3 weeks ago

When running the image description demo, in the snippet

# generate
outputs = model.generate(
    inputs.input_ids.to("cuda:0"),
    GENERATION_CONFIG,
    attention_mask=pos_inputs.attention_mask.to("cuda:0"),
)

pos_inputs is undefined.

As an aside, it would be helpful to document the resource requirements for deploying the model, e.g., the minimum GPU memory needed in GB.

The complete code follows (from "Use 🤗Transformers to run Emu3-Chat/Stage1 for vision-language understanding"):

from PIL import Image
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor, AutoModelForCausalLM
from transformers.generation.configuration_utils import GenerationConfig
import torch

from emu3.mllm.processing_emu3 import Emu3Processor

# model path
EMU_HUB = "BAAI/Emu3-Chat"
VQ_HUB = "BAAI/Emu3-VisionTokenizer"

# prepare model and processor
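# note: attn_implementation="flash_attention_2" requires the flash-attn package;
# remove that argument to fall back to the default attention implementation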
model = AutoModelForCausalLM.from_pretrained(
    EMU_HUB,
    device_map="cuda:0",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)

# used for Emu3-Chat
tokenizer = AutoTokenizer.from_pretrained(EMU_HUB, trust_remote_code=True, padding_side="left")
# used for Emu3-Stage1
# tokenizer = AutoTokenizer.from_pretrained(
#     EMU_HUB,
#     trust_remote_code=True,
#     chat_template="{image_prompt}{text_prompt}",
#     padding_side="left",
# )
image_processor = AutoImageProcessor.from_pretrained(VQ_HUB, trust_remote_code=True)
image_tokenizer = AutoModel.from_pretrained(VQ_HUB, device_map="cuda:0", trust_remote_code=True).eval()
processor = Emu3Processor(image_processor, image_tokenizer, tokenizer)

# prepare input
text = "Please describe the image"
image = Image.open("assets/demo.png")

# mode='U' selects vision-language understanding; the generation demo uses mode='G'
inputs = processor(
    text=text,
    image=image,
    mode='U',
    return_tensors="pt",
    padding="longest",
)

# prepare hyper parameters
GENERATION_CONFIG = GenerationConfig(
    pad_token_id=tokenizer.pad_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=1024,
)

# generate
outputs = model.generate(
    inputs.input_ids.to("cuda:0"),
    GENERATION_CONFIG,
    attention_mask=inputs.attention_mask.to("cuda:0"),
)

outputs = outputs[:, inputs.input_ids.shape[-1]:]
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
ryanzhangfan commented 3 weeks ago

pos_inputs is a typo; it should be inputs. Thanks for pointing it out; the code in README.md has been corrected. 40GB of GPU memory is sufficient to run generation for both chat and gen.
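
If you want to verify the memory footprint on your own hardware, the peak usage of the corrected example can be measured with PyTorch's CUDA allocator statistics. A minimal sketch (variable names taken from the script above):

import torch

# reset the allocator's peak-memory counter before generation
torch.cuda.reset_peak_memory_stats("cuda:0")

# the corrected call: attention_mask comes from `inputs`, not `pos_inputs`
outputs = model.generate(
    inputs.input_ids.to("cuda:0"),
    GENERATION_CONFIG,
    attention_mask=inputs.attention_mask.to("cuda:0"),
)

# report peak GPU memory in GiB
print(f"peak GPU memory: {torch.cuda.max_memory_allocated('cuda:0') / 1024**3:.1f} GiB")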