OpenGVLab / InternVL

[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal chat model approaching GPT-4o's performance.
https://internvl.readthedocs.io/en/latest/
MIT License

Garbled output when running image recognition #291

Closed: xiaotaozi121096 closed this 2 months ago

xiaotaozi121096 commented 5 months ago

I downloaded the model from https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5, installed transformers==4.37.2 as required, and ran the code from the Model Usage section, but the output is garbled. Is there a way to fix this?
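Since the model card pins the transformers version, a quick sanity check that the pinned version is the one actually imported can rule out a mismatched environment (a minimal sketch; the version string comes from the requirement above):

```python
# Verify the transformers version required by the InternVL-Chat-V1-5
# model card is the one actually loaded in this environment.
import transformers
assert transformers.__version__ == '4.37.2', f'got {transformers.__version__}'
```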


czczup commented 4 months ago

Could you paste the code you are running? I can't reproduce this problem on my end.

JeffRody commented 4 months ago

Same problem here, on an AMD GPU. The code I'm running is below:

```python
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from modelscope import AutoModel, AutoTokenizer
from decord import VideoReader, cpu
import numpy as np
import os
import math

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # calculate the existing image aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def load_image(image_file, input_size=448, max_num=6):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values

def split_model(model_name):
    device_map = {}
    world_size = torch.cuda.device_count()
    num_layers = {'InternVL2-8B': 32, 'InternVL2-26B': 48,
                  'InternVL2-40B': 60, 'InternVL2-Llama3-76B': 80}[model_name]
    # Since the first GPU will be used for ViT, treat it as half a GPU.
    num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
    num_layers_per_gpu = [num_layers_per_gpu] * world_size
    num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
    layer_cnt = 0
    for i, num_layer in enumerate(num_layers_per_gpu):
        for j in range(num_layer):
            device_map[f'language_model.model.layers.{layer_cnt}'] = i
            layer_cnt += 1
    device_map['vision_model'] = 0
    device_map['mlp1'] = 0
    device_map['language_model.model.tok_embeddings'] = 0
    device_map['language_model.model.embed_tokens'] = 0
    device_map['language_model.output'] = 0
    device_map['language_model.model.norm'] = 0
    device_map['language_model.lm_head'] = 0
    device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
    return device_map

if __name__ == '__main__':
    os.environ["HIP_VISIBLE_DEVICES"] = "1,2,3,4,5,6"

    model_path = '/home/wanglch/projects/InternVL/InternVL2-26B'
    image_path = '/home/wanglch/projects/InternVL/images/fp.jpg'
    device_map = split_model('InternVL2-26B')

    model = AutoModel.from_pretrained(
        model_path,
        torch_dtype=torch.float16,
        trust_remote_code=True,
        low_cpu_mem_usage=True,
        device_map=device_map
    ).eval()
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # set the max number of tiles in `max_num`
    pixel_values = load_image(image_path, max_num=6).to(torch.float16).cuda()
    generation_config = dict(
        num_beams=1,
        max_new_tokens=1024,
        do_sample=False,
    )

    # pure-text conversation
    question = '你是谁?'  # "Who are you?"
    response, history = model.chat(tokenizer, None, question, generation_config,
                                   history=None, return_history=True)
    print(f'User: {question}')
    print(f'Assistant: {response}')

    # single-image single-round conversation
    question = 'OCR这张图片的文字信息'  # "OCR the text in this image"
    response = model.chat(tokenizer, pixel_values, question, generation_config)
    print(f'User: {question}')
    print(f'Assistant: {response}')

    # single-image multi-round conversation
    question = 'OCR这张图片的文字信息'  # "OCR the text in this image"
    response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                                   history=None, return_history=True)
    print(f'User: {question}')
    print(f'Assistant: {response}')

    question = '购买方名称是什么?'  # "What is the buyer's name?"
    response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                                   history=history, return_history=True)
    print(f'User: {question}')
    print(f'Assistant: {response}')
```
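For reference, here is the arithmetic split_model performs for 'InternVL2-26B' with the six GPUs exposed via HIP_VISIBLE_DEVICES above (a worked sketch of the code's own logic, not an official recipe):

```python
import math

# InternVL2-26B has 48 LLM layers; GPU 0 also hosts the ViT, so it counts as half a GPU.
num_layers, world_size = 48, 6
per_gpu = math.ceil(num_layers / (world_size - 0.5))  # ceil(48 / 5.5) = 9
plan = [per_gpu] * world_size
plan[0] = math.ceil(plan[0] * 0.5)                    # GPU 0 gets ceil(9 / 2) = 5 layers
print(plan)  # [5, 9, 9, 9, 9, 9] -> layers 0-4 on GPU 0, 5-13 on GPU 1, ...
# Note: split_model then pins the last layer (47), embeddings, norm and lm_head back to GPU 0.
```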

iamyanyanyan commented 3 months ago

Has this been resolved?

czczup commented 3 months ago

> HIP_VISIBLE_DEVICES

Are you also running into this problem?

czczup commented 3 months ago

It looks like an AMD GPU is being used here. We're not sure what the exact problem is; it runs fine on Nvidia GPUs.
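For anyone debugging this, a quick way to confirm which backend the installed torch build is using (a minimal sketch; torch.version.hip is set on ROCm builds and None on CUDA builds):

```python
import torch

# On ROCm builds torch.version.hip is a version string and torch.version.cuda is None;
# on CUDA builds it is the other way around.
print('hip:', torch.version.hip, '| cuda:', torch.version.cuda)
print('device 0:', torch.cuda.get_device_name(0))
```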

iamyanyanyan commented 3 months ago

It was mainly caused by the model not being loaded correctly; I solved it by setting the model name. My command was: `lmdeploy serve gradio <my model path> --model-name internvl-internlm2 --model-format awq --server-name 0.0.0.0 --server-port 7865`. I was using a quantized model, so I added `--model-format awq`.
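The same chat-template fix should also apply when loading the model offline; a minimal sketch using lmdeploy's Python pipeline API (the ChatTemplateConfig/TurbomindEngineConfig arguments are not from this thread, so verify them against the lmdeploy docs; the model path is hypothetical):

```python
from lmdeploy import pipeline, ChatTemplateConfig, TurbomindEngineConfig

# Force the internvl-internlm2 chat template, matching the --model-name fix above;
# model_format='awq' corresponds to the quantized weights mentioned in the comment.
pipe = pipeline(
    '/path/to/InternVL-model',  # hypothetical local path
    backend_config=TurbomindEngineConfig(model_format='awq'),
    chat_template_config=ChatTemplateConfig(model_name='internvl-internlm2'),
)
print(pipe('你是谁?').text)  # "Who are you?"
```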


xiaotaozi121096 commented 3 months ago

> It looks like an AMD GPU is being used here. We're not sure what the exact problem is; it runs fine on Nvidia GPUs.

Yes, an AMD GPU.

xiaotaozi121096 commented 3 months ago

> It was mainly caused by the model not being loaded correctly; I solved it by setting the model name [...]

I'll download the model again and give it a try; I deleted it a few days ago.

czczup commented 2 months ago

> It was mainly caused by the model not being loaded correctly; I solved it by setting the model name [...]

OK, thanks for the feedback.