YoadTew / zero-shot-image-to-text

Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic
262 stars 42 forks

How the entire dataset is converted into captions #15

Open shams2023 opened 1 year ago

shams2023 commented 1 year ago

Thank you very much for your work! How do you convert an image from an entire dataset into captions? I want to convert all the images in a dataset into captions, but the code only converts one image into a caption, so I'd like to know what I need to do to convert an entire dataset. I really hope to receive your guidance. Thank you again!

xiaozhi1233 commented 9 months ago

Hello, did you manage to run the single-image-to-caption conversion successfully? Could you give me some guidance? I would be very grateful for a reply!

shams2023 commented 9 months ago

Here is the code I use to caption a single image:

import torch
import argparse
import clip
from pathlib import Path
from model.ZeroCLIP import CLIPTextGenerator


def perplexity_score(text, lm_model, lm_tokenizer, device):
    # Score a caption's fluency with the language model: lower is more fluent.
    encodings = lm_tokenizer(f'{lm_tokenizer.bos_token + text}', return_tensors='pt')
    input_ids = encodings.input_ids.to(device)
    target_ids = input_ids.clone()

    outputs = lm_model(input_ids, labels=target_ids)
    log_likelihood = outputs[0]  # mean cross-entropy loss over the tokens
    ll = log_likelihood.item()

    return ll
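# Optional helper, not in the original repo: outputs[0] above is the mean
# cross-entropy loss rather than the perplexity itself. Ranking captions by it
# is equivalent, but a true perplexity would be exp(loss):
import math

def true_perplexity(text, lm_model, lm_tokenizer, device):
    # perplexity = exp(mean negative log-likelihood)
    return math.exp(perplexity_score(text, lm_model, lm_tokenizer, device))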

class ImageCaptioner:
    def __init__(self):
        self.args = self.get_args()
        self.args.reset_context_delta = True
        self.text_generator = CLIPTextGenerator(**vars(self.args))

    def caption_image(self, image, cond_text='Image of a', beam_size=5, end_factor=1.01, max_seq_length=15, ce_loss_scale=0.2):
        # image: path to the test image
        self.args.cond_text = cond_text
        self.text_generator.end_factor = end_factor
        self.text_generator.target_seq_length = max_seq_length
        self.text_generator.ce_scale = ce_loss_scale

        image_features = self.text_generator.get_img_feature([str(image)], None)
        # a (1, 512) tensor

        captions = self.text_generator.run(image_features, self.args.cond_text, beam_size=beam_size)

        # CLIP score: pick the caption whose text embedding best matches the image embedding
        encoded_captions = [self.text_generator.clip.encode_text(clip.tokenize(c).to(self.text_generator.device)) for c in captions]
        encoded_captions = [x / x.norm(dim=-1, keepdim=True) for x in encoded_captions]
        best_clip_idx = (torch.cat(encoded_captions) @ image_features.t()).squeeze().argmax().item()

        # Perplexity score: pick the most fluent caption under the language model
        ppl_scores = [perplexity_score(x, self.text_generator.lm_model, self.text_generator.lm_tokenizer, self.text_generator.device) for x in captions]
        best_ppl_index = torch.tensor(ppl_scores).argmin().item()

        best_clip_caption = self.args.cond_text + captions[best_clip_idx]
        best_mixed = self.args.cond_text + captions[0]  # beam search already ranks by the mixed objective
        best_ppl_caption = self.args.cond_text + captions[best_ppl_index]

        return f'Best CLIP: {best_clip_caption}\nBest fluency: {best_ppl_caption}\nBest mixed: {best_mixed}'

    def get_args(self):
        parser = argparse.ArgumentParser()

        parser.add_argument("--seed", type=int, default=0)
        parser.add_argument("--lm_model", type=str, default="gpt-2", help="gpt-2 or gpt-neo")
        parser.add_argument("--clip_checkpoints", type=str,
                            default="zero-shot-image-to-text-main_3/clip_checkpoints", help="path to CLIP")
        parser.add_argument("--target_seq_length", type=int, default=20)
        parser.add_argument("--cond_text", type=str, default="Image of a")
        parser.add_argument("--reset_context_delta", action="store_true",
                            help="Should we reset the context at each token gen")
        parser.add_argument("--num_iterations", type=int, default=5)
        parser.add_argument("--clip_loss_temperature", type=float, default=0.01)
        parser.add_argument("--clip_scale", type=float, default=1)
        parser.add_argument("--ce_scale", type=float, default=0.2)
        parser.add_argument("--stepsize", type=float, default=0.3)
        parser.add_argument("--grad_norm_factor", type=float, default=0.9)
        parser.add_argument("--fusion_factor", type=float, default=0.99)
        parser.add_argument("--repetition_penalty", type=float, default=1)
        parser.add_argument("--end_token", type=str, default=".", help="Token to end text")
        parser.add_argument("--end_factor", type=float, default=1.01, help="Factor to increase end_token")
        parser.add_argument("--forbidden_factor", type=float, default=20, help="Factor to decrease forbidden tokens")
        parser.add_argument("--beam_size", type=int, default=5)

        # parse an empty argument list so the defaults are used (we're not running from the CLI)
        args = parser.parse_args([])
        return args

Usage:

# ImageCaptioner is the class defined above; instantiate a captioner once
captioner = ImageCaptioner()

# path to the test image
image_path = "D:/20.jpg"

# call the method with the image path to get the captions
caption = captioner.caption_image(image_path)

# print the captions
print(caption)
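To get back to the original question: once the single-image script above runs, captioning a whole dataset is just a loop over the image files. A minimal sketch, assuming all images sit in one folder (the folder path, file extension, and output file name below are only examples, not part of the repo):

from pathlib import Path

captioner = ImageCaptioner()

image_dir = Path("D:/my_dataset")  # hypothetical folder holding the dataset images
results = {}

for image_path in sorted(image_dir.glob("*.jpg")):
    # reuse the single-image method for every file in the folder
    results[image_path.name] = captioner.caption_image(image_path)
    print(image_path.name, results[image_path.name])

# write all captions to one file, one block per image
with open("captions.txt", "w", encoding="utf-8") as f:
    for name, caption in results.items():
        f.write(f"{name}\n{caption}\n\n")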

xiaozhi1233 commented 9 months ago

Thank you very much!!

shams2023 commented 9 months ago

Are the captions you're getting any good? We can exchange ideas, brother; as it happens, I've also been busy with this task lately!