xichenpan / ARLDM

Official PyTorch implementation of "Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models"
https://arxiv.org/abs/2211.10950
MIT License

Is the generation text guided? #18

Closed lll-zy closed 1 year ago

lll-zy commented 1 year ago

First, thanks for your wonderful work. However, when I test the model I trained, it directly outputs results without any guiding text. Also, each test sample in the Flintstones dataset has 5 frames. I wonder how the test data are used, and where the guiding text is.

lll-zy commented 1 year ago

I've figured it out now. I hadn't noticed before that there are two tasks to choose from.

xichenpan commented 1 year ago

@lll-zy Great! You can refer to our paper for the task settings; feel free to raise any further questions.

lll-zy commented 1 year ago

Thank you for your reply! But I still don't understand how the test data are used in the visualization task. In the appendix of the paper, each ground-truth image corresponds to one text, and there are five texts and five outputs, right? However, each sample in the test dataset has five frames, and they all correspond to the same text. I don't understand what the ground truth and its description are during generation.

xichenpan commented 1 year ago

@lll-zy Hi, it is not correct that "each sample in the test dataset has five frames that correspond to the same text". If that happens, there must be a bug in the implementation. Could you please share a concrete case with me?

lll-zy commented 1 year ago

Maybe I misunderstood. I downloaded the Flintstones dataset, which includes many .npy files, each with shape (5, 128, 128, 3). In flintstones_annotations_v1-0.json, each .npy corresponds to one text.
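
For reference, this is roughly how I am inspecting the files (the globalID in the file name is made up, and the `globalID`/`description` field names are just what I see in my copy of the annotations):

```python
import json
import numpy as np

# One clip: the .npy file stem is its globalID (example ID is made up).
frames = np.load("s_01_e_01_shot_000000_000074.npy")
print(frames.shape)  # (5, 128, 128, 3): 5 sampled frames from one clip

# Peek at the annotation entries; I only see one 'description' per globalID.
with open("flintstones_annotations_v1-0.json") as f:
    annotations = json.load(f)
print(annotations[0].keys())
print(annotations[0]["globalID"], "->", annotations[0]["description"])
```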

xichenpan commented 1 year ago

@lll-zy Yep, that's true, and the text for each sample should contain 5 captions, with each caption corresponding to one frame.

lll-zy commented 1 year ago

Are these 5 captions also in flintstones_annotations_v1-0.json? I can only find one 'description' per .npy. Also, the content of the 5 frames in each .npy looks very similar (screenshot attached).

xichenpan commented 1 year ago

@lll-zy That makes sense, because those frames are all sampled from the same video clip. The captions are in "flintstones_annotations_v1-0.json", as you can see in: https://github.com/xichenpan/ARLDM/blob/5b03fc4cf78d6509620506a6ca1bd799d6bd9ad4/data_script/flintstones_hdf5.py#L16 You may step through our code to see how the dataset is organized.
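
In short: each clip (one .npy / globalID) has a single description, and a story sample stitches 5 clips together, so it ends up with 5 different captions, one per frame. Here is a rough sketch of the lookup (the `globalID`/`description` field names and the .npy folder name are from memory of the dataset layout, not copied from the script, so adjust as needed):

```python
import json
import numpy as np

# globalID -> single description, i.e. one caption per clip.
with open("flintstones_annotations_v1-0.json") as f:
    annotations = json.load(f)
descriptions = {a["globalID"]: a["description"] for a in annotations}

# A story is a sequence of 5 clip IDs (how they are chained together is
# handled in flintstones_hdf5.py); the IDs below are made up for illustration.
story_ids = ["s_01_e_01_shot_000000_000074", "s_01_e_01_shot_000075_000149"]

for gid in story_ids:
    frames = np.load(f"video_frames_sampled/{gid}.npy")  # assumed folder name
    frame = frames[np.random.randint(len(frames))]       # one of the 5 sampled frames
    print(gid, frame.shape, descriptions[gid][:60])      # each clip has its own caption
```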

lll-zy commented 1 year ago

I'll take a closer look. Thanks again for your answer.