Closed lll-zy closed 1 year ago
I understand now. I didn't notice before that there were two tasks to choose from.
@lll-zy Great! You can refer to our paper for the task setting; feel free to ask any further questions.
Thank you for your reply! But I still don't understand how the test data is used in the visualization task. In the appendix of the paper, each ground-truth image corresponds to one text, so there are five texts and five outputs, right? However, each sample in the test dataset has five frames, and they all correspond to the same text. I don't understand what the ground truth and its description are during generation.
@lll-zy Hi, it is not correct that "each sample in the test dataset has five frames that correspond to the same text". If this happens, there must be a bug in the implementation. Could you please share a case with me?
Maybe I misunderstood. I downloaded the Flintstones dataset, which includes many .npy files, each with shape (5, 128, 128, 3). In flintstones_annotations_v1.json, each .npy corresponds to one text.
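For reference, here is a minimal sketch of what that shape means, fabricating an array in place of a real file (the filename and `np.load` call are what you would use on an actual sample; the array here is a stand-in):

```python
import numpy as np

# Stand-in for frames = np.load("some_sample.npy") on a real FlintstonesSV file.
# Each .npy stores 5 sampled video frames as a (5, 128, 128, 3) uint8 array.
frames = np.zeros((5, 128, 128, 3), dtype=np.uint8)

num_frames, height, width, channels = frames.shape
assert (num_frames, height, width, channels) == (5, 128, 128, 3)
# i.e. 5 frames, each a 128x128 RGB image
```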
@lll-zy Yep, that's true, and each text should contain 5 captions; each caption corresponds to a frame.
Are these 5 captions also in flintstones_annotations_v1.json? I can only find a 'description' field for each .npy. Also, the content of the 5 frames in each .npy looks similar.
@lll-zy That makes sense, because every frame is sampled from a video. The captions are in "flintstones_annotations_v1-0.json", as you can see in: https://github.com/xichenpan/ARLDM/blob/5b03fc4cf78d6509620506a6ca1bd799d6bd9ad4/data_script/flintstones_hdf5.py#L16 You may debug our code to figure out how the dataset is organized.
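As a hedged sketch (not the repo's code), the annotation file can be read and keyed by sample ID roughly like this; the exact schema shown here (a list of records with "globalID" and "description" fields, and the inline sample) is an assumption for illustration:

```python
import json

# Stand-in for: annotations = json.load(open("flintstones_annotations_v1-0.json"))
# Assumed schema: a list of records, one per .npy sample, keyed by "globalID".
annotations = json.loads(
    '[{"globalID": "s_01_e_01_shot_000000",'
    ' "description": "Fred is talking in the living room."}]'
)

# Map each sample's ID to its text so frames and captions can be paired up.
id_to_text = {a["globalID"]: a["description"] for a in annotations}
print(id_to_text["s_01_e_01_shot_000000"])
```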
I'll take a closer look. Thanks again for your answer.
First, thanks for your wonderful work. But I found that it directly outputs results without any guide text when I test the model I trained. Also, each test sample in the Flintstones dataset has 5 frames. I wonder how the test data are used, and where the guide text is.