sangmin-git / LMC-Memory

Official PyTorch implementation of "Video Prediction Recalling Long-term Motion Context via Memory Alignment Learning" (CVPR 2021 Oral)

Output Channels #15

Open astraeus1258 opened 2 years ago

astraeus1258 commented 2 years ago

I tried my own dataset but found that the outputs are 1-channel pictures. Your paper uses the Human3.6M dataset and produces RGB pictures. I have confirmed that I changed the channel count to 3, but I still get 1-channel outputs and I don't know why. Could you please tell me why? It would mean a lot to me.

sangmin-git commented 2 years ago

It can be simply solved according to #11

Thanks!

astraeus1258 commented 2 years ago

Thank you so much for your reply!

astraeus1258 commented 2 years ago

> It can be simply solved according to #11
>
> Thanks!

Hi! I succeeded in converting the pictures to RGB, but I still have a problem, and I would appreciate it if you could reply! I want to use n pictures as input and predict (16 - n) frames, which means I should have 16 pictures in every folder, like the example you give:

```
movingmnist
├── train
│   ├── video_00000
│   │   ├── frame_00000.jpg
│   │   ├── ...
│   │   ├── frame_xxxxx.jpg
│   │   ├── ...
│   ├── video_xxxxx
```

So I should have many video folders, each containing 16 frames. But I found that it fails if I use only 16 frames per folder: it shows the error "StopIteration", and I don't know how to solve it. Should I change the code? Using 160 frames per folder works, but that doesn't meet my needs. Do you have any advice for me? I'm looking forward to your reply!
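A likely cause of the "StopIteration" here (an assumption about the loader, not the repo's actual code): many video dataloaders pick a random window start index from `range(num_frames - seq_len)`. That range is empty when a folder holds exactly `seq_len` frames, so the dataset yields zero samples and iterating it raises `StopIteration`. A minimal sketch of the off-by-one and its fix:

```python
# Hypothetical window-sampling logic (assumption; names are illustrative,
# not taken from the LMC-Memory repository).

def valid_starts_exclusive(num_frames, seq_len=16):
    # Buggy variant: with exactly seq_len frames, range(0) is empty,
    # so no 16-frame window is ever produced -> StopIteration downstream.
    return list(range(num_frames - seq_len))

def valid_starts_inclusive(num_frames, seq_len=16):
    # Fixed variant: "+ 1" admits start index 0 when num_frames == seq_len,
    # so a folder with exactly 16 frames yields one full window.
    return list(range(num_frames - seq_len + 1))

print(valid_starts_exclusive(16))        # [] -> no usable samples
print(valid_starts_inclusive(16))        # [0] -> one 16-frame window
print(len(valid_starts_exclusive(160)))  # 144 -> why 160-frame folders work
```

If the dataset class in your setup computes start indices this way, changing the range bound (or padding each folder by one frame) would let 16-frame folders load.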