We did not train models of different sizes on UCF101; we only trained models of different sizes on FFS. You can find the pre-trained checkpoints here.
Hi, may I ask how many training iterations the released checkpoints were trained for? The plots in the paper illustrate training for 150k iterations, but in the code the maximum number of training steps is 1e6.
I don't remember exactly how many iterations. But according to other people's replications, training on 8 A100 (80G) GPUs for about one week reaches the values reported in the paper.
Hi there, could you report the FVD/IS results of the different-sized Latte models on UCF-101?
If possible, the pre-trained checkpoints would also be useful. Thanks!