[Closed — Youngtrue closed this issue 2 years ago]
For the 16x model, we did not train a new model due to a lack of training data. Instead, we cascade (8x, 2x) models to form a 16x model, and (8x, 8x) models to form a 64x model. Note that it is still entirely possible to train a complete end-to-end 64x interpolation model; we simply did not do so because we do not have enough data.
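To illustrate the cascading idea, here is a minimal sketch. The `fake_interpolator` below is a hypothetical stand-in (a linear blend on scalars) for a trained 8x or 2x network, and `cascade` simply reapplies each model to every consecutive pair of frames; the function and variable names are illustrative assumptions, not this repository's actual API.

```python
from typing import Callable, List

Frame = float  # stand-in for an image tensor; hypothetical simplification


def fake_interpolator(factor: int) -> Callable[[Frame, Frame], List[Frame]]:
    """Return a dummy k-x 'model' that linearly blends two frames,
    producing (factor - 1) intermediates. A real model would be a
    trained network such as the 8x or 2x interpolation model."""
    def interp(a: Frame, b: Frame) -> List[Frame]:
        return [a + (b - a) * i / factor for i in range(1, factor)]
    return interp


def cascade(frames: List[Frame],
            models: List[Callable[[Frame, Frame], List[Frame]]]) -> List[Frame]:
    """Apply each interpolation model in turn to every consecutive
    pair of frames; cascading (8x, 2x) yields 16x overall."""
    for model in models:
        out = [frames[0]]
        for a, b in zip(frames, frames[1:]):
            out.extend(model(a, b))  # insert intermediates between a and b
            out.append(b)
        frames = out
    return frames


# Two input frames; cascading 8x then 2x gives 16 intervals -> 17 frames.
result = cascade([0.0, 1.0], [fake_interpolator(8), fake_interpolator(2)])
print(len(result))  # 17
```

Cascading (8x, 8x) on the same two frames yields 64 intervals (65 frames), matching the 64x case described above.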
Hi, could you please share the pretrained model? The current links seem to be unavailable. Thanks a lot!
Hi Tarun,
It is remarkable that your inference speed at 16x or higher factors is faster than Super SloMo's while still performing well. Will you publish the trained models for 16x and higher and add support for them afterward?