Thank you very much for your open-source contribution; it has been very helpful for my current work!
However, I ran into a problem. In version 1.2, when running inference for the 93x480p version on an A800 80G GPU, a single A800's memory is not enough for the inference task, and two A800s are needed to run inference together. Is this expected?
If it is, could you please consider doing some memory-optimization work in the future? That would be very helpful for secondary development.
Thank you again for your open source work!