kimtaehyeong opened this issue 8 hours ago
Hi, thanks for your interest in our work! I ran into a similar issue on my side and am still trying to figure out why (TBH I am pretty new to Hugging Face). Will let you know when I find a solution.
If you would like to run with lower GPU memory, I recommend using the GitHub repo for now. You can also enable chopped inference by setting use_chop=True
here. This will use a sliding window to chop the input image into patches and fuse the restored patches at the end (might be a bit slower). Hope this helps!
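For anyone wondering what the chop option roughly does, here is a minimal sketch of sliding-window (tiled) inference. This is a hypothetical helper, not the repo's actual implementation; the function name and the `tile`, `overlap`, and `scale` defaults are illustrative only:

```python
import torch

def chop_forward(model, x, tile=256, overlap=32, scale=1):
    """Tiled inference sketch: split the input into overlapping patches,
    restore each patch, then average the overlapping regions back together.
    Assumes `model` maps (B, C, h, w) -> (B, C, h*scale, w*scale)."""
    _, _, h, w = x.shape
    out = torch.zeros(x.shape[0], x.shape[1], h * scale, w * scale, device=x.device)
    weight = torch.zeros_like(out)

    stride = tile - overlap
    for top in range(0, h, stride):
        for left in range(0, w, stride):
            bottom = min(top + tile, h)
            right = min(left + tile, w)
            patch = x[:, :, top:bottom, left:right]
            with torch.no_grad():
                restored = model(patch)
            # Accumulate the restored patch and count overlaps for averaging.
            out[:, :, top * scale:bottom * scale, left * scale:right * scale] += restored
            weight[:, :, top * scale:bottom * scale, left * scale:right * scale] += 1
    return out / weight
```

Only one patch is on the GPU at a time, which is why peak memory stays low at the cost of extra forward passes.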
When I find out, I will share it with you. Thank you!
That would be great, thanks!
Hello, thank you for your great research. I have a question about the inference process.
Currently, I am testing inference in two ways.
With method 2, we confirmed that inference runs even on a GPU with low specifications. However, with method 1, loading the model alone consumes about 10 GB of memory, and inference requires even more. Method 2 only required about 3~5 GB for inference. May I know why?
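For reference, this is how I compared peak GPU memory in the two cases (a minimal sketch assuming a PyTorch model on CUDA; `model` and `inputs` are placeholders for whichever loading path is being tested):

```python
import torch

# Reset the peak-memory counter, run one inference pass, then read the peak.
torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    output = model(inputs)
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak GPU memory: {peak_gb:.2f} GB")
```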
Thank you!