Closed imperator-maximus closed 3 months ago
I tested it on GPU here:
How can it run on the GPU? Can you explain it in detail?
sure
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
vit_matte_model.model.to(device)
and
with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
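Putting those two fragments together, the pattern is: pick the device once, move the model's weights there, then move every input tensor to the same device before inference. A minimal self-contained sketch, using a plain `nn.Linear` as a hypothetical stand-in for `vit_matte_model.model` (any `nn.Module` behaves the same way):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for vit_matte_model.model; the .to(device)
# call works identically for any nn.Module.
model = nn.Linear(4, 2)

# Use the GPU when present, otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)  # moves parameters and buffers in place

inputs = {'pixel_values': torch.randn(1, 4)}
with torch.no_grad():
    # Every input tensor must live on the same device as the model,
    # otherwise PyTorch raises a device-mismatch RuntimeError.
    inputs = {k: v.to(device) for k, v in inputs.items()}
    out = model(inputs['pixel_values'])

print(out.shape)  # torch.Size([1, 2])
```

Note that `.to(device)` mutates a module in place but returns a *new* tensor for tensors, which is why the dict comprehension reassigns `inputs`.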
sure
Thank you, I ran it successfully. I also realized that large images can exceed the available VRAM; I will look for a solution to this.
Maybe run at 1024x1024 max and use the guided filter (which you already have) for high-res output? Here is the code of a node I made: https://github.com/flyingdogsoftware/gyre_for_comfyui/blob/master/inspyrenet_pipeline.py — it uses Inspyrenet for background removal, so it may be a similar situation here.
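The "1024 max" idea is just: if either side of the image is larger than 1024, downscale it (keeping the aspect ratio) before matting, so VRAM use stays bounded. A hedged sketch of that cap, assuming BCHW float tensors as ComfyUI-style nodes commonly pass around (the function name and threshold are illustrative, not from the repo above):

```python
import torch
import torch.nn.functional as F

def downscale_to_max(image: torch.Tensor, max_side: int = 1024) -> torch.Tensor:
    """Downscale a BCHW image so its longest side is at most max_side,
    preserving the aspect ratio. Images already small enough pass through."""
    _, _, h, w = image.shape
    scale = max_side / max(h, w)
    if scale >= 1.0:
        return image  # already within the cap
    new_h, new_w = round(h * scale), round(w * scale)
    return F.interpolate(image, size=(new_h, new_w),
                         mode='bilinear', align_corners=False)

img = torch.rand(1, 3, 2048, 1536)
small = downscale_to_max(img)
print(small.shape)  # torch.Size([1, 3, 1024, 768])
```

The model then runs on the small image, and the resulting matte is brought back to full resolution afterwards (via guided filter or a plain resize).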
Thank you. The node has been updated. BTW: I tested the edge detail after down-sampling; there is no difference between using the guided filter and resizing directly. : )
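For reference, "resizing directly" here just means bilinearly upscaling the low-resolution alpha matte back to the original image size. A minimal sketch with an illustrative matte tensor (the shapes are assumptions, not values from the node):

```python
import torch
import torch.nn.functional as F

# Hypothetical low-res alpha matte (B, 1, H, W) produced at the 1024-max scale.
matte_lo = torch.rand(1, 1, 512, 384)

# Resize directly back to the original resolution with bilinear interpolation;
# values stay in [0, 1] since bilinear output never exceeds the input range.
matte_hi = F.interpolate(matte_lo, size=(2048, 1536),
                         mode='bilinear', align_corners=False)
print(matte_hi.shape)  # torch.Size([1, 1, 2048, 1536])
```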
thank you - works great 👍
I tested it on GPU here: https://github.com/chflame163/ComfyUI_LayerStyle/blob/9cdac00ee1f2d75a04780462e7904d4ac04e1382/py/imagefunc.py#L1417
and it was 5 times faster. Can you change it (also for the input-tensor line below it)? Thank you!