horseee / DeepCache

[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
https://horseee.github.io/Diffusion_DeepCache/
Apache License 2.0

[issue] Compatibility with torch.compile() if torch >= 2.0 #26

Open jyoung105 opened 4 months ago

jyoung105 commented 4 months ago

Thanks for great work once again.

I would like to ask whether DeepCache can work with torch.compile().

If it can, it might run even faster. I got the error below when I combined the two:

Unsupported: class property UNet2DConditionModel getset_descriptor

from user code:
   File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 211, in __getattr__
    is_in_config = "_internal_dict" in self.__dict__ and hasattr(self.__dict__["_internal_dict"], name)

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

horseee commented 4 months ago

Hi @jyoung105,

DeepCache is a dynamic model inference algorithm, which makes it incompatible with torch.compile(). One possible solution (though I'm not certain) is to split the pipeline into two models: one for full network inference and one for partial network inference (the shallow one). That way, both models are static and can each be compiled with torch.compile().
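A minimal sketch of this idea, using toy modules rather than a real UNet (the class and variable names here are illustrative assumptions, not DeepCache's or diffusers' actual API): the dynamic "run deep blocks or reuse the cache?" branch is lifted out of the model, leaving two modules with static control flow that can each be compiled separately. The `backend="eager"` argument is used only so the sketch runs without a full compiler toolchain.

```python
import torch
import torch.nn as nn

class FullPath(nn.Module):
    """Toy stand-in for a full denoising step: runs every block and
    also returns the mid-level feature that DeepCache would reuse."""
    def __init__(self, dim=8):
        super().__init__()
        self.down = nn.Linear(dim, dim)
        self.mid = nn.Linear(dim, dim)
        self.up = nn.Linear(dim, dim)

    def forward(self, x):
        h = torch.relu(self.down(x))
        cache = torch.relu(self.mid(h))  # feature reused on cached steps
        return self.up(cache + h), cache

class ShallowPath(nn.Module):
    """Toy stand-in for a cached step: runs only the outer blocks and
    consumes the cached mid-level feature instead of recomputing it."""
    def __init__(self, dim=8):
        super().__init__()
        self.down = nn.Linear(dim, dim)
        self.up = nn.Linear(dim, dim)

    def forward(self, x, cache):
        h = torch.relu(self.down(x))
        return self.up(cache + h)

# Each module is static on its own, so each can be compiled separately;
# the scheduler loop outside decides which one to call per timestep.
full = torch.compile(FullPath(), backend="eager")
shallow = torch.compile(ShallowPath(), backend="eager")

x = torch.randn(2, 8)
out_full, cache = full(x)        # "full" step: compute and store cache
out_shallow = shallow(x, cache)  # subsequent steps: reuse the cache
```

In the real setting both paths would share one UNet's weights; here they are independent toys just to show that the per-step branch moves outside the compiled graphs.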

jyoung105 commented 4 months ago

I think what you describe is done in the project 'onediff'. They enable both torch.compile() and your awesome project, and they also ask you to use their framework, oneflow, to handle memory. Thanks for your kindness!

HuiZhang0812 commented 3 months ago

Hello, do you have any solution now?

jyoung105 commented 3 months ago

Hi, I think you should check onediff (https://github.com/siliconflow/onediff). They split the code for the compiler and DeepCache and make them compatible with each other. I haven't read the code in detail because I've been busy recently, but if you need, I will check it and share how the code works.