It should be supported. Please use the Load Checkpoint - OneDiff node to replace the Load Checkpoint node.
Can this be supported with diffusers? I hit the error below when using SDXL ControlNet inpaint:
import torch
from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel
from onediff.infer_compiler import oneflow_compile
# IPAdapterPlusXL comes from the tencent-ailab/IP-Adapter repo
from ip_adapter import IPAdapterPlusXL

device = "cuda:0"

# controlnet_path, image_encoder_path, ip_ckpt and the input images/prompt
# below are defined elsewhere in the script
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    'stablediffusionapi/dreamshaper-xl',
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe = pipe.to(device)

# compile the UNet and ControlNet with OneDiff
pipe.unet = oneflow_compile(pipe.unet)
pipe.controlnet = oneflow_compile(pipe.controlnet)

ip_model = IPAdapterPlusXL(pipe, image_encoder_path, ip_ckpt, device, num_tokens=16)
images = ip_model.generate(
    pil_image=prompt_image,
    num_samples=1,
    num_inference_steps=30,
    seed=42,
    image=init_image,
    mask_image=mask_image,
    control_image=control_image,
    strength=1.0,
    controlnet_conditioning_scale=0.5,  # 0.8
    prompt=prompt,
    scale=0.0,
)
......
File /usr/local/lib/python3.8/dist-packages/infer_compiler_registry/register_diffusers/unet_2d_condition_oflow.py:358, in UNet2DConditionModel.forward(self, sample, timestep, encoder_hidden_states, class_labels, timestep_cond, attention_mask, cross_attention_kwargs, added_cond_kwargs, down_block_additional_residuals, mid_block_additional_residual, down_intrablock_additional_residuals, encoder_attention_mask, return_dict)
353 if is_adapter and len(down_intrablock_additional_residuals) > 0:
354 additional_residuals[
355 "additional_residuals"
356 ] = down_intrablock_additional_residuals.pop(0)
--> 358 sample, res_samples = downsample_block(
359 hidden_states=sample,
360 temb=emb,
361 encoder_hidden_states=encoder_hidden_states,
362 attention_mask=attention_mask,
363 cross_attention_kwargs=cross_attention_kwargs,
364 encoder_attention_mask=encoder_attention_mask,
365 **additional_residuals,
366 )
367 else:
368 if diffusers_version < diffusers_0210_v:
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:188, in ProxyModule.__call__(self, *args, **kwargs)
178 # NOTE: The original nn.Module's __call__ method is ignored, which means
179 # that hooks of nn.Modules are ignored. It is not recommended
180 # to use hooks of nn.Module in nn.Graph for the moment.
181 with graph_build_util.DebugScopeContext(
182 self.to(GraphModule)._debug_min_s_level,
183 self.to(GraphModule)._debug_max_v_level,
(...)
186 self.to(GraphModule)._debug_only_user_py_stack,
187 ):
--> 188 result = self.__block_forward(*args, **kwargs)
190 outputs = ()
191 if not (type(result) is tuple or type(result) is list):
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:238, in ProxyModule.__block_forward(self, *args, **kwargs)
232 with self.to(GraphModule).scope_context():
233 # "Instance method __func__ is the function object", "when an instance method object is called,
234 # the underlying function __func__ is called, inserting the class instance __self__ in front of
235 # the argument list."
236 # Reference: https://docs.python.org/3/reference/datamodel.html
237 unbound_forward_of_module_instance = self.to(Module).forward.__func__
--> 238 result = unbound_forward_of_module_instance(self, *args, **kwargs)
239 self.to(GraphModule)._is_executing_forward = False
240 return result
File /usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_blocks.py:1160, in CrossAttnDownBlock2D.forward(self, hidden_states, temb, encoder_hidden_states, attention_mask, cross_attention_kwargs, encoder_attention_mask, additional_residuals)
1158 else:
1159 hidden_states = resnet(hidden_states, temb, scale=lora_scale)
-> 1160 hidden_states = attn(
1161 hidden_states,
1162 encoder_hidden_states=encoder_hidden_states,
1163 cross_attention_kwargs=cross_attention_kwargs,
1164 attention_mask=attention_mask,
1165 encoder_attention_mask=encoder_attention_mask,
1166 return_dict=False,
1167 )[0]
1169 # apply additional residuals to the output of the last pair of resnet and attention blocks
1170 if i == len(blocks) - 1 and additional_residuals is not None:
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:188, in ProxyModule.__call__(self, *args, **kwargs)
178 # NOTE: The original nn.Module's __call__ method is ignored, which means
179 # that hooks of nn.Modules are ignored. It is not recommended
180 # to use hooks of nn.Module in nn.Graph for the moment.
181 with graph_build_util.DebugScopeContext(
182 self.to(GraphModule)._debug_min_s_level,
183 self.to(GraphModule)._debug_max_v_level,
(...)
186 self.to(GraphModule)._debug_only_user_py_stack,
187 ):
--> 188 result = self.__block_forward(*args, **kwargs)
190 outputs = ()
191 if not (type(result) is tuple or type(result) is list):
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:238, in ProxyModule.__block_forward(self, *args, **kwargs)
232 with self.to(GraphModule).scope_context():
233 # "Instance method __func__ is the function object", "when an instance method object is called,
234 # the underlying function __func__ is called, inserting the class instance __self__ in front of
235 # the argument list."
236 # Reference: https://docs.python.org/3/reference/datamodel.html
237 unbound_forward_of_module_instance = self.to(Module).forward.__func__
--> 238 result = unbound_forward_of_module_instance(self, *args, **kwargs)
239 self.to(GraphModule)._is_executing_forward = False
240 return result
File /usr/local/lib/python3.8/dist-packages/infer_compiler_registry/register_diffusers/transformer_2d_oflow.py:793, in Transformer2DModel.forward(self, hidden_states, encoder_hidden_states, timestep, added_cond_kwargs, class_labels, cross_attention_kwargs, attention_mask, encoder_attention_mask, return_dict)
781 hidden_states = torch.utils.checkpoint.checkpoint(
782 block,
783 hidden_states,
(...)
790 use_reentrant=False,
791 )
792 else:
--> 793 hidden_states = block(
794 hidden_states,
795 attention_mask=attention_mask,
796 encoder_hidden_states=encoder_hidden_states,
797 encoder_attention_mask=encoder_attention_mask,
798 timestep=timestep,
799 cross_attention_kwargs=cross_attention_kwargs,
800 class_labels=class_labels,
801 )
803 # 3. Output
804 if self.is_input_continuous:
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:188, in ProxyModule.__call__(self, *args, **kwargs)
178 # NOTE: The original nn.Module's __call__ method is ignored, which means
179 # that hooks of nn.Modules are ignored. It is not recommended
180 # to use hooks of nn.Module in nn.Graph for the moment.
181 with graph_build_util.DebugScopeContext(
182 self.to(GraphModule)._debug_min_s_level,
183 self.to(GraphModule)._debug_max_v_level,
(...)
186 self.to(GraphModule)._debug_only_user_py_stack,
187 ):
--> 188 result = self.__block_forward(*args, **kwargs)
190 outputs = ()
191 if not (type(result) is tuple or type(result) is list):
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:238, in ProxyModule.__block_forward(self, *args, **kwargs)
232 with self.to(GraphModule).scope_context():
233 # "Instance method __func__ is the function object", "when an instance method object is called,
234 # the underlying function __func__ is called, inserting the class instance __self__ in front of
235 # the argument list."
236 # Reference: https://docs.python.org/3/reference/datamodel.html
237 unbound_forward_of_module_instance = self.to(Module).forward.__func__
--> 238 result = unbound_forward_of_module_instance(self, *args, **kwargs)
239 self.to(GraphModule)._is_executing_forward = False
240 return result
File /usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py:258, in BasicTransformerBlock.forward(self, hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, timestep, cross_attention_kwargs, class_labels)
255 cross_attention_kwargs = cross_attention_kwargs.copy() if cross_attention_kwargs is not None else {}
256 gligen_kwargs = cross_attention_kwargs.pop("gligen", None)
--> 258 attn_output = self.attn1(
259 norm_hidden_states,
260 encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
261 attention_mask=attention_mask,
262 **cross_attention_kwargs,
263 )
264 if self.use_ada_layer_norm_zero:
265 attn_output = gate_msa.unsqueeze(1) * attn_output
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:188, in ProxyModule.__call__(self, *args, **kwargs)
178 # NOTE: The original nn.Module's __call__ method is ignored, which means
179 # that hooks of nn.Modules are ignored. It is not recommended
180 # to use hooks of nn.Module in nn.Graph for the moment.
181 with graph_build_util.DebugScopeContext(
182 self.to(GraphModule)._debug_min_s_level,
183 self.to(GraphModule)._debug_max_v_level,
(...)
186 self.to(GraphModule)._debug_only_user_py_stack,
187 ):
--> 188 result = self.__block_forward(*args, **kwargs)
190 outputs = ()
191 if not (type(result) is tuple or type(result) is list):
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:238, in ProxyModule.__block_forward(self, *args, **kwargs)
232 with self.to(GraphModule).scope_context():
233 # "Instance method __func__ is the function object", "when an instance method object is called,
234 # the underlying function __func__ is called, inserting the class instance __self__ in front of
235 # the argument list."
236 # Reference: https://docs.python.org/3/reference/datamodel.html
237 unbound_forward_of_module_instance = self.to(Module).forward.__func__
--> 238 result = unbound_forward_of_module_instance(self, *args, **kwargs)
239 self.to(GraphModule)._is_executing_forward = False
240 return result
File /usr/local/lib/python3.8/dist-packages/infer_compiler_registry/register_diffusers/attention_processor_oflow.py:364, in Attention.forward(self, hidden_states, encoder_hidden_states, attention_mask, **cross_attention_kwargs)
353 def forward(
354 self,
355 hidden_states,
(...)
361 # here we simply pass along all tensors to the selected processor class
362 # For standard processors that are defined here, `**cross_attention_kwargs` is empty
--> 364 return self.processor(
365 self,
366 hidden_states,
367 encoder_hidden_states=encoder_hidden_states,
368 attention_mask=attention_mask,
369 **cross_attention_kwargs,
370 )
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:188, in ProxyModule.__call__(self, *args, **kwargs)
178 # NOTE: The original nn.Module's __call__ method is ignored, which means
179 # that hooks of nn.Modules are ignored. It is not recommended
180 # to use hooks of nn.Module in nn.Graph for the moment.
181 with graph_build_util.DebugScopeContext(
182 self.to(GraphModule)._debug_min_s_level,
183 self.to(GraphModule)._debug_max_v_level,
(...)
186 self.to(GraphModule)._debug_only_user_py_stack,
187 ):
--> 188 result = self.__block_forward(*args, **kwargs)
190 outputs = ()
191 if not (type(result) is tuple or type(result) is list):
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/graph/proxy.py:238, in ProxyModule.__block_forward(self, *args, **kwargs)
232 with self.to(GraphModule).scope_context():
233 # "Instance method __func__ is the function object", "when an instance method object is called,
234 # the underlying function __func__ is called, inserting the class instance __self__ in front of
235 # the argument list."
236 # Reference: https://docs.python.org/3/reference/datamodel.html
237 unbound_forward_of_module_instance = self.to(Module).forward.__func__
--> 238 result = unbound_forward_of_module_instance(self, *args, **kwargs)
239 self.to(GraphModule)._is_executing_forward = False
240 return result
File /usr/local/lib/python3.8/dist-packages/oneflow/nn/modules/module.py:200, in Module.forward(self, *args, **kwargs)
199 def forward(self, *args, **kwargs):
--> 200 raise NotImplementedError()
NotImplementedError:
Environment:
OS: Ubuntu 20.04
GPU: NVIDIA GeForce RTX 4090
Python: 3.8
diffusers: 0.23.0
onediff: 0.12.1.dev202401310124
PyTorch: 2.0.1
CUDA: 12.2
OneFlow version info:
version: 0.9.1.dev20240125+cu122
git_commit: 6458a12
cmake_build_type: Release
rdma: True
mlir: True
enterprise: False
Where does IPAdapterPlusXL come from?
If you have a complete, reproducible example, we can take a look.
For now, all we can tell is that something is wrong with self.processor; it probably does not implement forward.
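A quick, hypothetical way to check this (assuming the IP-Adapter processors have been installed on the UNet via set_attn_processor) is to inspect pipe.unet.attn_processors and see whether each module-based processor overrides forward; the IP-Adapter repo's processors define __call__ only, so inside OneFlow's graph proxy the call falls back to the base nn.Module.forward and raises NotImplementedError:

# Hypothetical diagnostic sketch; `pipe` is the pipeline from the reproducer above.
import torch

for name, proc in pipe.unet.attn_processors.items():
    if isinstance(proc, torch.nn.Module):
        overrides_forward = type(proc).forward is not torch.nn.Module.forward
    else:
        overrides_forward = True  # plain-function processors are only ever called
    print(f"{name}: {type(proc).__name__}, overrides forward: {overrides_forward}")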
There is an example of IPAdapterPlusXL here: https://github.com/tencent-ailab/IP-Adapter/blob/main/ip_adapter-plus_sdxl_demo.ipynb. The error I am seeing is NotImplementedError. Is support for this planned?
We will look into IPAdapter support after the holiday.
I am also using IP-Adapter. If I first run once in plain mode and then run IP-Adapter, there is no error, but the results differ from not using onediff. The processor was probably set successfully, but the controlnet and unet are still using the static graph compiled in plain mode. If I run IP-Adapter directly instead, it fails in the IP-Adapter processor at hidden_states = torch.bmm(attention_probs, value) with: bmm(): argument 'input' (position 1) must be Tensor, not Tensor. I think IP-Adapter needs the static graph to be recompiled, since the network structure has changed. Is there a way to switch flexibly between the two, for example a parameter to choose between static-graph inference and the regular torch forward pass? @strint
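One way to get that kind of switch is to keep a reference to the original eager modules before compiling and swap them in and out by hand. A minimal sketch (not an official OneDiff switch; names are illustrative):

# Minimal sketch: keep both the eager torch modules and the compiled
# wrappers, and swap them on the pipeline to choose the execution path.
eager_unet = pipe.unet                    # original torch modules
eager_controlnet = pipe.controlnet

compiled_unet = oneflow_compile(pipe.unet)
compiled_controlnet = oneflow_compile(pipe.controlnet)

def use_graph(enabled: bool):
    # enabled=True -> static-graph inference, False -> regular torch forward
    pipe.unet = compiled_unet if enabled else eager_unet
    pipe.controlnet = compiled_controlnet if enabled else eager_controlnet

use_graph(False)  # e.g. run the IP-Adapter path eagerly
use_graph(True)   # switch back to the compiled graph

Note that if the graph was traced before IPAdapterPlusXL installed its attention processors, the compiled path will not reflect them, so compiling (or recompiling) after the processors are set would be needed for the graph results to match.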
InstantID primarily uses a combination of ControlNet and IP-Adapter to control facial features during the diffusion process. Since onediff supports InstantID, IP-Adapter should in theory be supported at the underlying level as well!
What is the current status of IPAdapter support? @strint @ccssu
ComfyUI_IPAdapter_plus is supported. Please refer to the documentation for usage, @chenly15: https://github.com/siliconflow/onediff/blob/main/onediff_comfy_nodes/modules/hijack_ipadapter_plus/README.md
@strint @ccssu are you going to support IP-adapter on diffusers?
I think it already supports IP-Adapter. I tried IP-Adapter FaceID, and the output is the same as with the original diffusers.
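For reference, recent diffusers releases ship a built-in IP-Adapter loader, which can be combined with oneflow_compile. A hedged sketch (assuming a diffusers version with load_ip_adapter; the repo and weight names are the public h94/IP-Adapter files and may need adjusting):

# Sketch: diffusers' built-in IP-Adapter support plus OneDiff compilation.
# Assumes a recent diffusers with load_ip_adapter / set_ip_adapter_scale.
import torch
from diffusers import StableDiffusionXLPipeline
from onediff.infer_compiler import oneflow_compile

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)
pipe.set_ip_adapter_scale(0.6)

# Compile after the IP-Adapter processors are installed so the graph includes them.
pipe.unet = oneflow_compile(pipe.unet)

# ref_image would be a user-supplied PIL reference image:
# image = pipe("a photo", ip_adapter_image=ref_image, num_inference_steps=30).images[0]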
@MikeHanKK what diffusers pipeline are you using?
Usually a pipeline can be accelerated with compile_pipe; you can give it a try.
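For example, a minimal sketch using the compile_pipe helper from onediffx (the model id is illustrative):

# Sketch: compile a whole diffusers pipeline with onediffx.compile_pipe
# instead of compiling unet/controlnet one by one.
import torch
from diffusers import StableDiffusionXLPipeline
from onediffx import compile_pipe

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe = compile_pipe(pipe)  # compiles the pipeline's main submodules

image = pipe("a photo of a cat", num_inference_steps=30).images[0]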