siliconflow / onediff

OneDiff: An out-of-the-box acceleration library for diffusion models.
https://github.com/siliconflow/onediff/wiki
Apache License 2.0

[Bug] The second API call fails after switching the model from SD to SDXL, and vice versa. #1085

Open Ralphhtt opened 3 months ago

Ralphhtt commented 3 months ago

Your current environment information

Collecting environment information...
PyTorch version: 2.1.2+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OneFlow version: path: ['/workspace/venv/lib/python3.10/site-packages/oneflow'], version: 0.9.1.dev20240802+cu118, git_commit: d23c061, cmake_build_type: Release, rdma: True, mlir: True, enterprise: False
Nexfort version: none
OneDiff version: 0.0.0
OneDiffX version: none

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-182-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.161.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7C13 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 1
Stepping: 1
BogoMIPS: 3999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 768 MiB (48 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] diffusers==0.30.0
[pip3] numpy==1.26.2
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.17.1
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-lightning==1.9.4
[pip3] torch==2.1.2+cu118
[pip3] torchaudio==2.1.2+cu118
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.3.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.16.2+cu118
[pip3] transformers==4.30.2
[pip3] triton==2.1.0

🐛 Describe the bug

I call the txt2img API with two configs whose only difference is the model (SD vs. SDXL). The config for the SD model is:

    {
        "params": {
            "override_settings": {
                "sd_model_checkpoint": "v1-5-pruned.ckpt",
                "CLIP_stop_at_last_layers": 2
            },
            "prompt": ["cat"],
            "negative_prompt": [],
            "steps": 25,
            "count": 1,
            "sampler_name": "DPM++ 2M Karras",
            "seed": "-1",
            "cfg_scale": 8,
            "restore_faces": false,
            "width": 768,
            "height": 1024,
            "script_name": "onediff_diffusion_model"
        }
    }

and for SDXL:

    {
        "params": {
            "override_settings": {
                "sd_model_checkpoint": "sd_xl_base_1.0.safetensors",
                "CLIP_stop_at_last_layers": 2
            },
            "prompt": ["cat"],
            "negative_prompt": [],
            "steps": 25,
            "count": 1,
            "sampler_name": "DPM++ 2M Karras",
            "seed": "-1",
            "cfg_scale": 8,
            "restore_faces": false,
            "width": 768,
            "height": 1024,
            "script_name": "onediff_diffusion_model"
        }
    }

After the SD call, I call the txt2img API with SDXL. The first SDXL call succeeds, but the second one fails (see the reproduction sketch below).
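For reference, this is roughly how I drive the API. It is only a minimal sketch: my real client wraps the request in the "params" object shown above and uses list-valued prompts; here the inner fields are posted directly to the standard /sdapi/v1/txt2img endpoint, and the default address 127.0.0.1:7860 is assumed.

    import requests

    # Assumed default WebUI address; adjust to your launch settings.
    URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

    def payload(checkpoint: str) -> dict:
        # Same parameters as the configs above; only the checkpoint differs.
        return {
            "override_settings": {
                "sd_model_checkpoint": checkpoint,
                "CLIP_stop_at_last_layers": 2,
            },
            "prompt": "cat",
            "negative_prompt": "",
            "steps": 25,
            "sampler_name": "DPM++ 2M Karras",
            "seed": -1,
            "cfg_scale": 8,
            "restore_faces": False,
            "width": 768,
            "height": 1024,
            "script_name": "onediff_diffusion_model",
        }

    calls = [
        "v1-5-pruned.ckpt",            # SD 1.5: succeeds
        "sd_xl_base_1.0.safetensors",  # first SDXL call: succeeds
        "sd_xl_base_1.0.safetensors",  # second SDXL call: fails with the error below
    ]
    for checkpoint in calls:
        resp = requests.post(URL, json=payload(checkpoint), timeout=600)
        print(checkpoint, resp.status_code)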

The error log is as follows:

WARNING [2024-08-14 08:17:24] /workspace/onediff/src/onediff/infer_compiler/backends/oneflow/args_tree_util.py:63 - Input structure key f2e910 to 118412 has changed. Resetting the deployable module graph. This may slow down the process.
ERROR building graph got error.
ERROR [2024-08-14 08:17:24] /workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py:44 - Exception in forward: e=AssertionError('must specify y if and only if the model is class-conditional')
WARNING [2024-08-14 08:17:24] /workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py:45 - Recompile oneflow module ...
0%| | 0/25 [00:00<?, ?it/s]
Couldn't find VAE named ; using None instead
*** API error: POST: http://127.0.0.1/sdapi/v1/txt2img {'error': 'NotImplementedError', 'detail': '', 'body': '', 'errors': "Transform failed of <class 'ldm.modules.diffusionmodules.openaimodel.UNetModel'>: Transform failed of <class 'torch.nn.modules.container.Sequential'>: Transform failed of <class 'torch.nn.modules.linear.Linear'>: Transform failed of <class 'torch.nn.parameter.Parameter'>: Cannot pack tensors on meta"}
Traceback (most recent call last):
  File "/workspace/venv/lib/python3.10/site-packages/anyio/streams/memory.py", line 98, in receive
    return self.receive_nowait()
  File "/workspace/venv/lib/python3.10/site-packages/anyio/streams/memory.py", line 93, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 78, in call_next
    message = await recv_stream.receive()
  File "/workspace/venv/lib/python3.10/site-packages/anyio/streams/memory.py", line 118, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/stable-diffusion-webui/modules/api/api.py", line 186, in exception_handling
    return await call_next(request)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 84, in call_next
    raise app_exc
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 108, in __call__
    response = await self.dispatch_func(request, call_next)
  File "/workspace/stable-diffusion-webui/modules/api/api.py", line 150, in log_and_time
    res: Response = await call_next(req)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 84, in call_next
    raise app_exc
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/workspace/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/workspace/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/workspace/venv/lib/python3.10/site-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "/workspace/venv/lib/python3.10/site-packages/fastapi/routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/workspace/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/workspace/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/workspace/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/workspace/stable-diffusion-webui/modules/api/api.py", line 479, in text2imgapi
    processed = scripts.scripts_txt2img.run(p, *p.script_args) # Need to pass args as list here
  File "/workspace/stable-diffusion-webui/modules/scripts.py", line 780, in run
    processed = script.run(p, *script_args)
  File "/workspace/stable-diffusion-webui/extensions/onediff_sd_webui_extensions/onediff_utils.py", line 147, in wrapper
    return func(
  File "/workspace/stable-diffusion-webui/extensions/onediff_sd_webui_extensions/onediff_controlnet/compile.py", line 19, in wrapper
    return func(self, p, *arg, **kwargs)
  File "/workspace/stable-diffusion-webui/extensions/onediff_sd_webui_extensions/scripts/onediff.py", line 166, in run
    proc = process_images(p)
  File "/workspace/stable-diffusion-webui/modules/processing.py", line 847, in process_images
    res = process_images_inner(p)
  File "/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 59, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/processing.py", line 988, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/workspace/stable-diffusion-webui/modules/processing.py", line 1346, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 230, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/workspace/stable-diffusion-webui/modules/sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "/workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 230, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/workspace/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 249, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/sd_models_xl.py", line 43, in apply_model
    return self.model(x, t, cond)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 34, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/sd_hijack_unet.py", line 50, in apply_model
    result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/wrappers.py", line 28, in forward
    return self.diffusion_model(
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py", line 48, in wrapper
    return func(self, *args, **kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/online_quantization_utils.py", line 65, in wrapper
    output = func(self, *args, **kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/graph_management_utils.py", line 123, in wrapper
    ret = func(self, *args, **kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/args_tree_util.py", line 70, in wrapper
    output = func(self, *mapped_args, **mapped_kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py", line 152, in forward
    dpl_graph = self.get_graph()
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py", line 118, in get_graph
    self._deployable_module_model.oneflow_module,
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/dual_module.py", line 30, in oneflow_module
    self._oneflow_module = torch2oflow(self._torch_module)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/transform/builtin_transform.py", line 43, in wrapper
    raise NotImplementedError(f"Transform failed of {type(first_param)}: {e}")
NotImplementedError: Transform failed of <class 'ldm.modules.diffusionmodules.openaimodel.UNetModel'>: Transform failed of <class 'torch.nn.modules.container.Sequential'>: Transform failed of <class 'torch.nn.modules.linear.Linear'>: Transform failed of <class 'torch.nn.parameter.Parameter'>: Cannot pack tensors on meta

################

A similar thing happens when switching from SDXL to SD. The error log is slightly different:

ERROR [2024-08-14 08:19:58] /workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py:44 - Exception in forward: e=NotImplementedError("Transform failed of <class 'ldm.modules.diffusionmodules.openaimodel.UNetModel'>: Transform failed of <class 'torch.nn.modules.container.Sequential'>: Transform failed of <class 'torch.nn.modules.linear.Linear'>: Transform failed of <class 'torch.nn.parameter.Parameter'>: Cannot pack tensors on meta")
WARNING [2024-08-14 08:19:58] /workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py:45 - Recompile oneflow module ...
0%| | 0/25 [00:00<?, ?it/s]
Couldn't find VAE named ; using None instead
*** API error: POST: http://127.0.0.1/sdapi/v1/txt2img {'error': 'NotImplementedError', 'detail': '', 'body': '', 'errors': "Transform failed of <class 'ldm.modules.diffusionmodules.openaimodel.UNetModel'>: Transform failed of <class 'torch.nn.modules.container.Sequential'>: Transform failed of <class 'torch.nn.modules.linear.Linear'>: Transform failed of <class 'torch.nn.parameter.Parameter'>: Cannot pack tensors on meta"}
Traceback (most recent call last):
  File "/workspace/venv/lib/python3.10/site-packages/anyio/streams/memory.py", line 98, in receive
    return self.receive_nowait()
  File "/workspace/venv/lib/python3.10/site-packages/anyio/streams/memory.py", line 93, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 78, in call_next
    message = await recv_stream.receive()
  File "/workspace/venv/lib/python3.10/site-packages/anyio/streams/memory.py", line 118, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/stable-diffusion-webui/modules/api/api.py", line 186, in exception_handling
    return await call_next(request)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 84, in call_next
    raise app_exc
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 108, in __call__
    response = await self.dispatch_func(request, call_next)
  File "/workspace/stable-diffusion-webui/modules/api/api.py", line 150, in log_and_time
    res: Response = await call_next(req)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 84, in call_next
    raise app_exc
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/workspace/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/workspace/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/workspace/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/workspace/venv/lib/python3.10/site-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "/workspace/venv/lib/python3.10/site-packages/fastapi/routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/workspace/venv/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/workspace/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/workspace/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/workspace/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/workspace/stable-diffusion-webui/modules/api/api.py", line 479, in text2imgapi
    processed = scripts.scripts_txt2img.run(p, *p.script_args) # Need to pass args as list here
  File "/workspace/stable-diffusion-webui/modules/scripts.py", line 780, in run
    processed = script.run(p, *script_args)
  File "/workspace/stable-diffusion-webui/extensions/onediff_sd_webui_extensions/onediff_utils.py", line 147, in wrapper
    return func(
  File "/workspace/stable-diffusion-webui/extensions/onediff_sd_webui_extensions/onediff_controlnet/compile.py", line 19, in wrapper
    return func(self, p, *arg, **kwargs)
  File "/workspace/stable-diffusion-webui/extensions/onediff_sd_webui_extensions/scripts/onediff.py", line 166, in run
    proc = process_images(p)
  File "/workspace/stable-diffusion-webui/modules/processing.py", line 847, in process_images
    res = process_images_inner(p)
  File "/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 59, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/processing.py", line 988, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/workspace/stable-diffusion-webui/modules/processing.py", line 1346, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 230, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/workspace/stable-diffusion-webui/modules/sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "/workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 230, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/workspace/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 249, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 34, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/sd_hijack_unet.py", line 50, in apply_model
    result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 36, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py", line 48, in wrapper
    return func(self, *args, **kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/online_quantization_utils.py", line 65, in wrapper
    output = func(self, *args, **kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/graph_management_utils.py", line 123, in wrapper
    ret = func(self, *args, **kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/args_tree_util.py", line 70, in wrapper
    output = func(self, *mapped_args, **mapped_kwargs)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py", line 152, in forward
    dpl_graph = self.get_graph()
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py", line 118, in get_graph
    self._deployable_module_model.oneflow_module,
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/dual_module.py", line 30, in oneflow_module
    self._oneflow_module = torch2oflow(self._torch_module)
  File "/workspace/onediff/src/onediff/infer_compiler/backends/oneflow/transform/builtin_transform.py", line 43, in wrapper
    raise NotImplementedError(f"Transform failed of {type(first_param)}: {e}")
NotImplementedError: Transform failed of <class 'ldm.modules.diffusionmodules.openaimodel.UNetModel'>: Transform failed of <class 'torch.nn.modules.container.Sequential'>: Transform failed of <class 'torch.nn.modules.linear.Linear'>: Transform failed of <class 'torch.nn.parameter.Parameter'>: Cannot pack tensors on meta

################

Everything works fine when using the WebUI from the browser.

onediff: 1.2.0
oneflow: 0.9.1.dev20240802+cu118
webui: 1.10
version: v1.10.1
python: 3.10.12
torch: 2.1.2+cu118
xformers: 0.0.23.post1+cu118

frc99 commented 5 days ago

same problem.