2024-10-31 20:24:43,853 - root - INFO - Prompt executed in 15.69 seconds
2024-10-31 20:24:54,608 - root - INFO - got prompt
2024-10-31 20:24:55,519 - root - ERROR - !!! Exception during processing !!!
2024-10-31 20:24:55,520 - root - ERROR - Traceback (most recent call last):
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate
joy_two_pipeline.parent.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels
self.pipeline.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels
self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init
assert (BASE_MODEL_PATH / "clip_model.pt").exists()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-10-31 20:24:55,524 - root - INFO - Prompt executed in 0.90 seconds
2024-10-31 20:26:05,687 - root - INFO - got prompt
2024-10-31 20:26:06,563 - root - ERROR - !!! Exception during processing !!!
2024-10-31 20:26:06,564 - root - ERROR - Traceback (most recent call last):
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate
joy_two_pipeline.parent.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels
self.pipeline.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels
self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init
assert (BASE_MODEL_PATH / "clip_model.pt").exists()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-10-31 20:26:06,567 - root - INFO - Prompt executed in 0.87 seconds
2024-10-31 20:26:10,124 - root - INFO - got prompt
2024-10-31 20:26:11,018 - root - ERROR - !!! Exception during processing !!!
2024-10-31 20:26:11,019 - root - ERROR - Traceback (most recent call last):
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate
joy_two_pipeline.parent.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels
self.pipeline.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels
self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init
assert (BASE_MODEL_PATH / "clip_model.pt").exists()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-10-31 20:26:11,023 - root - INFO - Prompt executed in 0.89 seconds
2024-10-31 20:26:25,154 - root - INFO - got prompt
2024-10-31 20:26:25,997 - root - ERROR - !!! Exception during processing !!!
2024-10-31 20:26:25,998 - root - ERROR - Traceback (most recent call last):
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate
joy_two_pipeline.parent.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels
self.pipeline.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels
self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init
assert (BASE_MODEL_PATH / "clip_model.pt").exists()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-10-31 20:26:26,001 - root - INFO - Prompt executed in 0.84 seconds
2024-10-31 20:28:43,397 - root - INFO - got prompt
2024-10-31 20:28:46,967 - root - INFO - Requested to load FluxClipModel_
2024-10-31 20:28:46,968 - root - INFO - Loading 1 new model
2024-10-31 20:28:46,979 - root - INFO - loaded completely 0.0 9319.23095703125 True
2024-10-31 20:28:47,274 - root - WARNING - clip missing: ['text_projection.weight']
2024-10-31 20:28:52,189 - root - ERROR - !!! Exception during processing !!!
2024-10-31 20:28:52,190 - root - ERROR - Traceback (most recent call last):
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate
joy_two_pipeline.parent.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels
self.pipeline.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels
self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init
assert (BASE_MODEL_PATH / "clip_model.pt").exists()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-10-31 20:28:52,197 - root - INFO - Prompt executed in 8.79 seconds
2024-10-31 20:35:32,764 - root - INFO - got prompt
2024-10-31 20:35:33,638 - root - ERROR - !!! Exception during processing !!!
2024-10-31 20:35:33,639 - root - ERROR - Traceback (most recent call last):
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate
joy_two_pipeline.parent.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels
self.pipeline.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels
self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init
assert (BASE_MODEL_PATH / "clip_model.pt").exists()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-10-31 20:35:33,641 - root - INFO - Prompt executed in 0.86 seconds
## Attached Workflow
Please make sure that the workflow does not contain any sensitive information such as API keys or passwords.
{"last_node_id":274,"last_link_id":445,"nodes":[{"id":75,"type":"SplitSigmasDenoise","pos":{"0":1988.4447021484375,"1":796.4603271484375},"size":{"0":230,"1":80},"flags":{},"order":18,"mode":0,"inputs":[{"name":"sigmas","type":"SIGMAS","link":145,"label":"Sigmas"}],"outputs":[{"name":"high_sigmas","type":"SIGMAS","links":[183],"slot_index":0,"shape":3,"label":"高Sigmas"},{"name":"low_sigmas","type":"SIGMAS","links":[189],"slot_index":1,"shape":3,"label":"低Sigmas"}],"properties":{"Node name for S&R":"SplitSigmasDenoise"},"widgets_values":[0.6]},{"id":90,"type":"DisableNoise","pos":{"0":2338.44482421875,"1":546.4602661132812},"size":{"0":210,"1":30},"flags":{"collapsed":true},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"NOISE","type":"NOISE","links":[190],"shape":3,"label":"噪波生成"}],"properties":{"Node name for S&R":"DisableNoise"},"widgets_values":[]},{"id":91,"type":"VAEDecode","pos":{"0":2323.42529296875,"1":662.09912109375},"size":{"0":210,"1":50},"flags":{},"order":25,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":232,"label":"Latent"},{"name":"vae","type":"VAE","link":281,"label":"VAE"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[201],"slot_index":0,"shape":3,"label":"图像"}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":45,"type":"PulidFluxModelLoader","pos":{"0":745.654052734375,"1":875.830810546875},"size":{"0":320,"1":60},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"PULIDFLUX","type":"PULIDFLUX","links":[125],"slot_index":0,"shape":3,"label":"PULIDFLUX"}],"properties":{"Node name for S&R":"PulidFluxModelLoader"},"widgets_values":["pulid_flux_v0.9.0.safetensors"]},{"id":16,"type":"KSamplerSelect","pos":{"0":1527.654052734375,"1":545.830810546875},"size":{"0":210,"1":58},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"SAMPLER","type":"SAMPLER","links":[85,186],"slot_index":0,"shape":3,"label":"采样器"}],"properties":{"Node name for S&R":"KSamplerSelect"},"widgets_values":["euler"]},{"id":25,"type":"RandomNoise","pos":{"0":1535.654052734375,"1":488.83056640625},"size":{"0":320,"1":82},"flags":{"collapsed":true},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"NOISE","type":"NOISE","links":[84],"slot_index":0,"shape":3,"label":"噪波生成"}],"properties":{"Node name for S&R":"RandomNoise"},"widgets_values":[410462816538941,"randomize"],"color":"#2a363b","bgcolor":"#3f5159"},{"id":26,"type":"FluxGuidance","pos":{"0":1536.654052734375,"1":935.830810546875},"size":{"0":320,"1":60},"flags":{"collapsed":true},"order":20,"mode":0,"inputs":[{"name":"conditioning","type":"CONDITIONING","link":41,"label":"条件"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[107],"slot_index":0,"shape":3,"label":"条件"}],"properties":{"Node name for S&R":"FluxGuidance"},"widgets_values":[3.5],"color":"#233","bgcolor":"#355"},{"id":47,"type":"BasicGuider","pos":{"0":1525.654052734375,"1":830.830810546875},"size":{"0":240,"1":50},"flags":{},"order":21,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":168,"label":"模型"},{"name":"conditioning","type":"CONDITIONING","link":107,"label":"条件"}],"outputs":[{"name":"GUIDER","type":"GUIDER","links":[83,185],"slot_index":0,"shape":3,"label":"引导"}],"properties":{"Node name for 
S&R":"BasicGuider"},"widgets_values":[]},{"id":89,"type":"SamplerCustomAdvanced","pos":{"0":2340.65380859375,"1":474.83056640625},"size":{"0":520,"1":510},"flags":{"collapsed":true},"order":24,"mode":0,"inputs":[{"name":"noise","type":"NOISE","link":190,"label":"噪波生成"},{"name":"guider","type":"GUIDER","link":185,"label":"引导"},{"name":"sampler","type":"SAMPLER","link":186,"label":"采样器"},{"name":"sigmas","type":"SIGMAS","link":189,"label":"Sigmas"},{"name":"latent_image","type":"LATENT","link":194,"label":"Latent"}],"outputs":[{"name":"output","type":"LATENT","links":[232],"slot_index":0,"shape":3,"label":"输出"},{"name":"denoised_output","type":"LATENT","links":[],"slot_index":1,"shape":3,"label":"降噪输出"}],"properties":{"Node name for S&R":"SamplerCustomAdvanced"},"widgets_values":[]},{"id":71,"type":"LayerUtility: CropByMask V2","pos":{"0":202,"1":403},"size":{"0":392.53765869140625,"1":481.29205322265625},"flags":{},"order":16,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":139,"label":"图像"},{"name":"mask","type":"MASK","link":235},{"name":"crop_box","type":"BOX","link":null,"label":"裁剪框","shape":7}],"outputs":[{"name":"croped_image","type":"IMAGE","links":[198],"slot_index":0,"shape":3,"label":"裁剪图像"},{"name":"croped_mask","type":"MASK","links":null,"shape":3,"label":"裁剪遮罩"},{"name":"crop_box","type":"BOX","links":null,"shape":3,"label":"裁剪框"},{"name":"box_preview","type":"IMAGE","links":null,"shape":3,"label":"裁剪框预览"}],"properties":{"Node name for S&R":"LayerUtility: CropByMask V2"},"widgets_values":[false,"mask_area",20,20,20,20,"8"],"color":"rgba(38, 73, 116, 0.7)"},{"id":48,"type":"SamplerCustomAdvanced","pos":{"0":1777.654052734375,"1":467.83056640625},"size":{"0":169.96287536621094,"1":147.89463806152344},"flags":{"collapsed":false},"order":22,"mode":0,"inputs":[{"name":"noise","type":"NOISE","link":84,"label":"噪波生成"},{"name":"guider","type":"GUIDER","link":83,"label":"引导"},{"name":"sampler","type":"SAMPLER","link":85,"label":"采样器"},{"name":"sigmas","type":"SIGMAS","link":183,"label":"Sigmas"},{"name":"latent_image","type":"LATENT","link":387,"label":"Latent"}],"outputs":[{"name":"output","type":"LATENT","links":[195],"slot_index":0,"shape":3,"label":"输出"},{"name":"denoised_output","type":"LATENT","links":[],"slot_index":1,"shape":3,"label":"降噪输出"}],"properties":{"Node name for S&R":"SamplerCustomAdvanced"},"widgets_values":[]},{"id":54,"type":"LoadImage","pos":{"0":318,"1":-581},"size":{"0":432.25189208984375,"1":397.7892761230469},"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[139,234],"slot_index":0,"shape":3,"label":"图像"},{"name":"MASK","type":"MASK","links":null,"shape":3,"label":"遮罩"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["ComfyUI_temp_tesov00001.png","image"]},{"id":119,"type":"LayerMask: PersonMaskUltra V2","pos":{"0":-158,"1":407},"size":{"0":344.6835021972656,"1":470.00762939453125},"flags":{},"order":13,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":234,"label":"图像"}],"outputs":[{"name":"image","type":"IMAGE","links":null,"shape":3,"label":"图像"},{"name":"mask","type":"MASK","links":[235],"slot_index":1,"shape":3,"label":"遮罩"}],"properties":{"Node name for S&R":"LayerMask: PersonMaskUltra V2"},"widgets_values":[true,true,false,false,false,false,0.4,"VITMatte",6,6,0.01,0.99,true,"cuda",2],"color":"rgba(27, 80, 119, 
0.7)"},{"id":53,"type":"PulidFluxInsightFaceLoader","pos":{"0":1111.65380859375,"1":554.8307495117188},"size":{"0":370,"1":60},"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"FACEANALYSIS","type":"FACEANALYSIS","links":[124],"slot_index":0,"shape":3,"label":"FACEANALYSIS"}],"properties":{"Node name for S&R":"PulidFluxInsightFaceLoader"},"widgets_values":["CUDA"]},{"id":17,"type":"BasicScheduler","pos":{"0":1531.654052734375,"1":652.830810546875},"size":{"0":210,"1":114.03009796142578},"flags":{"collapsed":false},"order":15,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":392,"slot_index":0,"label":"模型"}],"outputs":[{"name":"SIGMAS","type":"SIGMAS","links":[145],"slot_index":0,"shape":3,"label":"Sigmas"}],"properties":{"Node name for S&R":"BasicScheduler"},"widgets_values":["ddim_uniform",30,1]},{"id":95,"type":"SaveImage","pos":{"0":801,"1":-629},"size":{"0":1632.8887939453125,"1":906.1729736328125},"flags":{},"order":26,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":201,"label":"图像"}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI"]},{"id":239,"type":"EmptyLatentImage","pos":{"0":-167,"1":-86},"size":{"0":901.4378662109375,"1":109.62405395507812},"flags":{"collapsed":false},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[387],"shape":3,"label":"Latent"}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[896,1152,1]},{"id":93,"type":"InjectLatentNoise+","pos":{"0":1978.4447021484375,"1":586.4603881835938},"size":{"0":320,"1":150},"flags":{},"order":23,"mode":0,"inputs":[{"name":"latent","type":"LATENT","link":195,"label":"Latent"},{"name":"mask","type":"MASK","link":null,"label":"遮罩","shape":7}],"outputs":[{"name":"LATENT","type":"LATENT","links":[194],"slot_index":0,"shape":3,"label":"Latent"}],"properties":{"Node name for S&R":"InjectLatentNoise+"},"widgets_values":[181176520119247,"randomize",0.3,"true"]},{"id":10,"type":"VAELoader","pos":{"0":1989,"1":459},"size":{"0":310,"1":60},"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[281],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["ae.safetensors"]},{"id":67,"type":"LoadImage","pos":{"0":-156,"1":-585},"size":{"0":454.5463562011719,"1":398.754150390625},"flags":{},"order":8,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[443],"slot_index":0,"shape":3,"label":"图像"},{"name":"MASK","type":"MASK","links":null,"shape":3,"label":"遮罩"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["ComfyUI_temp_jhxrg00003.png","image"]},{"id":267,"type":"Joy_caption_two_load","pos":{"0":-125,"1":166},"size":{"0":315,"1":58},"flags":{"collapsed":false},"order":9,"mode":0,"inputs":[],"outputs":[{"name":"JoyTwoPipeline","type":"JoyTwoPipeline","links":[441],"slot_index":0,"label":"JoyTwoPipeline"}],"properties":{"Node name for S&R":"Joy_caption_two_load"},"widgets_values":["unsloth/Meta-Llama-3.1-8B-Instruct"]},{"id":51,"type":"PulidFluxEvaClipLoader","pos":{"0":1122,"1":464},"size":{"0":330,"1":30},"flags":{},"order":10,"mode":0,"inputs":[],"outputs":[{"name":"EVA_CLIP","type":"EVA_CLIP","links":[444],"slot_index":0,"shape":3,"label":"EVA_CLIP"}],"properties":{"Node name for 
S&R":"PulidFluxEvaClipLoader"},"widgets_values":[]},{"id":62,"type":"ApplyPulidFlux","pos":{"0":1139,"1":694},"size":{"0":320,"1":210},"flags":{},"order":19,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":391,"label":"model"},{"name":"pulid_flux","type":"PULIDFLUX","link":125,"label":"pulid_flux"},{"name":"eva_clip","type":"EVA_CLIP","link":444,"label":"eva_clip"},{"name":"face_analysis","type":"FACEANALYSIS","link":124,"label":"face_analysis"},{"name":"image","type":"IMAGE","link":198,"label":"image"},{"name":"attn_mask","type":"MASK","link":null,"label":"attn_mask","shape":7}],"outputs":[{"name":"MODEL","type":"MODEL","links":[168],"slot_index":0,"shape":3,"label":"MODEL"}],"properties":{"Node name for S&R":"ApplyPulidFlux"},"widgets_values":[1,0,0.98]},{"id":163,"type":"DualCLIPLoader","pos":{"0":776,"1":705},"size":{"0":315,"1":106},"flags":{},"order":11,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[439],"slot_index":0,"shape":3,"label":"CLIP"}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["clip_l.safetensors","t5xxl_fp16.safetensors","flux"]},{"id":94,"type":"UNETLoader","pos":{"0":779.654052734375,"1":567.830810546875},"size":{"0":320,"1":82},"flags":{},"order":12,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[391,392],"slot_index":0,"shape":3,"label":"模型"}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["flux1-dev.safetensors","default"]},{"id":6,"type":"CLIPTextEncode","pos":{"0":2312,"1":825},"size":{"0":210,"1":60},"flags":{"collapsed":false},"order":17,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":439,"label":"CLIP"},{"name":"text","type":"STRING","link":445,"slot_index":1,"widget":{"name":"text"},"label":"文本"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[41],"slot_index":0,"label":"条件"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["Half body portrait of 60 years old guy, with an surprised expression, he is lost in vectors of AI models, sourounded by PC monitors and many cables, on his tshirt is a text with words printed in Arial font:\"PuLID Flux\", detailed, glowy background, photorealistic style with skin inperfections, looks like shot with an smartphone, skin details without plastic look, ASUS Keyboard."],"color":"#232","bgcolor":"#353"},{"id":274,"type":"Joy_caption_two","pos":{"0":390,"1":150},"size":{"0":315,"1":126},"flags":{},"order":14,"mode":0,"inputs":[{"name":"joy_two_pipeline","type":"JoyTwoPipeline","link":441,"label":"joy_two_pipeline"},{"name":"image","type":"IMAGE","link":443,"label":"image"}],"outputs":[{"name":"STRING","type":"STRING","links":[445],"slot_index":0,"label":"STRING"}],"properties":{"Node name for S&R":"Joy_caption_two"},"widgets_values":["Descriptive 
(Informal)","long",false]}],"links":[[41,6,0,26,0,"CONDITIONING"],[83,47,0,48,1,"GUIDER"],[84,25,0,48,0,"NOISE"],[85,16,0,48,2,"SAMPLER"],[107,26,0,47,1,"CONDITIONING"],[124,53,0,62,3,"FACEANALYSIS"],[125,45,0,62,1,"PULIDFLUX"],[139,54,0,71,0,"IMAGE"],[145,17,0,75,0,"SIGMAS"],[168,62,0,47,0,"MODEL"],[183,75,0,48,3,"SIGMAS"],[185,47,0,89,1,"GUIDER"],[186,16,0,89,2,"SAMPLER"],[189,75,1,89,3,"SIGMAS"],[190,90,0,89,0,"NOISE"],[194,93,0,89,4,"LATENT"],[195,48,0,93,0,"LATENT"],[198,71,0,62,4,"IMAGE"],[201,91,0,95,0,"IMAGE"],[232,89,0,91,0,"LATENT"],[234,54,0,119,0,"IMAGE"],[235,119,1,71,1,"MASK"],[281,10,0,91,1,"VAE"],[387,239,0,48,4,"LATENT"],[391,94,0,62,0,"MODEL"],[392,94,0,17,0,"MODEL"],[439,163,0,6,0,"CLIP"],[441,267,0,274,0,"JoyTwoPipeline"],[443,67,0,274,1,"IMAGE"],[444,51,0,62,2,"EVA_CLIP"],[445,274,0,6,1,"STRING"]],"groups":[{"title":"如果第一次在线生图出现提示:显存不足,重新在线生图一下即可成功","bounding":[-173,-795,4771,98],"color":"#88A","font_size":60,"flags":{}},{"title":"JOY提示词反推—自己部署的话看我主页部署教程","bounding":[-178,60,940,261],"color":"#3f789e","font_size":22,"flags":{}},{"title":"图像处理","bounding":[-174,331,780,557],"color":"#3f789e","font_size":24,"flags":{}},{"title":"工作区域","bounding":[724,393,1836,557],"color":"#3f789e","font_size":24,"flags":{}},{"title":"上传— 图片参考","bounding":[-172,-689,470,517],"color":"#3f789e","font_size":40,"flags":{}},{"title":"宽高比设置","bounding":[-174,-165,938,219],"color":"#3f789e","font_size":24,"flags":{}},{"title":"海报生成","bounding":[768,-688,1679,1011],"color":"#3f789e","font_size":40,"flags":{}},{"title":"上传— 脸部照片","bounding":[305,-689,459,517],"color":"#3f789e","font_size":40,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.6115909044841543,"offset":[618.3272332607767,579.544696718227]},"workspaceinfo":{"id":"ulqKMamHmtTjVdROR-Pe"},"0246.VERSION":[0,0,4]},"version":0.4}
## Additional Context
(Please add any additional context or steps to reproduce the error here)
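Every failed run in the log dies on the same check: line 51 of `joy_caption_two_node.py` asserts that `clip_model.pt` exists under the node's `BASE_MODEL_PATH`, which means the JoyCaption-two model files were never downloaded or ended up in the wrong folder. Below is a minimal diagnostic sketch, assuming the node keeps its files under `ComfyUI/models/Joy_caption_two`; that location is an assumption, not taken from the log, so check how `BASE_MODEL_PATH` is defined in your copy of `joy_caption_two_node.py` before relying on it.

```python
# Re-run the existence check that the node's assert performs, outside ComfyUI.
# BASE_MODEL_PATH is an assumed location; adjust it to match whatever value
# joy_caption_two_node.py actually uses on your install.
from pathlib import Path

BASE_MODEL_PATH = Path(r"D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\models\Joy_caption_two")

target = BASE_MODEL_PATH / "clip_model.pt"  # the file checked on line 51
print(f"{target} -> {'found' if target.exists() else 'MISSING'}")
```

If the script prints MISSING, obtaining `clip_model.pt` per the custom node's README and placing it in that directory should let the assertion pass; the identical tracebacks across every run suggest nothing else in the graph is failing.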
# ComfyUI Error Report
## Error Details
Exception Message: (empty; the failing node raised a bare `AssertionError`, reproduced in the stack trace below)
## Stack Trace
2024-10-31 20:24:14,347 - root - INFO - Total VRAM 24564 MB, total RAM 65349 MB
2024-10-31 20:24:14,347 - root - INFO - pytorch version: 2.5.0+cu124
2024-10-31 20:24:14,348 - root - INFO - Set vram state to: NORMAL_VRAM
2024-10-31 20:24:14,348 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2024-10-31 20:24:15,267 - root - INFO - Using pytorch cross attention
2024-10-31 20:24:16,315 - root - INFO - [Prompt Server] web root: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\web
2024-10-31 20:24:16,820 - root - INFO - Total VRAM 24564 MB, total RAM 65349 MB
2024-10-31 20:24:16,820 - root - INFO - pytorch version: 2.5.0+cu124
2024-10-31 20:24:16,821 - root - INFO - Set vram state to: NORMAL_VRAM
2024-10-31 20:24:16,821 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2024-10-31 20:24:20,238 - root - INFO - Import times for custom nodes:
2024-10-31 20:24:20,238 - root - INFO - 0.0 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-10-31 20:24:20,238 - root - INFO - 0.0 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation-main
2024-10-31 20:24:20,238 - root - INFO - 0.0 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main
2024-10-31 20:24:20,239 - root - INFO - 0.0 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-WD14-Tagger-main
2024-10-31 20:24:20,239 - root - INFO - 0.0 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_HF_Servelress_Inference
2024-10-31 20:24:20,239 - root - INFO - 0.0 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2024-10-31 20:24:20,239 - root - INFO - 0.0 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
2024-10-31 20:24:20,239 - root - INFO - 0.1 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-10-31 20:24:20,240 - root - INFO - 0.2 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager-main
2024-10-31 20:24:20,240 - root - INFO - 0.2 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper
2024-10-31 20:24:20,240 - root - INFO - 0.5 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle
2024-10-31 20:24:20,240 - root - INFO - 0.5 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR
2024-10-31 20:24:20,240 - root - INFO - 1.9 seconds: D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-master
2024-10-31 20:24:20,241 - root - INFO -
2024-10-31 20:24:20,246 - root - INFO - Starting server
2024-10-31 20:24:20,247 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-10-31 20:24:28,151 - root - INFO - got prompt
2024-10-31 20:24:28,203 - root - INFO - Using pytorch attention in VAE
2024-10-31 20:24:28,204 - root - INFO - Using pytorch attention in VAE
2024-10-31 20:24:28,638 - root - INFO - model weight dtype torch.bfloat16, manual cast: None
2024-10-31 20:24:28,639 - root - INFO - model_type FLUX
2024-10-31 20:24:35,265 - root - INFO - Requested to load FluxClipModel_
2024-10-31 20:24:35,266 - root - INFO - Loading 1 new model
2024-10-31 20:24:35,277 - root - INFO - loaded completely 0.0 9319.23095703125 True
2024-10-31 20:24:35,336 - root - WARNING - clip missing: ['encoder.block.0.layer.0.SelfAttention.q.weight', 'encoder.block.0.layer.0.SelfAttention.k.weight', 'encoder.block.0.layer.0.SelfAttention.v.weight', 'encoder.block.0.layer.0.SelfAttention.o.weight', 'encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', 'encoder.block.0.layer.0.layer_norm.weight', 'encoder.block.0.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.0.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.0.layer.1.DenseReluDense.wo.weight', 'encoder.block.0.layer.1.layer_norm.weight', 'encoder.block.1.layer.0.SelfAttention.q.weight', 'encoder.block.1.layer.0.SelfAttention.k.weight', 'encoder.block.1.layer.0.SelfAttention.v.weight', 'encoder.block.1.layer.0.SelfAttention.o.weight', 'encoder.block.1.layer.0.layer_norm.weight', 'encoder.block.1.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.1.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.1.layer.1.DenseReluDense.wo.weight', 'encoder.block.1.layer.1.layer_norm.weight', 'encoder.block.2.layer.0.SelfAttention.q.weight', 'encoder.block.2.layer.0.SelfAttention.k.weight', 'encoder.block.2.layer.0.SelfAttention.v.weight', 'encoder.block.2.layer.0.SelfAttention.o.weight', 'encoder.block.2.layer.0.layer_norm.weight', 'encoder.block.2.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.2.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.2.layer.1.DenseReluDense.wo.weight', 'encoder.block.2.layer.1.layer_norm.weight', 'encoder.block.3.layer.0.SelfAttention.q.weight', 'encoder.block.3.layer.0.SelfAttention.k.weight', 'encoder.block.3.layer.0.SelfAttention.v.weight', 'encoder.block.3.layer.0.SelfAttention.o.weight', 'encoder.block.3.layer.0.layer_norm.weight', 'encoder.block.3.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.3.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.3.layer.1.DenseReluDense.wo.weight', 'encoder.block.3.layer.1.layer_norm.weight', 'encoder.block.4.layer.0.SelfAttention.q.weight', 'encoder.block.4.layer.0.SelfAttention.k.weight', 'encoder.block.4.layer.0.SelfAttention.v.weight', 'encoder.block.4.layer.0.SelfAttention.o.weight', 'encoder.block.4.layer.0.layer_norm.weight', 'encoder.block.4.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.4.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.4.layer.1.DenseReluDense.wo.weight', 'encoder.block.4.layer.1.layer_norm.weight', 'encoder.block.5.layer.0.SelfAttention.q.weight', 'encoder.block.5.layer.0.SelfAttention.k.weight', 'encoder.block.5.layer.0.SelfAttention.v.weight', 'encoder.block.5.layer.0.SelfAttention.o.weight', 'encoder.block.5.layer.0.layer_norm.weight', 'encoder.block.5.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.5.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.5.layer.1.DenseReluDense.wo.weight', 'encoder.block.5.layer.1.layer_norm.weight', 'encoder.block.6.layer.0.SelfAttention.q.weight', 
'encoder.block.6.layer.0.SelfAttention.k.weight', 'encoder.block.6.layer.0.SelfAttention.v.weight', 'encoder.block.6.layer.0.SelfAttention.o.weight', 'encoder.block.6.layer.0.layer_norm.weight', 'encoder.block.6.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.6.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.6.layer.1.DenseReluDense.wo.weight', 'encoder.block.6.layer.1.layer_norm.weight', 'encoder.block.7.layer.0.SelfAttention.q.weight', 'encoder.block.7.layer.0.SelfAttention.k.weight', 'encoder.block.7.layer.0.SelfAttention.v.weight', 'encoder.block.7.layer.0.SelfAttention.o.weight', 'encoder.block.7.layer.0.layer_norm.weight', 'encoder.block.7.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.7.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.7.layer.1.DenseReluDense.wo.weight', 'encoder.block.7.layer.1.layer_norm.weight', 'encoder.block.8.layer.0.SelfAttention.q.weight', 'encoder.block.8.layer.0.SelfAttention.k.weight', 'encoder.block.8.layer.0.SelfAttention.v.weight', 'encoder.block.8.layer.0.SelfAttention.o.weight', 'encoder.block.8.layer.0.layer_norm.weight', 'encoder.block.8.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.8.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.8.layer.1.DenseReluDense.wo.weight', 'encoder.block.8.layer.1.layer_norm.weight', 'encoder.block.9.layer.0.SelfAttention.q.weight', 'encoder.block.9.layer.0.SelfAttention.k.weight', 'encoder.block.9.layer.0.SelfAttention.v.weight', 'encoder.block.9.layer.0.SelfAttention.o.weight', 'encoder.block.9.layer.0.layer_norm.weight', 'encoder.block.9.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.9.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.9.layer.1.DenseReluDense.wo.weight', 'encoder.block.9.layer.1.layer_norm.weight', 'encoder.block.10.layer.0.SelfAttention.q.weight', 'encoder.block.10.layer.0.SelfAttention.k.weight', 'encoder.block.10.layer.0.SelfAttention.v.weight', 'encoder.block.10.layer.0.SelfAttention.o.weight', 'encoder.block.10.layer.0.layer_norm.weight', 'encoder.block.10.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.10.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.10.layer.1.DenseReluDense.wo.weight', 'encoder.block.10.layer.1.layer_norm.weight', 'encoder.block.11.layer.0.SelfAttention.q.weight', 'encoder.block.11.layer.0.SelfAttention.k.weight', 'encoder.block.11.layer.0.SelfAttention.v.weight', 'encoder.block.11.layer.0.SelfAttention.o.weight', 'encoder.block.11.layer.0.layer_norm.weight', 'encoder.block.11.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.11.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.11.layer.1.DenseReluDense.wo.weight', 'encoder.block.11.layer.1.layer_norm.weight', 'encoder.block.12.layer.0.SelfAttention.q.weight', 'encoder.block.12.layer.0.SelfAttention.k.weight', 'encoder.block.12.layer.0.SelfAttention.v.weight', 'encoder.block.12.layer.0.SelfAttention.o.weight', 'encoder.block.12.layer.0.layer_norm.weight', 'encoder.block.12.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.12.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.12.layer.1.DenseReluDense.wo.weight', 'encoder.block.12.layer.1.layer_norm.weight', 'encoder.block.13.layer.0.SelfAttention.q.weight', 'encoder.block.13.layer.0.SelfAttention.k.weight', 'encoder.block.13.layer.0.SelfAttention.v.weight', 'encoder.block.13.layer.0.SelfAttention.o.weight', 'encoder.block.13.layer.0.layer_norm.weight', 'encoder.block.13.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.13.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.13.layer.1.DenseReluDense.wo.weight', 
'encoder.block.13.layer.1.layer_norm.weight', 'encoder.block.14.layer.0.SelfAttention.q.weight', 'encoder.block.14.layer.0.SelfAttention.k.weight', 'encoder.block.14.layer.0.SelfAttention.v.weight', 'encoder.block.14.layer.0.SelfAttention.o.weight', 'encoder.block.14.layer.0.layer_norm.weight', 'encoder.block.14.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.14.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.14.layer.1.DenseReluDense.wo.weight', 'encoder.block.14.layer.1.layer_norm.weight', 'encoder.block.15.layer.0.SelfAttention.q.weight', 'encoder.block.15.layer.0.SelfAttention.k.weight', 'encoder.block.15.layer.0.SelfAttention.v.weight', 'encoder.block.15.layer.0.SelfAttention.o.weight', 'encoder.block.15.layer.0.layer_norm.weight', 'encoder.block.15.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.15.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.15.layer.1.DenseReluDense.wo.weight', 'encoder.block.15.layer.1.layer_norm.weight', 'encoder.block.16.layer.0.SelfAttention.q.weight', 'encoder.block.16.layer.0.SelfAttention.k.weight', 'encoder.block.16.layer.0.SelfAttention.v.weight', 'encoder.block.16.layer.0.SelfAttention.o.weight', 'encoder.block.16.layer.0.layer_norm.weight', 'encoder.block.16.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.16.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.16.layer.1.DenseReluDense.wo.weight', 'encoder.block.16.layer.1.layer_norm.weight', 'encoder.block.17.layer.0.SelfAttention.q.weight', 'encoder.block.17.layer.0.SelfAttention.k.weight', 'encoder.block.17.layer.0.SelfAttention.v.weight', 'encoder.block.17.layer.0.SelfAttention.o.weight', 'encoder.block.17.layer.0.layer_norm.weight', 'encoder.block.17.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.17.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.17.layer.1.DenseReluDense.wo.weight', 'encoder.block.17.layer.1.layer_norm.weight', 'encoder.block.18.layer.0.SelfAttention.q.weight', 'encoder.block.18.layer.0.SelfAttention.k.weight', 'encoder.block.18.layer.0.SelfAttention.v.weight', 'encoder.block.18.layer.0.SelfAttention.o.weight', 'encoder.block.18.layer.0.layer_norm.weight', 'encoder.block.18.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.18.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.18.layer.1.DenseReluDense.wo.weight', 'encoder.block.18.layer.1.layer_norm.weight', 'encoder.block.19.layer.0.SelfAttention.q.weight', 'encoder.block.19.layer.0.SelfAttention.k.weight', 'encoder.block.19.layer.0.SelfAttention.v.weight', 'encoder.block.19.layer.0.SelfAttention.o.weight', 'encoder.block.19.layer.0.layer_norm.weight', 'encoder.block.19.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.19.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.19.layer.1.DenseReluDense.wo.weight', 'encoder.block.19.layer.1.layer_norm.weight', 'encoder.block.20.layer.0.SelfAttention.q.weight', 'encoder.block.20.layer.0.SelfAttention.k.weight', 'encoder.block.20.layer.0.SelfAttention.v.weight', 'encoder.block.20.layer.0.SelfAttention.o.weight', 'encoder.block.20.layer.0.layer_norm.weight', 'encoder.block.20.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.20.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.20.layer.1.DenseReluDense.wo.weight', 'encoder.block.20.layer.1.layer_norm.weight', 'encoder.block.21.layer.0.SelfAttention.q.weight', 'encoder.block.21.layer.0.SelfAttention.k.weight', 'encoder.block.21.layer.0.SelfAttention.v.weight', 'encoder.block.21.layer.0.SelfAttention.o.weight', 'encoder.block.21.layer.0.layer_norm.weight', 
'encoder.block.21.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.21.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.21.layer.1.DenseReluDense.wo.weight', 'encoder.block.21.layer.1.layer_norm.weight', 'encoder.block.22.layer.0.SelfAttention.q.weight', 'encoder.block.22.layer.0.SelfAttention.k.weight', 'encoder.block.22.layer.0.SelfAttention.v.weight', 'encoder.block.22.layer.0.SelfAttention.o.weight', 'encoder.block.22.layer.0.layer_norm.weight', 'encoder.block.22.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.22.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.22.layer.1.DenseReluDense.wo.weight', 'encoder.block.22.layer.1.layer_norm.weight', 'encoder.block.23.layer.0.SelfAttention.q.weight', 'encoder.block.23.layer.0.SelfAttention.k.weight', 'encoder.block.23.layer.0.SelfAttention.v.weight', 'encoder.block.23.layer.0.SelfAttention.o.weight', 'encoder.block.23.layer.0.layer_norm.weight', 'encoder.block.23.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.23.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.23.layer.1.DenseReluDense.wo.weight', 'encoder.block.23.layer.1.layer_norm.weight', 'encoder.final_layer_norm.weight', 'shared.weight']
2024-10-31 20:24:43,848 - root - ERROR - !!! Exception during processing !!!
2024-10-31 20:24:43,851 - root - ERROR - Traceback (most recent call last):
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 268, in generate
self.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels
self.pipeline.loadModels()
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels
self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in __init__
assert (BASE_MODEL_PATH / "clip_model.pt").exists()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-10-31 20:24:43,853 - root - INFO - Prompt executed in 15.69 seconds 2024-10-31 20:24:54,608 - root - INFO - got prompt 2024-10-31 20:24:55,519 - root - ERROR - !!! Exception during processing !!! 2024-10-31 20:24:55,520 - root - ERROR - Traceback (most recent call last): File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate joy_two_pipeline.parent.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels self.pipeline.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init assert (BASE_MODEL_PATH / "clip_model.pt").exists() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError
2024-10-31 20:24:55,524 - root - INFO - Prompt executed in 0.90 seconds 2024-10-31 20:26:05,687 - root - INFO - got prompt 2024-10-31 20:26:06,563 - root - ERROR - !!! Exception during processing !!! 2024-10-31 20:26:06,564 - root - ERROR - Traceback (most recent call last): File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate joy_two_pipeline.parent.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels self.pipeline.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init assert (BASE_MODEL_PATH / "clip_model.pt").exists() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError
2024-10-31 20:26:06,567 - root - INFO - Prompt executed in 0.87 seconds 2024-10-31 20:26:10,124 - root - INFO - got prompt 2024-10-31 20:26:11,018 - root - ERROR - !!! Exception during processing !!! 2024-10-31 20:26:11,019 - root - ERROR - Traceback (most recent call last): File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate joy_two_pipeline.parent.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels self.pipeline.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init assert (BASE_MODEL_PATH / "clip_model.pt").exists() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError
2024-10-31 20:26:11,023 - root - INFO - Prompt executed in 0.89 seconds 2024-10-31 20:26:25,154 - root - INFO - got prompt 2024-10-31 20:26:25,997 - root - ERROR - !!! Exception during processing !!! 2024-10-31 20:26:25,998 - root - ERROR - Traceback (most recent call last): File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate joy_two_pipeline.parent.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels self.pipeline.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init assert (BASE_MODEL_PATH / "clip_model.pt").exists() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError
2024-10-31 20:26:26,001 - root - INFO - Prompt executed in 0.84 seconds 2024-10-31 20:28:43,397 - root - INFO - got prompt 2024-10-31 20:28:46,967 - root - INFO - Requested to load FluxClipModel_ 2024-10-31 20:28:46,968 - root - INFO - Loading 1 new model 2024-10-31 20:28:46,979 - root - INFO - loaded completely 0.0 9319.23095703125 True 2024-10-31 20:28:47,274 - root - WARNING - clip missing: ['text_projection.weight'] 2024-10-31 20:28:52,189 - root - ERROR - !!! Exception during processing !!! 2024-10-31 20:28:52,190 - root - ERROR - Traceback (most recent call last): File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate joy_two_pipeline.parent.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels self.pipeline.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init assert (BASE_MODEL_PATH / "clip_model.pt").exists() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError
2024-10-31 20:28:52,197 - root - INFO - Prompt executed in 8.79 seconds 2024-10-31 20:35:32,764 - root - INFO - got prompt 2024-10-31 20:35:33,638 - root - ERROR - !!! Exception during processing !!! 2024-10-31 20:35:33,639 - root - ERROR - Traceback (most recent call last): File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 300, in generate joy_two_pipeline.parent.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 263, in loadModels self.pipeline.loadModels() File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 229, in loadModels self.clip_model = JoyClipVisionModel(self.load_device, self.offload_device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two-main\joy_caption_two_node.py", line 51, in init assert (BASE_MODEL_PATH / "clip_model.pt").exists() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError
2024-10-31 20:35:33,641 - root - INFO - Prompt executed in 0.86 seconds
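
The repeated AssertionError is raised by the custom node itself: joy_caption_two_node.py line 51 asserts that clip_model.pt exists under the node's BASE_MODEL_PATH before loading anything, so the failure means the Joy Caption Two model files were never downloaded or landed in the wrong folder. Below is a minimal diagnostic sketch in Python; the ComfyUI root is taken from the traceback paths, but the candidate model directories are assumptions, not the node's documented layout -- the ComfyUI_SLK_joy_caption_two README is the authority on where clip_model.pt must live.

import sys
from pathlib import Path

# Root taken from the traceback paths above; adjust to your install.
COMFYUI_ROOT = Path(r"D:\Ai\换脸\ComfyUI_windows_portable\ComfyUI")

# Hypothetical candidate locations for the node's BASE_MODEL_PATH.
CANDIDATES = [
    COMFYUI_ROOT / "models" / "Joy_caption_two",
    COMFYUI_ROOT / "models" / "clip" / "Joy_caption_two",
]

def main() -> int:
    """Print where clip_model.pt was (not) found, mirroring the failing assert."""
    found = False
    for base in CANDIDATES:
        target = base / "clip_model.pt"
        print(("FOUND   " if target.exists() else "missing ") + str(target))
        found = found or target.exists()
    return 0 if found else 1

if __name__ == "__main__":
    sys.exit(main())

If every candidate is missing, download the model files per the node's instructions and restart ComfyUI. The workflow that produced the runs above is embedded below.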
{"last_node_id":274,"last_link_id":445,"nodes":[{"id":75,"type":"SplitSigmasDenoise","pos":{"0":1988.4447021484375,"1":796.4603271484375},"size":{"0":230,"1":80},"flags":{},"order":18,"mode":0,"inputs":[{"name":"sigmas","type":"SIGMAS","link":145,"label":"Sigmas"}],"outputs":[{"name":"high_sigmas","type":"SIGMAS","links":[183],"slot_index":0,"shape":3,"label":"高Sigmas"},{"name":"low_sigmas","type":"SIGMAS","links":[189],"slot_index":1,"shape":3,"label":"低Sigmas"}],"properties":{"Node name for S&R":"SplitSigmasDenoise"},"widgets_values":[0.6]},{"id":90,"type":"DisableNoise","pos":{"0":2338.44482421875,"1":546.4602661132812},"size":{"0":210,"1":30},"flags":{"collapsed":true},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"NOISE","type":"NOISE","links":[190],"shape":3,"label":"噪波生成"}],"properties":{"Node name for S&R":"DisableNoise"},"widgets_values":[]},{"id":91,"type":"VAEDecode","pos":{"0":2323.42529296875,"1":662.09912109375},"size":{"0":210,"1":50},"flags":{},"order":25,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":232,"label":"Latent"},{"name":"vae","type":"VAE","link":281,"label":"VAE"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[201],"slot_index":0,"shape":3,"label":"图像"}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":45,"type":"PulidFluxModelLoader","pos":{"0":745.654052734375,"1":875.830810546875},"size":{"0":320,"1":60},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"PULIDFLUX","type":"PULIDFLUX","links":[125],"slot_index":0,"shape":3,"label":"PULIDFLUX"}],"properties":{"Node name for S&R":"PulidFluxModelLoader"},"widgets_values":["pulid_flux_v0.9.0.safetensors"]},{"id":16,"type":"KSamplerSelect","pos":{"0":1527.654052734375,"1":545.830810546875},"size":{"0":210,"1":58},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"SAMPLER","type":"SAMPLER","links":[85,186],"slot_index":0,"shape":3,"label":"采样器"}],"properties":{"Node name for S&R":"KSamplerSelect"},"widgets_values":["euler"]},{"id":25,"type":"RandomNoise","pos":{"0":1535.654052734375,"1":488.83056640625},"size":{"0":320,"1":82},"flags":{"collapsed":true},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"NOISE","type":"NOISE","links":[84],"slot_index":0,"shape":3,"label":"噪波生成"}],"properties":{"Node name for S&R":"RandomNoise"},"widgets_values":[410462816538941,"randomize"],"color":"#2a363b","bgcolor":"#3f5159"},{"id":26,"type":"FluxGuidance","pos":{"0":1536.654052734375,"1":935.830810546875},"size":{"0":320,"1":60},"flags":{"collapsed":true},"order":20,"mode":0,"inputs":[{"name":"conditioning","type":"CONDITIONING","link":41,"label":"条件"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[107],"slot_index":0,"shape":3,"label":"条件"}],"properties":{"Node name for S&R":"FluxGuidance"},"widgets_values":[3.5],"color":"#233","bgcolor":"#355"},{"id":47,"type":"BasicGuider","pos":{"0":1525.654052734375,"1":830.830810546875},"size":{"0":240,"1":50},"flags":{},"order":21,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":168,"label":"模型"},{"name":"conditioning","type":"CONDITIONING","link":107,"label":"条件"}],"outputs":[{"name":"GUIDER","type":"GUIDER","links":[83,185],"slot_index":0,"shape":3,"label":"引导"}],"properties":{"Node name for 
S&R":"BasicGuider"},"widgets_values":[]},{"id":89,"type":"SamplerCustomAdvanced","pos":{"0":2340.65380859375,"1":474.83056640625},"size":{"0":520,"1":510},"flags":{"collapsed":true},"order":24,"mode":0,"inputs":[{"name":"noise","type":"NOISE","link":190,"label":"噪波生成"},{"name":"guider","type":"GUIDER","link":185,"label":"引导"},{"name":"sampler","type":"SAMPLER","link":186,"label":"采样器"},{"name":"sigmas","type":"SIGMAS","link":189,"label":"Sigmas"},{"name":"latent_image","type":"LATENT","link":194,"label":"Latent"}],"outputs":[{"name":"output","type":"LATENT","links":[232],"slot_index":0,"shape":3,"label":"输出"},{"name":"denoised_output","type":"LATENT","links":[],"slot_index":1,"shape":3,"label":"降噪输出"}],"properties":{"Node name for S&R":"SamplerCustomAdvanced"},"widgets_values":[]},{"id":71,"type":"LayerUtility: CropByMask V2","pos":{"0":202,"1":403},"size":{"0":392.53765869140625,"1":481.29205322265625},"flags":{},"order":16,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":139,"label":"图像"},{"name":"mask","type":"MASK","link":235},{"name":"crop_box","type":"BOX","link":null,"label":"裁剪框","shape":7}],"outputs":[{"name":"croped_image","type":"IMAGE","links":[198],"slot_index":0,"shape":3,"label":"裁剪图像"},{"name":"croped_mask","type":"MASK","links":null,"shape":3,"label":"裁剪遮罩"},{"name":"crop_box","type":"BOX","links":null,"shape":3,"label":"裁剪框"},{"name":"box_preview","type":"IMAGE","links":null,"shape":3,"label":"裁剪框预览"}],"properties":{"Node name for S&R":"LayerUtility: CropByMask V2"},"widgets_values":[false,"mask_area",20,20,20,20,"8"],"color":"rgba(38, 73, 116, 0.7)"},{"id":48,"type":"SamplerCustomAdvanced","pos":{"0":1777.654052734375,"1":467.83056640625},"size":{"0":169.96287536621094,"1":147.89463806152344},"flags":{"collapsed":false},"order":22,"mode":0,"inputs":[{"name":"noise","type":"NOISE","link":84,"label":"噪波生成"},{"name":"guider","type":"GUIDER","link":83,"label":"引导"},{"name":"sampler","type":"SAMPLER","link":85,"label":"采样器"},{"name":"sigmas","type":"SIGMAS","link":183,"label":"Sigmas"},{"name":"latent_image","type":"LATENT","link":387,"label":"Latent"}],"outputs":[{"name":"output","type":"LATENT","links":[195],"slot_index":0,"shape":3,"label":"输出"},{"name":"denoised_output","type":"LATENT","links":[],"slot_index":1,"shape":3,"label":"降噪输出"}],"properties":{"Node name for S&R":"SamplerCustomAdvanced"},"widgets_values":[]},{"id":54,"type":"LoadImage","pos":{"0":318,"1":-581},"size":{"0":432.25189208984375,"1":397.7892761230469},"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[139,234],"slot_index":0,"shape":3,"label":"图像"},{"name":"MASK","type":"MASK","links":null,"shape":3,"label":"遮罩"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["ComfyUI_temp_tesov00001.png","image"]},{"id":119,"type":"LayerMask: PersonMaskUltra V2","pos":{"0":-158,"1":407},"size":{"0":344.6835021972656,"1":470.00762939453125},"flags":{},"order":13,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":234,"label":"图像"}],"outputs":[{"name":"image","type":"IMAGE","links":null,"shape":3,"label":"图像"},{"name":"mask","type":"MASK","links":[235],"slot_index":1,"shape":3,"label":"遮罩"}],"properties":{"Node name for S&R":"LayerMask: PersonMaskUltra V2"},"widgets_values":[true,true,false,false,false,false,0.4,"VITMatte",6,6,0.01,0.99,true,"cuda",2],"color":"rgba(27, 80, 119, 
0.7)"},{"id":53,"type":"PulidFluxInsightFaceLoader","pos":{"0":1111.65380859375,"1":554.8307495117188},"size":{"0":370,"1":60},"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"FACEANALYSIS","type":"FACEANALYSIS","links":[124],"slot_index":0,"shape":3,"label":"FACEANALYSIS"}],"properties":{"Node name for S&R":"PulidFluxInsightFaceLoader"},"widgets_values":["CUDA"]},{"id":17,"type":"BasicScheduler","pos":{"0":1531.654052734375,"1":652.830810546875},"size":{"0":210,"1":114.03009796142578},"flags":{"collapsed":false},"order":15,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":392,"slot_index":0,"label":"模型"}],"outputs":[{"name":"SIGMAS","type":"SIGMAS","links":[145],"slot_index":0,"shape":3,"label":"Sigmas"}],"properties":{"Node name for S&R":"BasicScheduler"},"widgets_values":["ddim_uniform",30,1]},{"id":95,"type":"SaveImage","pos":{"0":801,"1":-629},"size":{"0":1632.8887939453125,"1":906.1729736328125},"flags":{},"order":26,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":201,"label":"图像"}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI"]},{"id":239,"type":"EmptyLatentImage","pos":{"0":-167,"1":-86},"size":{"0":901.4378662109375,"1":109.62405395507812},"flags":{"collapsed":false},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[387],"shape":3,"label":"Latent"}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[896,1152,1]},{"id":93,"type":"InjectLatentNoise+","pos":{"0":1978.4447021484375,"1":586.4603881835938},"size":{"0":320,"1":150},"flags":{},"order":23,"mode":0,"inputs":[{"name":"latent","type":"LATENT","link":195,"label":"Latent"},{"name":"mask","type":"MASK","link":null,"label":"遮罩","shape":7}],"outputs":[{"name":"LATENT","type":"LATENT","links":[194],"slot_index":0,"shape":3,"label":"Latent"}],"properties":{"Node name for S&R":"InjectLatentNoise+"},"widgets_values":[181176520119247,"randomize",0.3,"true"]},{"id":10,"type":"VAELoader","pos":{"0":1989,"1":459},"size":{"0":310,"1":60},"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[281],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["ae.safetensors"]},{"id":67,"type":"LoadImage","pos":{"0":-156,"1":-585},"size":{"0":454.5463562011719,"1":398.754150390625},"flags":{},"order":8,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[443],"slot_index":0,"shape":3,"label":"图像"},{"name":"MASK","type":"MASK","links":null,"shape":3,"label":"遮罩"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["ComfyUI_temp_jhxrg00003.png","image"]},{"id":267,"type":"Joy_caption_two_load","pos":{"0":-125,"1":166},"size":{"0":315,"1":58},"flags":{"collapsed":false},"order":9,"mode":0,"inputs":[],"outputs":[{"name":"JoyTwoPipeline","type":"JoyTwoPipeline","links":[441],"slot_index":0,"label":"JoyTwoPipeline"}],"properties":{"Node name for S&R":"Joy_caption_two_load"},"widgets_values":["unsloth/Meta-Llama-3.1-8B-Instruct"]},{"id":51,"type":"PulidFluxEvaClipLoader","pos":{"0":1122,"1":464},"size":{"0":330,"1":30},"flags":{},"order":10,"mode":0,"inputs":[],"outputs":[{"name":"EVA_CLIP","type":"EVA_CLIP","links":[444],"slot_index":0,"shape":3,"label":"EVA_CLIP"}],"properties":{"Node name for 
S&R":"PulidFluxEvaClipLoader"},"widgets_values":[]},{"id":62,"type":"ApplyPulidFlux","pos":{"0":1139,"1":694},"size":{"0":320,"1":210},"flags":{},"order":19,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":391,"label":"model"},{"name":"pulid_flux","type":"PULIDFLUX","link":125,"label":"pulid_flux"},{"name":"eva_clip","type":"EVA_CLIP","link":444,"label":"eva_clip"},{"name":"face_analysis","type":"FACEANALYSIS","link":124,"label":"face_analysis"},{"name":"image","type":"IMAGE","link":198,"label":"image"},{"name":"attn_mask","type":"MASK","link":null,"label":"attn_mask","shape":7}],"outputs":[{"name":"MODEL","type":"MODEL","links":[168],"slot_index":0,"shape":3,"label":"MODEL"}],"properties":{"Node name for S&R":"ApplyPulidFlux"},"widgets_values":[1,0,0.98]},{"id":163,"type":"DualCLIPLoader","pos":{"0":776,"1":705},"size":{"0":315,"1":106},"flags":{},"order":11,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[439],"slot_index":0,"shape":3,"label":"CLIP"}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["clip_l.safetensors","t5xxl_fp16.safetensors","flux"]},{"id":94,"type":"UNETLoader","pos":{"0":779.654052734375,"1":567.830810546875},"size":{"0":320,"1":82},"flags":{},"order":12,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[391,392],"slot_index":0,"shape":3,"label":"模型"}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["flux1-dev.safetensors","default"]},{"id":6,"type":"CLIPTextEncode","pos":{"0":2312,"1":825},"size":{"0":210,"1":60},"flags":{"collapsed":false},"order":17,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":439,"label":"CLIP"},{"name":"text","type":"STRING","link":445,"slot_index":1,"widget":{"name":"text"},"label":"文本"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[41],"slot_index":0,"label":"条件"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["Half body portrait of 60 years old guy, with an surprised expression, he is lost in vectors of AI models, sourounded by PC monitors and many cables, on his tshirt is a text with words printed in Arial font:\"PuLID Flux\", detailed, glowy background, photorealistic style with skin inperfections, looks like shot with an smartphone, skin details without plastic look, ASUS Keyboard."],"color":"#232","bgcolor":"#353"},{"id":274,"type":"Joy_caption_two","pos":{"0":390,"1":150},"size":{"0":315,"1":126},"flags":{},"order":14,"mode":0,"inputs":[{"name":"joy_two_pipeline","type":"JoyTwoPipeline","link":441,"label":"joy_two_pipeline"},{"name":"image","type":"IMAGE","link":443,"label":"image"}],"outputs":[{"name":"STRING","type":"STRING","links":[445],"slot_index":0,"label":"STRING"}],"properties":{"Node name for S&R":"Joy_caption_two"},"widgets_values":["Descriptive 
(Informal)","long",false]}],"links":[[41,6,0,26,0,"CONDITIONING"],[83,47,0,48,1,"GUIDER"],[84,25,0,48,0,"NOISE"],[85,16,0,48,2,"SAMPLER"],[107,26,0,47,1,"CONDITIONING"],[124,53,0,62,3,"FACEANALYSIS"],[125,45,0,62,1,"PULIDFLUX"],[139,54,0,71,0,"IMAGE"],[145,17,0,75,0,"SIGMAS"],[168,62,0,47,0,"MODEL"],[183,75,0,48,3,"SIGMAS"],[185,47,0,89,1,"GUIDER"],[186,16,0,89,2,"SAMPLER"],[189,75,1,89,3,"SIGMAS"],[190,90,0,89,0,"NOISE"],[194,93,0,89,4,"LATENT"],[195,48,0,93,0,"LATENT"],[198,71,0,62,4,"IMAGE"],[201,91,0,95,0,"IMAGE"],[232,89,0,91,0,"LATENT"],[234,54,0,119,0,"IMAGE"],[235,119,1,71,1,"MASK"],[281,10,0,91,1,"VAE"],[387,239,0,48,4,"LATENT"],[391,94,0,62,0,"MODEL"],[392,94,0,17,0,"MODEL"],[439,163,0,6,0,"CLIP"],[441,267,0,274,0,"JoyTwoPipeline"],[443,67,0,274,1,"IMAGE"],[444,51,0,62,2,"EVA_CLIP"],[445,274,0,6,1,"STRING"]],"groups":[{"title":"如果第一次在线生图出现提示:显存不足,重新在线生图一下即可成功","bounding":[-173,-795,4771,98],"color":"#88A","font_size":60,"flags":{}},{"title":"JOY提示词反推—自己部署的话看我主页部署教程","bounding":[-178,60,940,261],"color":"#3f789e","font_size":22,"flags":{}},{"title":"图像处理","bounding":[-174,331,780,557],"color":"#3f789e","font_size":24,"flags":{}},{"title":"工作区域","bounding":[724,393,1836,557],"color":"#3f789e","font_size":24,"flags":{}},{"title":"上传— 图片参考","bounding":[-172,-689,470,517],"color":"#3f789e","font_size":40,"flags":{}},{"title":"宽高比设置","bounding":[-174,-165,938,219],"color":"#3f789e","font_size":24,"flags":{}},{"title":"海报生成","bounding":[768,-688,1679,1011],"color":"#3f789e","font_size":40,"flags":{}},{"title":"上传— 脸部照片","bounding":[305,-689,459,517],"color":"#3f789e","font_size":40,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.6115909044841543,"offset":[618.3272332607767,579.544696718227]},"workspaceinfo":{"id":"ulqKMamHmtTjVdROR-Pe"},"0246.VERSION":[0,0,4]},"version":0.4}