Closed: vanche1212 closed this issue 2 months ago
diffusers==0.30.1
The prompt is probably just too long.
Thanks, it really is too long. Is there any way to use a longer prompt?
Here is my prompt, only 1047 characters:
In the blue underwater world, an elegant turtle leisurely shuttles between coral reefs. The turtle's shell is covered with exquisite textures, and the sun shines through the water with mottled light and shadow, making its shell shine with a mysterious luster. The turtle swings its flippers slowly, light and graceful, as if soaring through the water. The underwater world around is colorful, and corals of different shapes are like sculptures of nature, decorating this peaceful ocean paradise. Colorful tropical fish swam in groups among the corals. Some fish stopped beside the turtle as if to say hello to it. The whole scene was full of vigor and vitality. In the distance, the dark blue water gradually became dark and mysterious, as if hiding countless undiscovered secrets. And the turtle is so calm and comfortable in this vast underwater world, as if it is the guardian of this ocean. The whole picture shows a quiet and harmonious natural beauty with soft colors and smooth lines, showing the fantastic world in the depths of the ocean.
I found the limit is 226 tokens.
Yes, I think it's best to stay within that limit. I did modify the code a bit to allow it to run with longer prompts too, but the results seem worse; if you update, you can test it yourself.
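For anyone hitting this, a quick way to see whether a prompt fits is to count tokens with the same T5 tokenizer the pipeline loads. A minimal sketch, assuming the THUDM/CogVideoX-2b checkpoint layout (swap in whichever CogVideoX variant you actually use); the 226 value matches the pipeline's default max_sequence_length:

```python
# Minimal sketch: count how many T5 tokens a prompt produces and compare it
# against CogVideoX's 226-token limit. Model id is an assumption; adjust it.
from transformers import AutoTokenizer

MAX_SEQUENCE_LENGTH = 226  # default max_sequence_length in the CogVideoX pipeline

tokenizer = AutoTokenizer.from_pretrained("THUDM/CogVideoX-2b", subfolder="tokenizer")

def count_prompt_tokens(prompt: str) -> int:
    # Tokenize without truncation so we see the real length, not the clipped one.
    token_ids = tokenizer(prompt, truncation=False, return_tensors="pt").input_ids
    return token_ids.shape[-1]

prompt = "In the blue underwater world, an elegant turtle ..."
n_tokens = count_prompt_tokens(prompt)
if n_tokens > MAX_SEQUENCE_LENGTH:
    print(f"Prompt is {n_tokens} tokens; anything past {MAX_SEQUENCE_LENGTH} will be truncated.")
```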
Thanks, bro.
Error occurred when executing CogVideoSampler:

`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but got: `prompt_embeds` torch.Size([1, 452, 4096]) != `negative_prompt_embeds` torch.Size([1, 226, 4096]).

File "/data/ComfyUI/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/data/ComfyUI/execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/data/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
File "/data/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(*inputs))
File "/data/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/nodes.py", line 289, in process
    latents = pipeline["pipe"](
File "/etc/anaconda3/envs/comfyui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "/data/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/pipeline_cogvideox.py", line 391, in __call__
    self.check_inputs(
File "/data/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/pipeline_cogvideox.py", line 225, in check_inputs
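The shapes in that error come from a 452-token prompt embedding being paired with a 226-token negative prompt embedding. If you build the embeddings yourself, a minimal sketch of one workaround is to zero-pad the shorter tensor along the sequence dimension so both share the same shape before they reach the pipeline; the shapes below are taken from the traceback, everything else is illustrative:

```python
# Minimal sketch (assumed workaround, not the wrapper's own fix): pad whichever
# embedding is shorter on the sequence dimension so check_inputs sees equal shapes.
import torch
import torch.nn.functional as F

prompt_embeds = torch.randn(1, 452, 4096)           # stand-in for the long prompt's embeds
negative_prompt_embeds = torch.randn(1, 226, 4096)  # stand-in for the negative prompt's embeds

def pad_to_same_length(a: torch.Tensor, b: torch.Tensor):
    # Zero-pad the shorter tensor on dim=1 (the sequence dimension).
    target = max(a.shape[1], b.shape[1])
    a = F.pad(a, (0, 0, 0, target - a.shape[1]))
    b = F.pad(b, (0, 0, 0, target - b.shape[1]))
    return a, b

prompt_embeds, negative_prompt_embeds = pad_to_same_length(prompt_embeds, negative_prompt_embeds)
assert prompt_embeds.shape == negative_prompt_embeds.shape  # the shape check now passes
```

Whether the model behaves well past 226 tokens is a separate question; as noted above, results tend to get worse with longer prompts.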