Closed LeoMusk closed 4 months ago
It's a weird one. All I can think of is either some dependency version issue, or the resize node you're using, since I'm not familiar with it or what it returns.
Thanks, I think it was an issue with the node that modifies the image size. Just removing it works.
I'm not very technical. Is there any solution to the error below? Thanks.
Error occurred when executing EasyAnimateSampler:
The image to be converted to a PIL image contains values outside the range [0, 1], got [-0.07665368914604187, 1.0637787580490112] which cannot be converted to uint8.
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(*slice_dict(input_data_all, i)))
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper\nodes.py", line 227, in process
  sample = pipeline(
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper\easyanimate\pipeline\pipeline_easyanimate_inpaint.py", line 999, in __call__
  inputs = self.clip_image_processor(images=clip_image, return_tensors="pt")
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\image_processing_utils.py", line 551, in __call__
  return self.preprocess(images, **kwargs)
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\models\clip\image_processing_clip.py", line 323, in preprocess
  images = [
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\models\clip\image_processing_clip.py", line 324, in <listcomp>
  self.resize(image=image, size=size, resample=resample, input_data_format=input_data_format)
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\models\clip\image_processing_clip.py", line 191, in resize
  return resize(
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\image_transforms.py", line 326, in resize
  do_rescale = _rescale_for_pil_conversion(image)
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\image_transforms.py", line 150, in _rescale_for_pil_conversion
  raise ValueError(
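For anyone hitting the same error: the transformers CLIP image processor refuses float images whose values fall outside [0, 1], which is what the upstream resize node produced here. Besides removing the offending node, a generic workaround is to clamp the image back into range before it reaches the processor. This is just a minimal sketch, not part of the EasyAnimate wrapper, and `clamp_for_pil` is a hypothetical helper name; with a torch tensor the equivalent would be `image.clamp(0, 1)`.

```python
import numpy as np

def clamp_for_pil(image):
    """Clip float pixel values into [0, 1] so they can be safely
    converted to uint8 / PIL; out-of-range values are saturated."""
    return np.clip(image, 0.0, 1.0)

# The offending range from the error message above:
img = np.array([-0.07665368914604187, 0.5, 1.0637787580490112])
print(clamp_for_pil(img))  # all values now within [0, 1]
```

Note that clamping silently discards the out-of-range information, so it only masks the symptom; if a node is producing values well outside [0, 1], it is worth fixing or removing that node as done above.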