File "C:\Users\123user\Desktop\Video\eva\YOLO[1yoloseg.py](http://1yoloseg.py/)", line 13, in
results = model(source, conf=0.5, save=True, retina_masks=True, boxes=False) # list of Results objects
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine[model.py](http://model.py/)", line 98, in call
return self.predict(source, stream, **kwargs)
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine[model.py](http://model.py/)", line 239, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine[predictor.py](http://predictor.py/)", line 198, in call
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils[_contextlib.py](http://_contextlib.py/)", line 56, in generator_context
response = gen.send(request)
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine[predictor.py](http://predictor.py/)", line 254, in stream_inference
for batch in self.dataset:
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\data[loaders.py](http://loaders.py/)", line 348, in next
raise FileNotFoundError(f'Image Not Found {path}')
FileNotFoundError: Image Not Found C:\Users\123user\Desktop\Video\eva\YOLO..\2Yuantu\000085.png
transparent-background
Settings -> Mode=fast, Device=cuda:0, Torchscript=disabled
000059.png: 15%|███████▌ | 59/393 [00:08<00:38, 8.71it/s]Traceback (most recent call last):
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib[runpy.py](http://runpy.py/)", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib[runpy.py](http://runpy.py/)", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\Scripts\transparent-background.exe[main.py](http://__main__.py/)", line 7, in
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\transparent_background[Remover.py](http://remover.py/)", line 277, in console
for img, name in loader:
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\transparent_background[utils.py](http://utils.py/)", line 144, in next
img = Image.open(self.images[self.index]).convert('RGB')
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL[Image.py](http://image.py/)", line 916, in convert
self.load()
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL[ImageFile.py](http://imagefile.py/)", line 288, in load
raise_oserror(err_code)
File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL[ImageFile.py](http://imagefile.py/)", line 72, in raise_oserror
raise OSError(msg)
OSError: broken data stream when reading image file
fps: 30.0
key_min_gap: 3
key_max_gap: 10
key_th: 8
libpng error: IDAT: incorrect data check
Error completing request
Arguments: (1, 'C:\Users\123user\Desktop\456\eva', 'C:\Users\123user\Desktop\456\eva\1.mp4', -1, -1, 0, 0, True, True, '', '', 0, 11, 11, 3, 10, 8, True, 'hm-mkl-hm', True, False, False, 0, None, 1, 'mp4', '', 'Fit video length', 5, 0, 0, 'Normal') {}
Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules[call_queue.py](http://call_queue.py/)", line 57, in f
res = list(func(*args, *kwargs))
File "C:\stable-diffusion-webui\modules[call_queue.py](http://call_queue.py/)", line 36, in f
res = func(args, *kwargs)
File "C:\stable-diffusion-webui\extensions\ebsynth_utility[ebsynth_utility.py](http://ebsynth_utility.py/)", line 85, in ebsynth_utility_process
ebsynth_utility_stage2(dbg, project_args, key_min_gap, key_max_gap, key_th, key_add_last_frame, is_invert_mask)
File "C:\stable-diffusion-webui\extensions\ebsynth_utility[stage2.py](http://stage2.py/)", line 153, in ebsynth_utility_stage2
keys = analyze_key_frames(frame_path, frame_mask_path, key_th, key_min_gap, key_max_gap, key_add_last_frame, is_invert_mask)
File "C:\stable-diffusion-webui\extensions\ebsynth_utility[stage2.py](http://stage2.py/)", line 90, in analyze_key_frames
edges = detect_edges( frame, get_mask_path_of_img( frame, mask_dir ), is_invert_mask )
File "C:\stable-diffusion-webui\extensions\ebsynth_utility[stage2.py](http://stage2.py/)", line 62, in detect_edges
im = im ( (mask == 0) if is_invert_mask else (mask > 0) )
TypeError: unsupported operand type(s) for *: 'NoneType' and 'bool'
These problems seem to be related to [libpng error: IDAT: incorrect data check]
I have tried searching on google and also tried chatgpt, but I still can't solve it. I really need help. If you need more information, please tell me and I will provide more information. Please work together to solve the problem. Thank you.
I have encountered similar problems in multiple programs, which all seem to involve the same error:

libpng error: IDAT: incorrect data check

The affected programs are:
YOLO
transparent-background
stage2.py of ebsynth_utility (https://github.com/s9roll7/ebsynth_utility/blob/main/stage2.py)

The following errors occurred in each of them.
YOLO
image 82/393 C:\Users\123user\Desktop\Video\eva\YOLO..\2Yuantu\000082.png: 640x384 1 person, 4.0ms
image 83/393 C:\Users\123user\Desktop\Video\eva\YOLO..\2Yuantu\000083.png: 640x384 1 person, 4.0ms
image 84/393 C:\Users\123user\Desktop\Video\eva\YOLO..\2Yuantu\000084.png: 640x384 1 person, 4.0ms
libpng error: bad adaptive filter value
Traceback (most recent call last):
  File "C:\Users\123user\Desktop\Video\eva\YOLO\1yoloseg.py", line 13, in <module>
    results = model(source, conf=0.5, save=True, retina_masks=True, boxes=False)  # list of Results objects
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\model.py", line 98, in __call__
    return self.predict(source, stream, **kwargs)
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\model.py", line 239, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\predictor.py", line 198, in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge list of Result into one
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 56, in generator_context
    response = gen.send(request)
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\predictor.py", line 254, in stream_inference
    for batch in self.dataset:
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\data\loaders.py", line 348, in __next__
    raise FileNotFoundError(f'Image Not Found {path}')
FileNotFoundError: Image Not Found C:\Users\123user\Desktop\Video\eva\YOLO..\2Yuantu\000085.png
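If I understand the ultralytics loader correctly, this "Image Not Found" error appears whenever it gets None back from cv2.imread, which also happens when the file exists but cannot be decoded (note the libpng error printed right before frame 000085 fails). A minimal check along these lines should show whether that frame can be decoded at all; the path below is just an example and needs to be adjusted:

# Minimal check (my own sketch, not part of ultralytics): cv2.imread returns None
# instead of raising when a PNG cannot be decoded, which seems to be why the
# loader reports "Image Not Found" even though the file exists on disk.
import cv2

frame = r"C:\path\to\2Yuantu\000085.png"  # example path, adjust to the real frame folder
img = cv2.imread(frame)
if img is None:
    print("file exists but could not be decoded (or is missing)")
else:
    print("decoded fine:", img.shape)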
transparent-background
Settings -> Mode=fast, Device=cuda:0, Torchscript=disabled
000059.png:  15%|███████▌  | 59/393 [00:08<00:38,  8.71it/s]
Traceback (most recent call last):
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\Scripts\transparent-background.exe\__main__.py", line 7, in <module>
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\transparent_background\Remover.py", line 277, in console
    for img, name in loader:
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\transparent_background\utils.py", line 144, in __next__
    img = Image.open(self.images[self.index]).convert('RGB')
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\Image.py", line 916, in convert
    self.load()
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\ImageFile.py", line 288, in load
    raise_oserror(err_code)
  File "C:\Users\123user\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\ImageFile.py", line 72, in raise_oserror
    raise OSError(msg)
OSError: broken data stream when reading image file
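To find out how many frames are affected, rather than hitting them one at a time, something like the following Pillow loop should list every PNG in the frame folder that fails to decode fully (again, the folder path is only an example):

# My own diagnostic sketch (not part of transparent-background): list every PNG
# in the frame folder that Pillow cannot fully decode.
import glob
from PIL import Image

frame_dir = r"C:\path\to\2Yuantu"  # example path, adjust to the real frame folder
bad = []
for path in sorted(glob.glob(frame_dir + r"\*.png")):
    try:
        with Image.open(path) as im:
            im.load()  # force a full decode; opening alone does not read the IDAT data
    except OSError as err:
        bad.append((path, err))

for path, err in bad:
    print(path, "->", err)
print(len(bad), "broken frame(s) found")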
stage2.py of ebsynth_utility
stage2
fps: 30.0
key_min_gap: 3
key_max_gap: 10
key_th: 8
libpng error: IDAT: incorrect data check
Error completing request
Arguments: (1, 'C:\Users\123user\Desktop\456\eva', 'C:\Users\123user\Desktop\456\eva\1.mp4', -1, -1, 0, 0, True, True, '', '', 0, 11, 11, 3, 10, 8, True, 'hm-mkl-hm', True, False, False, 0, None, 1, 'mp4', '', 'Fit video length', 5, 0, 0, 'Normal') {}
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\ebsynth_utility\ebsynth_utility.py", line 85, in ebsynth_utility_process
    ebsynth_utility_stage2(dbg, project_args, key_min_gap, key_max_gap, key_th, key_add_last_frame, is_invert_mask)
  File "C:\stable-diffusion-webui\extensions\ebsynth_utility\stage2.py", line 153, in ebsynth_utility_stage2
    keys = analyze_key_frames(frame_path, frame_mask_path, key_th, key_min_gap, key_max_gap, key_add_last_frame, is_invert_mask)
  File "C:\stable-diffusion-webui\extensions\ebsynth_utility\stage2.py", line 90, in analyze_key_frames
    edges = detect_edges( frame, get_mask_path_of_img( frame, mask_dir ), is_invert_mask )
  File "C:\stable-diffusion-webui\extensions\ebsynth_utility\stage2.py", line 62, in detect_edges
    im = im * ( (mask == 0) if is_invert_mask else (mask > 0) )
TypeError: unsupported operand type(s) for *: 'NoneType' and 'bool'
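My reading of this last traceback (please correct me if I am wrong) is that it is a follow-on error from the same broken frame: cv2.imread only prints the libpng message and returns None, and detect_edges then multiplies that None by the mask. A tiny sketch of that failure mode in isolation:

# Reproducing my understanding of the stage2.py failure (an assumption, not code
# from ebsynth_utility): im is None because cv2.imread could not decode the frame,
# and multiplying None by the mask array raises the TypeError.
import numpy as np

im = None                                # what cv2.imread returns for an undecodable PNG
mask = np.zeros((4, 4), dtype=np.uint8)  # stand-in for a valid mask image
try:
    out = im * (mask > 0)                # mirrors stage2.py line 62
except TypeError as err:
    print(err)                           # unsupported operand type(s) for * ...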
All of these problems seem to be related to the same error, [libpng error: IDAT: incorrect data check]: in every case one of the extracted PNG frames apparently cannot be decoded, and each tool then fails in its own way.
I have tried searching on Google and asking ChatGPT, but I still can't solve it. I really need help. If you need more information, please tell me and I will provide it. I hope we can work out this problem together. Thank you.