williamyang1991 / FRESCO

[CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation
https://www.mmlab-ntu.com/project/fresco/

video_blend error #44

Closed inferno46n2 closed 6 months ago

inferno46n2 commented 6 months ago

Thank you for updating the README for video_blend!

I am now getting this error when I try to run your script. My folder is located at ./videos/alien, and within that folder I have two subfolders, keys and video, each containing 112 PNG files labeled from 0000.png up to 0111.png.

(diffusers) C:\Users\infer\OneDrive\Documents\Fresco\FRESCO>python video_blend.py ./videos/alien/ --key keys --key_ind 0 11 23 33 49 60 72 82 93 106 --output ./videos/alien/blend.mp4 --fps 24 --n_proc 4 -ps
Base directory: ./videos/alien/
Key indices: [0, 11, 23, 33, 49, 60, 72, 82, 93, 106]
Key directory: keys
Number of sequences: 9
Process Process-3:
Process Process-1:
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 70, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 21, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
AssertionError
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 70, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 21, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
AssertionError
Process Process-4:
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 70, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 21, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
AssertionError
Process Process-2:
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 70, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 21, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
AssertionError
ebsynth: 2.203904628753662
[ WARN:0@3.359] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0000.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.365] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0001.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.367] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.368] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.370] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.372] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.374] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.375] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.377] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.379] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.381] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_0\0010.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.382] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0011.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.384] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0001.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.386] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.388] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.389] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.391] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.393] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.394] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.396] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.398] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:0@3.400] global loadsave.cpp:248 cv::findDecoder imread_('./videos/alien/out_11\0010.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "video_blend.py", line 312, in <module>
    main(args)
  File "video_blend.py", line 272, in main
    process_seq(video_sequence, i, blend_histogram, blend_gradient)
  File "video_blend.py", line 204, in process_seq
    dist1s.append(load_error(bin_a, img_shape))
  File "video_blend.py", line 165, in load_error
    with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: './videos/alien/out_0\0001.bin'

inferno46n2 commented 6 months ago

I went into the EBsynth folder and enabled read and execute permissions, and I set the log option to true as per another closed comment, but it didn't seem to fix it... maybe I did it wrong?

williamyang1991 commented 6 months ago

assert h % num_splits == 0 and w % num_splits == 0

You can print h, num_splits, and w to see what is wrong. It may be because your video resolution is not suitable: it's best if both dimensions are divisible by 64, for example 512x512, 640x640, etc.
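
For reference, a minimal sketch of that debug print (the file and the assert are taken from the traceback above; the exact line numbers inside split_feature may differ in your copy):

```python
# src/ebsynth/deps/gmflow/gmflow/utils.py, inside split_feature()
# (sketch: add the print directly above the assert that fails in the traceback)
print(f"h: {h}, num_splits: {num_splits}, w: {w}")
assert h % num_splits == 0 and w % num_splits == 0
```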

inferno46n2 commented 6 months ago

assert h % num_splits == 0 and w % num_splits == 0

You can print h, num_splits, and w to see what is wrong. It may be because your video resolution is not suitable: it's best if both dimensions are divisible by 64, for example 512x512, 640x640, etc.

It's 1920 x 1080 res

inferno46n2 commented 6 months ago

assert h % num_splits == 0 and w % num_splits == 0

You can print h, num_splits, and w to see what is wrong. It may be because your video resolution is not suitable: it's best if both dimensions are divisible by 64, for example 512x512, 640x640, etc.

h: 135, num_splits: 2, w: 240, which seems to be fine:

(diffusers) C:\Users\infer\OneDrive\Documents\Fresco\FRESCO>python video_blend.py ./output/alien/ --key keys --key_ind 0 11 23 33 49 60 72 82 93 106 --output ./output/alien/blend.mp4 --fps 24 --n_proc 4 -ps
Base directory: ./output/alien/
Key indices: [0, 11, 23, 33, 49, 60, 72, 82, 93, 106]
Key directory: keys
Number of sequences: 9
h: 135, num_splits: 2, w: 240
Process Process-3:
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 72, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 23, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
AssertionError
h: 135, num_splits: 2, w: 240
Process Process-4:
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 72, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 23, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
AssertionError
h: 135, num_splits: 2, w: 240
Process Process-1:
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 72, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 23, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
AssertionError
h: 135, num_splits: 2, w: 240
Process Process-2:
Traceback (most recent call last):
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\infer\.conda\envs\diffusers\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 115, in process_sequences
    process_one_sequence(i, video_sequence)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\video_blend.py", line 82, in process_one_sequence
    flow_calc.get_flow(i1, i2, flow_seq[j])
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "./src/ebsynth\flow\flow_utils.py", line 174, in get_flow
    results_dict = self.model(image1,
  File "C:\Users\infer\.conda\envs\diffusers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\gmflow.py", line 133, in forward
    feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 72, in feature_add_position
    feature0_splits = split_feature(feature0, num_splits=attn_splits)
  File "C:\Users\infer\OneDrive\Documents\Fresco\FRESCO\src\ebsynth\deps/gmflow\gmflow\utils.py", line 23, in split_feature
    assert h % num_splits == 0 and w % num_splits == 0
AssertionError
ebsynth: 2.9724221229553223
[ WARN:0@4.183] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0000.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.186] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0001.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.188] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.190] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.192] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.194] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.197] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.199] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.201] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.203] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.205] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_0\0010.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.207] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0011.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.209] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0001.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.211] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0002.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.212] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0003.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.214] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0004.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.216] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0005.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.217] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0006.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.219] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0007.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.221] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0008.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.223] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0009.jpg'): can't open/read file: check file path/integrity
[ WARN:0@4.224] global loadsave.cpp:248 cv::findDecoder imread_('./output/alien/out_11\0010.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "video_blend.py", line 312, in <module>
    main(args)
  File "video_blend.py", line 272, in main
    process_seq(video_sequence, i, blend_histogram, blend_gradient)
  File "video_blend.py", line 204, in process_seq
    dist1s.append(load_error(bin_a, img_shape))
  File "video_blend.py", line 165, in load_error
    with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: './output/alien/out_0\0001.bin'

(diffusers) C:\Users\infer\OneDrive\Documents\Fresco\FRESCO>

williamyang1991 commented 6 months ago

h: 135, num_splits: 2, w: 240 is not fine...

h % num_splits == 1, which will trigger the AssertionError for assert h % num_splits == 0 and w % num_splits == 0
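
To make the arithmetic concrete: a 1920x1080 input gives the flow model a 240x135 feature map in the trace above (1/8 of the frame size), and 135 % 2 == 1, so the assert fires. Below is a rough sanity check one could run before blending; the 1/8 factor and num_splits = 2 are read off the logs in this thread, and the divisible-by-64 recommendation comes from the comment above, so treat it as a sketch rather than part of the repo:

```python
def check_resolution(width: int, height: int, num_splits: int = 2, down: int = 8) -> bool:
    """Rough check of whether a frame size should pass gmflow's split_feature assert."""
    fw, fh = width // down, height // down  # approximate feature-map size (240x135 for 1920x1080)
    ok = fh % num_splits == 0 and fw % num_splits == 0
    print(f"h: {fh}, num_splits: {num_splits}, w: {fw} -> {'ok' if ok else 'will fail the assert'}")
    return ok

check_resolution(1920, 1080)  # fails: h = 135 is odd
check_resolution(1920, 1024)  # passes: both sides divisible by 64, as recommended above
```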

inferno46n2 commented 6 months ago

h: 135, num_splits: 2, w: 240 is not fine...

h % num_splits == 1, which will trigger the AssertionError for assert h % num_splits == 0 and w % num_splits == 0

Ah haha, I'm admittedly using Claude to troubleshoot, as I have no clue what that means 😂

I’ll change my res to 1920 x 1024. Both numbers are divisible by 64
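
In case it helps others, a rough sketch of batch-resizing already-extracted frames to a 64-divisible resolution before rerunning the pipeline (the paths and target size are examples taken from this thread, not part of the repo; note that going from 1920x1080 to 1920x1024 slightly changes the aspect ratio, so center-cropping is an alternative):

```python
import glob
import os

import cv2  # OpenCV is already a dependency of video_blend.py, per the warnings above

src_dir = "./videos/alien/video"            # example: original frames from this thread
dst_dir = "./videos/alien/video_1920x1024"  # resized copies go here
target_w, target_h = 1920, 1024             # both divisible by 64, as suggested

os.makedirs(dst_dir, exist_ok=True)
for path in sorted(glob.glob(os.path.join(src_dir, "*.png"))):
    img = cv2.imread(path)
    resized = cv2.resize(img, (target_w, target_h), interpolation=cv2.INTER_AREA)
    cv2.imwrite(os.path.join(dst_dir, os.path.basename(path)), resized)
```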

inferno46n2 commented 6 months ago

h: 135, num_splits: 2, w: 240 is not fine...

h % num_splits == 1, which will trigger the AssertionError for assert h % num_splits == 0 and w % num_splits == 0

1920 x 1024 worked but blue screened me (4090)

1280 x 704 is working, but it's been over an hour and it's still running at literally max GPU usage (I can't even open a File Explorer window, it's that stressed).

Why is it so slow? I can do this in the regular EbSynth app in less than 10 minutes, but I wanted to try your -ps option.

williamyang1991 commented 6 months ago

You should ask ebsynth's authors why the released ebsynth code is slower than the ebsynth software. I just use that code. I'm not responsible for optimizing the code to match the performance of the mature and black-box software.

kaitosea916 commented 6 months ago

I have a similar error.

The input video is the boxer from the example. The resolution is set to 256 due to my low-spec GPU.

Please let me know if you find the cause and how to resolve it.

python video_blend.py output/boxer-punching-towards-camera --key keys --key_ind 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30  --output output/boxer-punching-towards-camera/blend.mp4 --fps 10 --n_proc 4 -ps
/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
ebsynth: 35.95934724807739
[ WARN:0@37.351] global loadsave.cpp:248 findDecoder imread_('output/boxer-punching-towards-camera/out_0/0001.jpg'): can't open/read file: check file path/integrity
[ WARN:0@37.352] global loadsave.cpp:248 findDecoder imread_('output/boxer-punching-towards-camera/out_2/0001.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "video_blend.py", line 308, in <module>
    main(args)
  File "video_blend.py", line 268, in main
    process_seq(video_sequence, i, blend_histogram, blend_gradient)
  File "video_blend.py", line 200, in process_seq
    dist1s.append(load_error(bin_a, img_shape))
  File "video_blend.py", line 161, in load_error
    with open(bin_path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'output/boxer-punching-towards-camera/out_0/0001.bin'
Traceback (most recent call last):
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/queueing.py", line 388, in call_prediction
    output = await route_utils.call_process_api(
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/route_utils.py", line 219, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/blocks.py", line 1440, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/blocks.py", line 1341, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/components/video.py", line 281, in postprocess
    processed_files = (self._format_video(y), None)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/components/video.py", line 355, in _format_video
    video = self.make_temp_copy_if_needed(video)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/components/base.py", line 226, in make_temp_copy_if_needed
    temp_dir = self.hash_file(file_path)
  File "/root/anaconda3/envs/diffusers/lib/python3.8/site-packages/gradio/components/base.py", line 190, in hash_file
    with open(file_path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'output/boxer-punching-towards-camera/blend.mp4'

williamyang1991 commented 6 months ago

@kaitosea916 Your problem is different. Your ebsynth didn't work, so there is no 0001.bin. Please refer to https://github.com/williamyang1991/Rerender_A_Video#issues

kaitosea916 commented 6 months ago

@williamyang1991 Thank you for your quick response. As per the issues page, I gave execute permission to deps/ebsynth/bin/ebsynth and it worked. It took 7 minutes to run, and the output video was a bit inaccurate, probably because of the low-spec GPU... I look forward to your future work.
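
For anyone hitting the same FileNotFoundError on Linux, a one-off sketch of that permission fix (the binary path follows the comment above and may sit elsewhere in your checkout; `chmod +x` from a shell does the same thing):

```python
import os
import stat

ebsynth_bin = "deps/ebsynth/bin/ebsynth"  # adjust to where the binary lives in your checkout
mode = os.stat(ebsynth_bin).st_mode
os.chmod(ebsynth_bin, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)  # add execute bits
```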

duanjiding commented 6 months ago

I went into the EBsynth folder and enabled read and execute permissions, and I set the log option to true as per another closed comment, but it didn't seem to fix it... maybe I did it wrong?

woo, it works, thx