The file `inference.py` has a small bug. When I call `convert_video` as shown below:

```python
convert_video(
    model,                                            # The loaded model; can be on any device (cpu or cuda).
    input_source=input_folder,                        # A video file or an image-sequence directory.
    downsample_ratio=None,                            # [Optional] If None, make the downsampled max size 512 px.
    output_type='png_sequence',                       # Choose "video" or "png_sequence".
    output_composition=output_folder + '/com',        # File path if video; directory path if png sequence.
    output_alpha=output_folder + '/alpha',            # [Optional] Output the raw alpha prediction.
    output_foreground=output_folder + '/foreground',  # [Optional] Output the raw foreground prediction.
    output_video_mbps=4,                              # Output video Mbps. Not needed for png sequence.
    seq_chunk=1,                                      # Process n frames at once for better parallelism.
    num_workers=0,                                    # Only for image-sequence input. Reader threads.
    progress=True                                     # Print conversion progress.
)
```
it fails with the following error:

```
  File ".cache/torch/hub/PeterL1n_RobustVideoMatting_master/inference_utils.py", line 33, in __init__
    self.container = av.open(path, mode='w')
  File "av/container/core.pyx", line 364, in av.container.core.open
  File "av/container/core.pyx", line 146, in av.container.core.Container.__cinit__
ValueError: Could not determine output format
```
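The error makes sense once you notice that with `output_type='png_sequence'` the output paths are directories. A muxer like PyAV infers the container format from the filename extension, and a directory path such as `output/alpha` has none. Here is a toy stand-in (my own illustration with a hypothetical `guess_container_format` helper, not PyAV's actual logic) showing the failure mode:

```python
import os

def guess_container_format(path):
    """Toy stand-in (illustration only) for how a muxer picks a
    container format: it looks at the filename extension."""
    ext = os.path.splitext(path)[1].lstrip('.')
    if not ext:
        # A directory path like 'output/alpha' has no extension,
        # so there is nothing to infer the format from.
        raise ValueError('Could not determine output format')
    return ext

print(guess_container_format('output/com.mp4'))  # mp4
try:
    guess_container_format('output/alpha')  # png_sequence paths are directories
except ValueError as e:
    print(e)  # Could not determine output format
```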
I've traced it back to `inference.py`; the issue is on lines 104 and 106. In the `png_sequence` branch, the alpha and foreground writers are still constructed as `VideoWriter`s, even though their output paths are directories:

```python
else:
    if output_composition is not None:
        writer_com = ImageSequenceWriter(output_composition, 'png')
    if output_alpha is not None:
        writer_pha = VideoWriter(output_alpha, 'png')        # bug: should be an image-sequence writer
    if output_foreground is not None:
        writer_fgr = VideoWriter(output_foreground, 'png')   # bug: should be an image-sequence writer
```
It should be:

```python
else:
    if output_composition is not None:
        writer_com = ImageSequenceWriter(output_composition, 'png')
    if output_alpha is not None:
        writer_pha = ImageSequenceWriter(output_alpha, 'png')
    if output_foreground is not None:
        writer_fgr = ImageSequenceWriter(output_foreground, 'png')
```
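To see why the fix works, here is a minimal sketch (my own illustration, not RVM's actual `ImageSequenceWriter`) of what an image-sequence writer does: it treats the output path as a directory and writes each frame to a numbered file inside it, so no container format ever needs to be inferred.

```python
import os
import tempfile

class SimpleSequenceWriter:
    """Minimal sketch of an image-sequence writer (hypothetical class,
    not the RVM implementation). The output path is a directory; each
    frame becomes a numbered file inside it."""

    def __init__(self, path, extension='png'):
        self.path = path
        self.extension = extension
        self.counter = 0
        os.makedirs(path, exist_ok=True)  # a directory, never passed to av.open

    def write(self, frame_bytes):
        # Zero-padded names keep the frames sortable: 0000.png, 0001.png, ...
        name = os.path.join(self.path, f'{self.counter:04d}.{self.extension}')
        with open(name, 'wb') as f:
            f.write(frame_bytes)
        self.counter += 1

out_dir = os.path.join(tempfile.mkdtemp(), 'alpha')
writer = SimpleSequenceWriter(out_dir)
writer.write(b'fake-frame-0')
writer.write(b'fake-frame-1')
print(sorted(os.listdir(out_dir)))  # ['0000.png', '0001.png']
```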