Q-Future / Q-Align

[ICML 2024] [IQA, IAA, VQA] An all-in-one foundation model for visual scoring that can be efficiently fine-tuned on downstream datasets.
https://q-align.github.io

About the number of input video frames #36

Open elvindp opened 1 month ago

elvindp commented 1 month ago

Hi, when I use the example Python API to run inference on a 15-minute video, it consumes a large amount of GPU memory and raises torch.cuda.OutOfMemoryError. I found that the "load_video" function extracts 1 frame per second, which means 900 frames for a 15-minute video and a large memory cost. Is there a suggested number of input frames for long videos, for example 30 frames per video?

CarlCloudWang commented 1 month ago

Sample some frames per second:

```python
from decord import VideoReader
from PIL import Image


def load_video(video_file, skip=1):
    vr = VideoReader(video_file)

    # Get video frame rate
    fps = vr.get_avg_fps()

    # Calculate frame indices for 1 fps sampling
    frame_indices = [int(fps * i) for i in range(int(len(vr) / fps))]
    frames = vr.get_batch(frame_indices).asnumpy()

    if skip == 1 or skip is None:
        ls = [Image.fromarray(f) for f in frames]
    elif skip > 1:
        ls = [Image.fromarray(f) for f in frames][::skip]  # keep every skip-th frame
    else:  # skip < 0: keep only the first -skip frames
        ls = [Image.fromarray(f) for f in frames][:-skip]

    return ls
```
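For picking a concrete `skip` value, a minimal sketch of the frame-budget arithmetic may help. This is not part of the repo's API; `subsample_indices` and the 30-frame budget are hypothetical, assuming 1 fps extraction as in `load_video` above (a 15-minute video then yields 900 frames):

```python
# Sketch: choose an evenly spaced subset of frame indices so that a
# long video stays within a fixed frame budget (hypothetical helper,
# not part of Q-Align; budget chosen to fit GPU memory).

def subsample_indices(num_frames, target_frames):
    """Return at most target_frames evenly spaced indices in [0, num_frames)."""
    if num_frames <= target_frames:
        return list(range(num_frames))
    step = num_frames // target_frames
    return list(range(num_frames))[::step][:target_frames]

# A 15-minute video at 1 fps yields 900 frames; keep roughly 30 of them,
# i.e. the equivalent of calling load_video(..., skip=30).
indices = subsample_indices(900, 30)
print(len(indices))   # 30
print(indices[:3])    # [0, 30, 60]
```

With this budget the memory cost scales with `target_frames` rather than video length, which is the point of the `skip` argument in the reply above.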