ahmed-nady opened 4 years ago
The test script loads all images into memory upfront. You can modify the code, or write your own testing script, to load only the frames required for the current processing step. It could look like this:
```python
[...]
#### read LQ and GT images
## commented old code that loads all frames:
# imgs_LQ = data_util.read_img_seq(subfolder)
[...]
# process each image
for img_idx, img_path in enumerate(img_path_l):
    img_name = osp.splitext(osp.basename(img_path))[0]
    select_idx = data_util.index_generation(img_idx, max_idx, N_in, padding=padding)
    ## new code:
    selected_imgs = []
    for i in range(N_in):
        idx = select_idx[i]
        selected_imgs.append(img_path_l[idx])
    imgs_in = data_util.read_img_seq(selected_imgs).unsqueeze(0).to(device)
    ## commented old code:
    # imgs_in = imgs_LQ.index_select(0, torch.LongTensor(select_idx)).unsqueeze(0).to(device)
[...]
```
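For reference, `data_util.index_generation(crt_i, max_n, N, padding)` returns the indices of the `N` neighboring frames centered on the current frame. A minimal standalone sketch of that behavior, inferred from how it is called above (supporting 'replicate' and 'reflection' padding; the library's actual implementation may differ):

```python
def index_generation(crt_i, max_n, N, padding='reflection'):
    """Return indices of the N frames centered on crt_i, kept inside [0, max_n - 1]."""
    n_pad = N // 2
    indices = []
    for i in range(crt_i - n_pad, crt_i + n_pad + 1):
        if padding == 'replicate':
            # clamp out-of-range neighbors to the first/last frame
            i = min(max(i, 0), max_n - 1)
        else:
            # reflect out-of-range neighbors back into the sequence
            if i < 0:
                i = -i
            elif i > max_n - 1:
                i = 2 * (max_n - 1) - i
        indices.append(i)
    return indices

print(index_generation(0, 5, 5))  # [2, 1, 0, 1, 2]
print(index_generation(2, 5, 5))  # [0, 1, 2, 3, 4]
```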
Thanks @JensDA. I wrote my own script to accomplish it as follows:
```python
prep_frame_lst = deque(maxlen=5)  # sliding window holding the last 5 prepared frames
while True:
    ret, frame = cap.read()
    if not ret or currentFrame > (length - 1):
        print('Done processing')
        break
    prep_frame = prepare_frame(frame)
    currentFrame += 1
    if currentFrame < 5:
        # window not full yet: just collect frames
        prep_frame_lst.append(prep_frame)
    else:
        prep_frame_lst.append(prep_frame)
        # stack to Torch tensor
        imgs_LQ = get_Torch_tensor_from_prep_frame_lst(prep_frame_lst)
        # process each image
        count += 1
        img_name = "frame_%05i" % count
        select_idx = data_util.index_generation(3, 5, N_in, padding=padding)
        imgs_in = imgs_LQ.index_select(0, torch.LongTensor(select_idx)).unsqueeze(0).to(device)
        output = util.single_forward(model, imgs_in)
        output = util.tensor2img(output.squeeze(0))
        if save_imgs:
            cv2.imwrite(osp.join(save_subfolder, '{}.png'.format(img_name)), output)
```
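The `get_Torch_tensor_from_prep_frame_lst` helper is not shown above; a minimal sketch of what it could look like, assuming `prepare_frame` returns an HxWx3 float32 array:

```python
from collections import deque

import numpy as np
import torch

def get_Torch_tensor_from_prep_frame_lst(prep_frame_lst):
    # stack the deque of HxWx3 frames into one (T, H, W, C) array,
    # then reorder to the (T, C, H, W) layout the model expects
    imgs = np.stack(prep_frame_lst, axis=0)
    imgs = torch.from_numpy(np.ascontiguousarray(imgs)).float()
    return imgs.permute(0, 3, 1, 2)
```

Because the deque never holds more than 5 frames, memory use stays bounded no matter how long the video is.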
I tested your blur_comp on my image sequence (15 images at 1600 × 1200 resolution), and the results look pretty good. However, when I test on the whole video (236 frames), I get this error on Colab: "tcmalloc: large alloc 5414404096 bytes". Can you help me resolve this issue?
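For what it's worth, that allocation size is consistent with the test script loading every frame into memory at once. Assuming each frame is stored as a float32 RGB array, a back-of-envelope estimate for 236 frames at 1600 × 1200 lands in the same ballpark:

```python
# back-of-envelope memory estimate (assumption: float32 RGB frames)
frames, h, w, c = 236, 1200, 1600, 3
bytes_per_frame = h * w * c * 4   # 4 bytes per float32 value
total_bytes = frames * bytes_per_frame
print(total_bytes)                # 5437440000, ~5.4 GB
```

That is the same order as the 5414404096-byte alloc, which suggests the sliding-window approach above (keeping only 5 frames in memory at a time) would sidestep the issue.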