chenkang455 opened 1 month ago
Hi @LonsonZheng, thanks for your nice work. Could you kindly tell me how to write the code for evaluating RSIR on real-world data (loaded directly from .dat)? The simulated data is loaded with

```python
seq, label, length = load_spike_numpy(cfg.simulated_dir + "spike-video{}-00000-light{}.npz".format(tag, ls))
```

but where can I get the label for the real-world data?
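For reference, I currently plan to decode the .dat stream myself, roughly along these lines. This is just a sketch: the function name, the 250×400 resolution, the bit packing, and the vertical flip are all my assumptions, so please correct me if RSIR expects a different reader.

```python
import numpy as np
import torch

def load_vidar_dat(path, height=250, width=400):
    """Hypothetical reader for a raw spike-camera .dat stream (layout assumed, not confirmed)."""
    raw = np.fromfile(path, dtype=np.uint8)
    frame_bytes = height * width // 8                 # 8 binary spikes packed per byte
    num_frames = raw.size // frame_bytes
    raw = raw[:num_frames * frame_bytes].reshape(num_frames, frame_bytes)
    # Unpack each byte into 0/1 spikes; the bit order may differ per camera firmware.
    bits = np.unpackbits(raw, axis=1, bitorder="little")
    spikes = bits.reshape(num_frames, height, width)
    return np.flip(spikes, axis=1).copy()             # readout is often stored bottom-up

# "xxx.dat" is only a placeholder path; reshape to (1, T, H, W) for the reconstruction loop.
spike = torch.from_numpy(load_vidar_dat("xxx.dat")).float().unsqueeze(0).cuda()
```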
We don't have labels for the real-world data; the tests on real scenarios in the paper are computed with no-reference image quality metrics.
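For example, reconstructions can be scored without ground truth roughly as below. This is only an illustration: the pyiqa package and the NIQE metric here are stand-ins, not necessarily the exact metrics used in the paper.

```python
import torch
import pyiqa  # third-party IQA-PyTorch package, used here only for illustration

niqe = pyiqa.create_metric('niqe', device='cuda')   # no-reference metric, lower is better

# recon_img: a (1, 1, H, W) reconstruction in [0, 1]; a random placeholder is used here.
recon_img = torch.rand(1, 1, 250, 400, device='cuda')
score = niqe(recon_img.repeat(1, 3, 1, 1))          # replicate channels in case RGB input is expected
print(score.item())
```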
@LonsonZheng, thanks for your response. Btw, I wonder whether my script is right or not:
```python
Q, Nd, Nl = cal_para()                     # parameters returned by cal_para(), moved to the GPU below
q = torch.from_numpy(Q).cuda()
nd = torch.from_numpy(Nd).cuda()
nl = torch.from_numpy(Nl).cuda()

for i in range(10):
    # Average every 16 consecutive binary spike frames into one noisy intensity image.
    noisy_img = torch.ones([1, 1, spike.shape[2], spike.shape[3]], dtype=torch.float32).cuda()
    noisy_img[0, 0, :, :] = spike[:, 16 * i:16 * (i + 1)].mean(dim=1).float()
    # The first iteration has no previous fusion result; afterwards it is fed back as a second channel.
    if i == 0:
        input = noisy_img
    else:
        input = torch.cat([noisy_img, fusion_out], dim=1)
    fpn_denoise, img_true, fusion_out, denoise_out, refine_out, ft_denoise_out_d0, fgt_d0 = \
        spkrecon_net(input, noisy_img, q, nd, nl)
    recon_img = fusion_out
```
`recon_img` is reconstructed after 10 iterations, i.e., 160 spike frames correspond to one output.
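After the loop I just clamp and save the reconstruction for visual inspection, roughly like this (the output path is only illustrative):

```python
import torchvision

# Clamp to [0, 1] and write the final fused reconstruction to disk.
torchvision.utils.save_image(recon_img.clamp(0, 1), "recon_real_world.png")
```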
Yes, this snippet of code appears to be accurate.
Thanks a lot for your feedback!