Kunhao-Liu / StyleRF

[CVPR 2023] StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields
https://kunhao-liu.github.io/StyleRF/

How to calculate LPIPS and RMSE? #7

Closed · PaiDii closed this issue 1 year ago

PaiDii commented 1 year ago

Should I warp the first-frame image onto the second frame to measure LPIPS and RMSE, and then use the script you provided, compute_metrics.py? What is the optical flow you mentioned used for?

Kunhao-Liu commented 1 year ago

Hi, for the multi-view consistency evaluation, we use RAFT to compute the optical flow. Then we use softmax-splatting to warp the image and obtain the mask. Finally, we compute the RMSE and LPIPS scores between the warped image and the target image. Note that compute_metrics.py is used to evaluate the quality of novel view synthesis, not multi-view consistency.
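
For readers who want a concrete starting point, here is a minimal sketch of the flow step, using torchvision's RAFT port as a stand-in for the original RAFT repo (the function and variable names are illustrative, not the authors' code):

```python
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

# torchvision's pretrained RAFT, standing in for the original RAFT repo
weights = Raft_Large_Weights.DEFAULT
raft = raft_large(weights=weights).eval()

@torch.no_grad()
def compute_flow(img1, img2):
    """Forward flow from img1 to img2. Both are (N, 3, H, W) batches already
    normalized with weights.transforms(); H and W must be divisible by 8.
    Returns an (N, 2, H, W) flow tensor."""
    flow_predictions = raft(img1, img2)  # list of iteratively refined flows
    return flow_predictions[-1]          # keep the final refinement
```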

BPAWD commented 1 year ago

Hi, I tried RAFT and softmax-splatting to warp the image and successfully got the warped result, but I don't know how to get the valid mask: the softmax-splatting example code does not return one. Could you provide more details on how to get the mask of the valid warped region?

Kunhao-Liu commented 1 year ago

Hi, we treat the pixels that are totally black (with color [0, 0, 0]) as the mask, which means that no pixels from the previous frame were warped to that pixel.
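
In code, that test is a one-liner. A sketch, assuming the warped frame is a float tensor in which unfilled pixels stay exactly zero:

```python
import torch

def valid_mask(warped):
    """warped: (N, 3, H, W) forward-warped frame. Pixels that received no
    splatted contribution remain exactly [0, 0, 0] (black), so an
    'any channel nonzero' test recovers the valid region."""
    return (warped.abs().sum(dim=1, keepdim=True) > 0).float()  # (N, 1, H, W)
```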

BPAWD commented 1 year ago

Got it. That makes sense in most cases, except for black objects/regions, which may be wrongly masked out. Will that affect the results?

Kunhao-Liu commented 1 year ago

Hi, in practice, this might not have a significant impact on the results since truly black pixels are relatively rare in real-world scenes.

zAuk000 commented 11 months ago

> Hi, I tried RAFT and softmax-splatting to warp the image and successfully got the warped result, but I don't know how to get the valid mask: the softmax-splatting example code does not return one. Could you provide more details on how to get the mask of the valid warped region?

Hi, could you provide the code for the metrics after calculating the flow? I use resample as the warping function, but I don't see how to use the mask to get the final result. Thanks a lot!

zAuk000 commented 11 months ago

> Hi, in practice, this might not have a significant impact on the results since truly black pixels are relatively rare in real-world scenes.

Hi, could I request the source code for calculating the metrics? I use the method from https://github.com/linfengWen98/CAP-VSTNet/issues/11, but I don't know how to use the mask to get the final results. Thanks a lot!
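
For readers stuck at this same step, here is one plausible way to apply the mask, sketched with the lpips package's spatial mode. It is an illustration under the stated assumptions, not the authors' evaluation code:

```python
import torch
import lpips

# spatial=True makes LPIPS return a per-pixel distance map instead of a scalar
lpips_fn = lpips.LPIPS(net='alex', spatial=True)

@torch.no_grad()
def masked_rmse_lpips(warped, target, mask):
    """warped, target: (N, 3, H, W) in [-1, 1]; mask: (N, 1, H, W) in {0, 1}.
    Both metrics are averaged over valid (mask == 1) pixels only."""
    # RMSE over the valid region (3 channels per pixel)
    se = ((warped - target) ** 2) * mask
    rmse = torch.sqrt(se.sum() / (3 * mask.sum() + 1e-8))
    # LPIPS: zero out invalid regions, then average the distance map over the mask
    dmap = lpips_fn(warped * mask, target * mask)  # (N, 1, H, W)
    lp = (dmap * mask).sum() / (mask.sum() + 1e-8)
    return rmse.item(), lp.item()
```

Multiplying both images by the mask before LPIPS slightly perturbs features near mask borders, so the distance map is additionally averaged only over the valid region.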

jly0810 commented 10 months ago

> Hi, we treat the pixels that are totally black (with color [0, 0, 0]) as the mask, which means that no pixels from the previous frame were warped to that pixel.

Hi, I'd like to know what your criterion was for choosing the two adjacent frames in the short-term and long-term consistency tests. When the displacement is too large, the optical-flow quality degrades significantly.

HeChengy commented 9 months ago

> Hi, I tried RAFT and softmax-splatting to warp the image and successfully got the warped result, but I don't know how to get the valid mask: the softmax-splatting example code does not return one. Could you provide more details on how to get the mask of the valid warped region?

Hi, could you share the complete code for "using RAFT and softmax-splatting to warp the image and successfully getting the warped result"? Thanks.
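
Since I can't vouch for the exact softmax-splatting API across versions, here is a self-contained stand-in: forward warping by bilinear summation splatting, normalized to an average. This is a simplification of softmax splatting (no depth-based weighting), and all names are illustrative:

```python
import torch

def forward_warp(img, flow):
    """Forward-warp img (N, C, H, W) along flow (N, 2, H, W), where
    flow[:, 0] is dx and flow[:, 1] is dy. Each source pixel is splatted
    bilinearly onto its target location; overlapping contributions are
    averaged and untouched pixels stay zero (black). Returns (warped, mask)."""
    n, c, h, w = img.shape
    gy, gx = torch.meshgrid(torch.arange(h, device=img.device),
                            torch.arange(w, device=img.device), indexing='ij')
    x = gx[None].float() + flow[:, 0]   # target x for every source pixel
    y = gy[None].float() + flow[:, 1]   # target y
    x0, y0 = x.floor().long(), y.floor().long()
    out = torch.zeros_like(img)
    wsum = torch.zeros(n, 1, h, w, device=img.device)
    for dx in (0, 1):                   # splat onto the 4 neighboring pixels
        for dy in (0, 1):
            xi, yi = x0 + dx, y0 + dy
            wgt = (1 - (x - xi).abs()) * (1 - (y - yi).abs())  # bilinear weight
            wgt = wgt * ((xi >= 0) & (xi < w) & (yi >= 0) & (yi < h))
            idx = (yi.clamp(0, h - 1) * w + xi.clamp(0, w - 1)).view(n, 1, -1)
            out.view(n, c, -1).scatter_add_(
                2, idx.expand(-1, c, -1),
                img.view(n, c, -1) * wgt.view(n, 1, -1))
            wsum.view(n, 1, -1).scatter_add_(2, idx, wgt.view(n, 1, -1))
    mask = (wsum > 0).float()
    return out / wsum.clamp(min=1e-8), mask
```

The mask returned here is equivalent to the "totally black pixels" test discussed above, since pixels that receive no contribution stay exactly zero.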

HeChengy commented 9 months ago

> Hi, in practice, this might not have a significant impact on the results since truly black pixels are relatively rare in real-world scenes.

> Hi, could I request the source code for calculating the metrics? I use the method from linfengWen98/CAP-VSTNet#11, but I don't know how to use the mask to get the final results. Thanks a lot!

Hi, could you share your complete code for computing the metrics without applying the mask? Thanks.

zAuk000 commented 9 months ago

> Hi, in practice, this might not have a significant impact on the results since truly black pixels are relatively rare in real-world scenes.

> Hi, could I request the source code for calculating the metrics? I use the method from linfengWen98/CAP-VSTNet#11, but I don't know how to use the mask to get the final results. Thanks a lot!

> Hi, could you share your complete code for computing the metrics without applying the mask? Thanks.

Hi, I just used the code provided in the link above. There doesn't seem to be a unified quantitative standard in this field yet. From my own observation, the temporal loss only indirectly reflects video coherence: unless the losses of two videos differ by a lot, no obvious difference is visible to the naked eye.