Caoang327 / HexPlane

Official code for CVPR 2023 Paper, HexPlane: A Fast Representation for Dynamic Scenes
MIT License
242 stars 24 forks

about the results and code of the iPhone dataset #7

Closed RedemptYourself closed 9 months ago

RedemptYourself commented 12 months ago

Can you give the results and code for the iPhone dataset? It's very important for experiments on real monocular data, thanks!

RedemptYourself commented 12 months ago

I tried implementing a dataloader for the HyperNeRF dataset myself, but the results seem to differ substantially from the iPhone dataset results in the paper, even though both are monocular datasets. The cause may be the parameter and loss settings. Could you share the concrete experimental results and parameter settings for the iPhone dataset? Many thanks!

Caoang327 commented 12 months ago

Hi. Working on monocular videos is an important direction, and HexPlane doesn't work very well in monocular settings, as it is an extremely ill-posed problem. The iPhone dataset has depth supervision, while HyperNeRF doesn't provide depth. I tried HexPlane on the HyperNeRF dataset and it doesn't work very well; I'm not sure whether that is because of the dataloader or the monocular setting. It would be really exciting to extend HexPlane to monocular settings. Could you message me your email so I can share my dataloader (potentially wrong) with you?

RedemptYourself commented 12 months ago

Grateful for your sharing! My email is 809207013@qq.com. Thanks a lot!

a1600012888 commented 12 months ago

Hi Cao,

Thanks for sharing.

I am very curious about HexPlane's results in the monocular setting.

In Figure 7 of your main paper, you showed results for two video sequences (mochi and paper-windmill) from the iPhone dataset, but it seems that the config and code for this dataset are not provided. I have two questions about it.

  1. Did you use depth supervision or mask supervision for it?
  2. Is the model configuration quite different for the iPhone dataset?

Caoang327 commented 11 months ago

Hi Tianyuan:

  1. Yes, I used depth supervision, the same as in the DyCheck paper.
  2. It is not that different, except for the depth supervision. The major change is that I use a relatively high TV_t_s_ratio, around 100-500, which results in a very strong TV loss along the time axis (see the sketch below).
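
For concreteness, here is a minimal PyTorch sketch of the two ingredients above: an L1 depth loss on rays with valid ground-truth depth (in the spirit of the DyCheck protocol) and a total-variation penalty whose time-axis term is upweighted by a TV_t_s_ratio-style factor. The names (`depth_loss`, `tv_loss_time`, `time_plane`) and the exact weighting scheme are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def depth_loss(pred_depth, gt_depth, valid_mask):
    """L1 depth supervision on rays that have ground-truth depth
    (hypothetical helper; DyCheck supervises with the iPhone's metric depth)."""
    return F.l1_loss(pred_depth[valid_mask], gt_depth[valid_mask])

def tv_loss_time(time_plane, tv_t_s_ratio=200.0):
    """Total-variation penalty on a space-time feature plane of shape
    (C, T, S), with the time-axis term upweighted by tv_t_s_ratio
    (values around 100-500 per the comment above)."""
    tv_t = (time_plane[:, 1:, :] - time_plane[:, :-1, :]).pow(2).mean()  # adjacent time slices
    tv_s = (time_plane[:, :, 1:] - time_plane[:, :, :-1]).pow(2).mean()  # adjacent spatial samples
    return tv_t_s_ratio * tv_t + tv_s
```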

The reason I don't put the monocular-video code on GitHub is that HexPlane currently works well on some scenes but poorly on others. In general, HexPlane doesn't have a deformation field, so its results may not be that great for monocular videos. But I also found duplicate/diverging objects in the test views, which is a bit strange, so I suspected there might be some problems in my camera/depth code and planned to revisit it when I have time. Since I am not sure the code is correct, I haven't released it.

Sorry for the inconvenience.

a1600012888 commented 11 months ago

Thanks Ang Cao.

RedemptYourself commented 11 months ago

Hi Cao,

Thanks for sharing.

I have tried adding a deformation net to the vanilla TensoRF, but the result can't be optimized well; it can't even learn the dynamics. I wonder whether you also tried a similar setup. I suspect it's because the vector-matrix products in TensoRF itself, together with the coordinate mapping of the deformation, make it hard for the deformation to map to the correct TensoRF indices, and therefore hard to optimize. Also, by the duplicate/diverging objects you mentioned, do you mean that, due to the experimental setup of HyperNeRF or DyCheck, the renderings are only correct in the train views while geometric errors appear in the test views?

AngeLouCN commented 10 months ago

@Caoang327 @RedemptYourself Hi, thank you for your excellent work. Could you also share the iPhone dataloader with me? My email is angelou@gwmail.gwu.edu. Thank you very much.

Caoang327 commented 10 months ago

> Hi Cao,
>
> Thanks for sharing.
>
> I have tried adding a deformation net to the vanilla TensoRF, but the result can't be optimized well; it can't even learn the dynamics. I wonder whether you also tried a similar setup. I suspect it's because the vector-matrix products in TensoRF itself, together with the coordinate mapping of the deformation, make it hard for the deformation to map to the correct TensoRF indices, and therefore hard to optimize. Also, by the duplicate/diverging objects you mentioned, do you mean that, due to the experimental setup of HyperNeRF or DyCheck, the renderings are only correct in the train views while geometric errors appear in the test views?

If the deformation field is a neural network and the canonical space is an explicit representation, it should be fine. You can refer to https://github.com/hustvl/TiNeuVox; a minimal sketch of that pattern follows.
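
To make the suggested pattern concrete, here is a minimal toy sketch in PyTorch of a neural deformation field warping (x, t) into a canonical frame whose features live in an explicit voxel grid, in the spirit of TiNeuVox. The class name, layer sizes, and grid resolution are illustrative assumptions, not TiNeuVox's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableExplicitField(nn.Module):
    """Toy model: an MLP predicts per-point offsets (x, t) -> dx, and
    features are fetched from an explicit canonical voxel grid."""

    def __init__(self, grid_res=64, feat_dim=8, hidden=64):
        super().__init__()
        # Explicit canonical representation: a dense learnable feature volume.
        self.grid = nn.Parameter(0.01 * torch.randn(1, feat_dim, grid_res, grid_res, grid_res))
        # Neural deformation field: (x, y, z, t) -> (dx, dy, dz).
        self.deform = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz, t):
        # xyz: (N, 3) in [-1, 1]; t: (N, 1).
        dx = self.deform(torch.cat([xyz, t], dim=-1))
        xyz_canonical = (xyz + dx).clamp(-1.0, 1.0)
        # Trilinear lookup in the explicit grid; grid_sample expects
        # coordinates shaped (1, D_out, H_out, W_out, 3) for a 5-D input.
        coords = xyz_canonical.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(self.grid, coords, align_corners=True)
        return feats.view(self.grid.shape[1], -1).t()  # (N, feat_dim)
```

Because the canonical grid is explicit, gradients reach it directly through the trilinear interpolation, which may be easier to optimize than deforming into a factorized vector-matrix representation like TensoRF's.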

For the iPhone dataset, we get correct results on the training views but wrong results on the test set. I don't know why.

Caoang327 commented 10 months ago

> @Caoang327 @RedemptYourself Hi, thank you for your excellent work. Could you also share the iPhone dataloader with me? My email is angelou@gwmail.gwu.edu. Thank you very much.

Sent.

yavon818 commented 10 months ago

@Caoang327 @RedemptYourself Hi, thank you for your excellent work. I am really curious about the results on the iPhone dataset. Could you also share the iPhone dataloader with me? My email is zhangyawen818@gmail.com. Thanks a lot.

ImJongminPark commented 9 months ago

Hello, thank you very much for your great effort! I am also very curious about the results on the iPhone dataset. Could you also share the iPhone dataloader with me? My email is jm.park@kaist.ac.kr. Thank you.

yavon818 commented 9 months ago

Sorry, the author has not sent it to me yet, lol.

Caoang327 commented 9 months ago

Hi:

The code is here: https://drive.google.com/file/d/1hkSVyy05f1pP4sDJSjqNFUobpkSVGT_g/view?usp=sharing. Sorry for the late response (I haven't checked GitHub and email for several weeks).