Closed: Furenchampion closed this issue 3 years ago
Hi @Furenchampion, yes, you can modify `data/iphone.py` to have it read from your sequence.
As some quick instructions: make sure that `get_image()` reads your images properly, and that the raw image sizes (`self.raw_H`, `self.raw_W`) and the focal length (`self.focal`) are set according to your iPhone's specs. You may need to look up how to compute the corresponding focal length; the current values in `data/iphone.py` are for the iPhone 12.
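As a rough sketch of one common way to get a pixel focal length: a photo's EXIF data usually reports a 35mm-equivalent focal length, which can be converted using the 36mm width of a full-frame sensor. The 26mm value below is an assumption (typical of recent iPhone wide cameras), not a value from the repo — read the actual number from your own photos' EXIF.

```python
# Sketch: convert a 35mm-equivalent focal length (from EXIF) to pixels.
# The 26mm and 1080px values below are illustrative assumptions.

def focal_from_35mm_equiv(focal_35mm, image_width_px):
    """A 35mm full-frame sensor is 36mm wide, so the horizontal field of
    view matches when focal_px / image_width_px = focal_35mm / 36."""
    return focal_35mm / 36.0 * image_width_px

# e.g. a 26mm-equivalent lens at 1080px-wide portrait resolution
focal_px = focal_from_35mm_equiv(26.0, 1080)  # = 780.0
```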
You won't need camera pose information, so the `pose` variable is just set to a zero vector. Hope this helps!
I've added the above instructions to the README. Closing for now, please feel free to reopen if you have further questions!
Hello Chenhsuan! Thanks for sharing your code! I have a problem running your experiments on my own dataset, which was shot on an iPhone. I watched your video introducing BARF on YouTube; at the end, you show results on your own sequences, such as a living room and a kitchen. Could you tell me how I can apply your approach to my dataset, so that I can try novel view synthesis by just providing an image folder with no pre-computed camera poses?