Thank you!
a) 4-12 hours, depending on scene size.
b) The network itself is very small. Most of the memory is consumed by the point cloud and its texture, so it again depends on the scene size.
c) Novel views can be synthesized on the fly. Check out the .gif of the adop_view in the README!
I also have a question about the output:
Is the result only an .mp4 video, or is it also possible to export scenes as some kind of 3D model, for further export to e.g. 3ds Max or Blender?
The underlying 3D model is a point cloud. You could export that but there would be no real benefit because other software is not able to interpret the neural color of each point.
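To illustrate what such an export would preserve (and lose), here is a rough sketch; the arrays, the descriptor size, and the file name are placeholders and not part of ADOP's actual tooling. Only the xyz positions end up in a format other software understands, while the per-point neural descriptors have no meaning outside the trained renderer.

```python
import numpy as np

# Hypothetical data: N point positions and their D-dimensional neural descriptors.
points = np.random.rand(1000, 3).astype(np.float32)       # xyz positions
descriptors = np.random.rand(1000, 4).astype(np.float32)  # learned "colors" (D=4 here)

def export_positions_to_ply(points, path):
    """Write only the xyz positions to an ASCII PLY file.

    MeshLab, Blender, etc. can open this, but the neural descriptors are
    dropped: they are only interpretable by the trained network.
    """
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

export_positions_to_ply(points, "exported_points.ply")
```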
> The underlying 3D model is a point cloud. You could export that but there would be no real benefit because other software is not able to interpret the neural color of each point.
Could you not convert the point cloud to a mesh with something like MeshLab? Or would the detail loss be too much? (Also, even though I need to install Linux, I am excited to try this out. It looks very well made and has lots of potential! Keep up the amazing work!)
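For reference, the usual point-cloud-to-mesh route hinted at here is Poisson surface reconstruction. Below is a rough sketch using Open3D rather than the MeshLab GUI; the file paths and parameter values are placeholders, and this only reconstructs geometry, it cannot carry over the learned appearance.

```python
import numpy as np
import open3d as o3d

# Load an exported point cloud (placeholder path).
pcd = o3d.io.read_point_cloud("exported_points.ply")

# Poisson reconstruction needs per-point normals; the radius depends on scene scale.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Poisson surface reconstruction: higher depth = more detail, more memory.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Trim poorly supported surface (low point density) to reduce blobby artifacts.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

o3d.io.write_triangle_mesh("reconstructed_mesh.ply", mesh)
```

MeshLab exposes the same algorithm through its Screened Poisson surface reconstruction filter, so the detail loss would be comparable either way.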
Hi Darius, very impressive work! I'd like to better understand where this fits into the overall landscape and therefore have some questions:
a) The way I understand it, ADOP needs to be trained individually per dataset, correct? If so, how long does such training take, roughly, as a function of the number of input points or images?
b) Once trained, how large (in GB) will the resulting trained model, descriptors, etc. be?
c) If novel views are desired, does each novel view have to be pre-specified at training time? Or can novel views be synthesized on the fly based on user input/navigation and the trained model?
Again, great work and super exciting stuff! Kind regards