Open · JasonLSC opened this issue 1 year ago
Hi!
I have tested several scenarios using only sparse views (<12 cameras), but Tensor4D cannot achieve satisfactory results when reconstructing the foreground and background simultaneously. Currently, it still seems to require dense viewpoints for background reconstruction.
In addition, you can use RobustVideoMatting or BackgroundMatting to segment moving objects/humans; both are fast and easy to run.
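For reference, here is a minimal sketch of running RobustVideoMatting through torch.hub to export per-frame alpha mattes for each camera. The converter arguments (`output_type`, `output_alpha`, `seq_chunk`) are from memory of the RVM repo, and the file paths are hypothetical, so please double-check against its README:

```python
import torch

# Load the RobustVideoMatting model and its bundled video converter from torch.hub.
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").cuda()
convert_video = torch.hub.load("PeterL1n/RobustVideoMatting", "converter")

# Export per-frame alpha mattes (soft foreground masks) for one camera's video.
# Repeat for every view; the alpha PNGs can then be thresholded into binary masks.
convert_video(
    model,
    input_source="cam00.mp4",     # hypothetical path to one multi-view video
    output_type="png_sequence",   # write individual frames instead of a video
    output_alpha="masks/cam00",   # hypothetical output directory for alpha mattes
    seq_chunk=12,                 # frames processed per batch (tune for GPU memory)
)
```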
Hi! I tried using RobustVideoMatting to segment the moving objects in my multi-view videos, but even with the resulting masks I could not obtain a high-quality reconstruction in my scenario. My results are shown below:
Any idea what could cause this? Should I try moving the masked object to the center of the aabb? Do you have any idea how to determine the location of the masked object in 3D space using only multi-view silhouettes? In other words, do you have any suggestions on how to obtain the correct scale_mat from multi-view silhouettes and estimated poses?
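To make the question concrete, here is the kind of heuristic I have in mind (my own guess, not anything from the Tensor4D code): triangulate the rays through the per-view mask centroids to estimate the object center, take a rough radius from the silhouette extents, and build a NeuS/IDR-style scale_mat (unit sphere to world) from that. The extrinsics convention (world-to-camera, x_cam = R x_world + t) is an assumption on my part:

```python
import numpy as np

def pixel_to_ray(K, R, t, uv):
    """Back-project a pixel to a world-space ray (origin, direction).
    Assumes world-to-camera extrinsics: x_cam = R @ x_world + t."""
    cam_center = -R.T @ t
    d_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    d_world = R.T @ d_cam
    return cam_center, d_world / np.linalg.norm(d_world)

def closest_point_to_rays(origins, dirs):
    """Least-squares point minimizing the summed squared distance to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

def build_scale_mat(masks, Ks, Rs, ts, radius_scale=1.1):
    """Estimate a NeuS-style scale_mat (unit sphere -> world) from
    multi-view binary silhouettes and calibrated cameras."""
    origins, dirs, radii = [], [], []
    for m, K, R, t in zip(masks, Ks, Rs, ts):
        ys, xs = np.nonzero(m)                      # silhouette pixels
        centroid = np.array([xs.mean(), ys.mean()])
        o, d = pixel_to_ray(K, R, t, centroid)
        origins.append(o)
        dirs.append(d)
    center = closest_point_to_rays(origins, dirs)
    # Crude radius: half the silhouette diagonal in pixels, scaled by depth / focal length.
    for m, K, R, t in zip(masks, Ks, Rs, ts):
        ys, xs = np.nonzero(m)
        extent_px = 0.5 * np.hypot(xs.max() - xs.min(), ys.max() - ys.min())
        depth = np.linalg.norm(R @ center + t)      # distance from this camera to the center
        radii.append(extent_px * depth / K[0, 0])
    r = radius_scale * max(radii)
    scale_mat = np.diag([r, r, r, 1.0]).astype(np.float32)
    scale_mat[:3, 3] = center
    return scale_mat
```

Does something along these lines sound reasonable, or is there a recommended way to compute it?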
Hi! Thank you for your great work.
Could Tensor4D be extended to handle scenes without masks? The multi-view captures in such scenes may contain moving foreground objects/humans and a static background.