RyanPham19092002 opened 2 weeks ago
It would take a while to give the full details. In summary, we use this function to determine whether each sampled point is in the overlap region, then render that back to the image plane by summing over all the points along each ray to decide whether the pixel is in an overlap region or not.
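The two steps described above (flag overlap voxels, then reduce along rays to a pixel mask) can be sketched roughly as follows. This is only an illustration, not the repository's actual code: the shapes, the "at least two cameras" overlap criterion, and the random sample indices are all assumptions standing in for the real ray sampler and camera geometry.

```python
import numpy as np

# Hypothetical sizes, assumed for illustration: S cameras, a Z*Y*X voxel grid,
# and H*W rays with P samples each.
S, Z, Y, X = 6, 4, 4, 4
H, W, P = 8, 8, 16
rng = np.random.default_rng(0)

# Per-camera visibility masks over the voxel grid (1 = voxel seen by camera s).
mask_mems = rng.integers(0, 2, size=(S, Z, Y, X))

# Assumed criterion: a voxel is in the overlap region if >= 2 cameras see it.
overlap_vox = (mask_mems.sum(axis=0) >= 2).astype(np.float32)  # [Z, Y, X]

# Integer voxel indices of the P samples along each of the H*W rays.
# In practice these come from unprojecting pixel rays; random placeholders here.
idx_z = rng.integers(0, Z, size=(H, W, P))
idx_y = rng.integers(0, Y, size=(H, W, P))
idx_x = rng.integers(0, X, size=(H, W, P))

# Gather the overlap flag at every sample, then sum along the ray:
# a pixel is in the overlap region if any of its samples hits an overlap voxel.
per_sample = overlap_vox[idx_z, idx_y, idx_x]  # [H, W, P]
pixel_overlap = per_sample.sum(axis=-1) > 0    # [H, W] boolean pixel mask
```

The ray-wise sum here plays the role of the "render it back to the image plane" step: any positive accumulation along a ray marks that pixel as overlapping.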
Hi, thank you for your reply.
I have four additional questions about this function:
Why do you use the feature from the encoder to determine whether the sampled points are in the overlap region (and then render it back to determine whether the pixel is in the overlap region), instead of directly using the RGB input?
Do you have any workaround other than using voxels to determine whether a pixel is in an overlap region? (I don't want to use voxels in stage 2; I just want to use the RGB image and features to render a novel view in stage 1.)
If the only way to determine whether a pixel is in the overlap region is via voxels built from the image features (as in your code, where mask_mems has shape [B,S,C,Z,Y,X] with B = 1, S = 6, and feat_mem after the mask reduction has shape [B,C,Z,Y,X] with B = 1), how can I project this feature back into an image to create a mask image like the one in your paper?
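For reference, the reduction over the camera axis with the shapes quoted above, followed by a projection back to an image-space mask, could look roughly like this. The shapes [B,S,C,Z,Y,X] and [B,C,Z,Y,X] are taken from the question; the sum-then-threshold reduction and the placeholder sample indices are assumptions, since the actual reduce and ray-sampling code is not shown.

```python
import numpy as np

# Shapes from the question: mask_mems is [B, S, C, Z, Y, X] with B=1, S=6;
# feat_mem after reducing over the camera axis S is [B, C, Z, Y, X].
B, S, C, Z, Y, X = 1, 6, 1, 4, 4, 4
H, W, P = 8, 8, 16
rng = np.random.default_rng(1)

mask_mems = rng.integers(0, 2, size=(B, S, C, Z, Y, X)).astype(np.float32)

# Count how many cameras mark each voxel (assumed reduction: sum over S).
feat_mem = mask_mems.sum(axis=1)            # [B, C, Z, Y, X]
overlap_vox = (feat_mem >= 2).astype(np.float32)

# To get an image-space mask, sample the voxel grid along each pixel's ray.
# Real indices would come from unprojecting pixel rays with the camera
# intrinsics/extrinsics; they are random placeholders in this sketch.
iz = rng.integers(0, Z, size=(H, W, P))
iy = rng.integers(0, Y, size=(H, W, P))
ix = rng.integers(0, X, size=(H, W, P))

samples = overlap_vox[0, 0, iz, iy, ix]     # [H, W, P]
mask_image = samples.max(axis=-1) > 0       # [H, W] overlap mask image
```

In a real pipeline the gather step would typically use trilinear sampling (e.g. `torch.nn.functional.grid_sample`) rather than integer indexing, so the resulting mask image is smooth.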
Could you please point me to any documents or papers you refer to for determining whether a pixel is in the overlap region?
Thank you for your time. Looking forward to hearing from you soon.
Hi. Thanks for your great work.
I'm trying to reproduce stage 1 of your model. I created a neural network that predicts the opacity, rotation, and scaling maps, based on the GPS-Gaussian model, and then fed them into Gaussian Splatting to train stage 1.
However, the training results are bad, as in the image below (rendered from a novel view):
How can I create a mask like the one mentioned in the paper to improve the training results? Could you please point me to the paper or related documents that use this technique?
Thank you so much, and I wish you a great day.