Closed by darthandvader 3 months ago
Hi, thanks for the attention. The rendering number is based on the training/testing split. We follow EndoSurf to obtain the dataset split. For example, we divide the pulling and cutting datasets into an 8:1 training/testing split. You can find the detailed split strategy in the `EndoNeRF_Dataset` function of `scene/endo_loader.py`.
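For intuition, an 8:1 split like the one described above can be sketched as follows. This is only an illustration of the idea (holding out every ninth frame), not the authoritative logic; the exact strategy lives in `EndoNeRF_Dataset` in `scene/endo_loader.py`, and the frame count here is made up.

```python
import numpy as np

def split_frames(num_frames: int, test_every: int = 9):
    """Illustrative 8:1 train/test split: every `test_every`-th frame
    is held out for testing. A sketch only, not the repo's exact code."""
    indices = np.arange(num_frames)
    test_idx = indices[::test_every]                       # 1 of every 9 frames
    train_idx = indices[indices % test_every != 0]         # remaining 8 of 9
    return train_idx, test_idx

# Hypothetical sequence of 63 frames -> 56 train / 7 test (an 8:1 ratio)
train_idx, test_idx = split_frames(63)
print(len(train_idx), len(test_idx))
```

With this scheme the number of rendered test views is roughly one ninth of the total frames, which is why it differs from the number of training images.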
Thanks for your reply! I'm also trying to figure out whether you've ever tried converting the final SHs back to RGB. Since the final SH tensor is n×16×3 and the SH2RGB function is linear, applying it would only yield 16×3 values per Gaussian rather than a single color.
Hi, the final SH tensor (n×16×3) actually stores features (the combination of the base color and the higher-order SH coefficients), and thus cannot be directly converted to RGB as a whole. Its main role is to participate in the rendering process to achieve view-dependent color modeling (see `render.py`).
If you want the pure RGB of the 3D Gaussians, you can take the first of the 16 coefficients (the view-independent DC term, giving an n×3 tensor) and convert that to color.
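The DC-to-color step above can be sketched as below. This is a minimal NumPy sketch assuming the standard SH2RGB convention used in 3D Gaussian Splatting codebases (an affine map by the degree-0 SH basis constant); the function name and clamping here are illustrative, not taken from this repository.

```python
import numpy as np

# Degree-0 spherical-harmonic basis constant: Y_0^0 = 1 / (2 * sqrt(pi))
C0 = 0.28209479177387814

def sh_dc_to_rgb(features: np.ndarray) -> np.ndarray:
    """Convert the DC (degree-0) SH coefficient of each Gaussian to RGB.

    `features` has shape (n, 16, 3); only features[:, 0, :] (the
    view-independent DC term) maps directly to a color. The remaining
    15 coefficients encode view-dependent variation and are ignored.
    """
    dc = features[:, 0, :]             # (n, 3) DC coefficients
    rgb = dc * C0 + 0.5                # standard SH2RGB for the DC term
    return np.clip(rgb, 0.0, 1.0)      # clamp to a valid color range

# A zero DC coefficient maps to mid-gray (0.5, 0.5, 0.5)
feats = np.zeros((4, 16, 3))
print(sh_dc_to_rgb(feats)[0])
```

Note this discards the view-dependent part of the appearance, so the result is the Gaussians' base color only.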
Hi, I found that the rendering number is based on the camera views from COLMAP, but it does not match the number of training images. I wonder if you generated camera views for each frame or did a train/test split somewhere. Thank you!