Closed: bchretien closed this issue 7 months ago.
This seems like it would be a good feature to have, but I'm not sure how best to integrate it. Specifically, I'd like to avoid having to specify a parameter when loading a file to indicate whether it is an "anti-aliased" or "classic" splat. Ideally that information would be encoded into the file itself. Does the Nerfstudio code support that?
As for updating my viewer, it looks like it would be super easy. In fact, the authors of the mip-splatting work created a web viewer (based on mine) for that purpose, and it looks like they just made a small update to the vertex shader in `SplatMesh.js`:
```glsl
// compute the alpha compensation coefficient based on the determinant
float kernel_size = 0.1;
float det_0 = max(1e-6, cov2Dm[0][0] * cov2Dm[1][1] - cov2Dm[0][1] * cov2Dm[0][1]);
float det_1 = max(1e-6, (cov2Dm[0][0] + kernel_size) * (cov2Dm[1][1] + kernel_size) - cov2Dm[0][1] * cov2Dm[0][1]);
float coef = sqrt(det_0 / (det_1 + 1e-6) + 1e-6);
if (det_0 <= 1e-6 || det_1 <= 1e-6) {
    coef = 0.0;
}
cov2Dm[0][0] += kernel_size;
cov2Dm[1][1] += kernel_size;
vColor.a *= coef;
```
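To see what the shader math does, here is a transliteration of the mip-splatting coefficient into Python. This is only a sketch for sanity-checking values outside the shader; the function name and test covariances are made up for illustration, not part of any viewer code.

```python
import math

def mip_splatting_coef(c00, c01, c11, kernel_size=0.1):
    """Opacity compensation factor, following the mip-splatting shader
    snippet above (illustrative transliteration, not viewer code)."""
    det_0 = max(1e-6, c00 * c11 - c01 * c01)
    det_1 = max(1e-6, (c00 + kernel_size) * (c11 + kernel_size) - c01 * c01)
    coef = math.sqrt(det_0 / (det_1 + 1e-6) + 1e-6)
    if det_0 <= 1e-6 or det_1 <= 1e-6:
        coef = 0.0
    return coef

# A large, well-conditioned 2D covariance is barely attenuated...
print(mip_splatting_coef(100.0, 0.0, 100.0))  # close to 1.0
# ...while a tiny (sub-pixel) one is strongly dimmed.
print(mip_splatting_coef(0.01, 0.0, 0.01))
```

The intuition: the dilation by `kernel_size` mostly affects splats whose screen-space footprint is already tiny, so the ratio of determinants dims exactly those splats that would otherwise alias.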
@mkkellogg thanks for the quick response! Alas this is not encoded in the exported PLY file, and the format itself makes it impractical. Until we have #47 where such metadata would be easy to store and retrieve, I'm afraid we're stuck with a parameter that needs to be specified manually.
Hello, I want to fork this repo and add this code, but I don't know where to put it:
```glsl
// compute the alpha compensation coefficient based on the determinant
float kernel_size = 0.1;
float det_0 = max(1e-6, cov2Dm[0][0] * cov2Dm[1][1] - cov2Dm[0][1] * cov2Dm[0][1]);
float det_1 = max(1e-6, (cov2Dm[0][0] + kernel_size) * (cov2Dm[1][1] + kernel_size) - cov2Dm[0][1] * cov2Dm[0][1]);
float coef = sqrt(det_0 / (det_1 + 1e-6) + 1e-6);
if (det_0 <= 1e-6 || det_1 <= 1e-6) {
    coef = 0.0;
}
cov2Dm[0][0] += kernel_size;
cov2Dm[1][1] += kernel_size;
vColor.a *= coef;
```
Anyway, if we apply the antialiased method to all PLY files, might it break PLY files created by other software (Inria 3DGS, Luma, Polycam, OpenSplat, etc.)?
I am the author of the antialiasing mode of splatfacto. The code change needed to support the web viewer is small (but slightly different from mip-splatting).
https://github.com/nerfstudio-project/gsplat/blob/main/gsplat/_torch_impl.py#L188
Currently it is not possible to store any metadata in the PLY format. Any suggestions for how to move forward?
In the short term, it makes sense to have a keyboard hotkey to toggle between classic and antialiasing modes.
For my viewer, I think it makes sense to add a parameter to the `Viewer.addSplatScene()` and `Viewer.addSplatScenes()` functions to indicate the mode in which the scene should be rendered. How does that sound?
Sounds good to me. Let me know if you have any confusion about how to compute the opacity compensation factor; I can help double-check the code change you make.
@jb-ye: if @mkkellogg is OK with it, maybe you could provide a demo file as well? The backpack model from https://github.com/nerfstudio-project/gsplat/pull/140 makes for a really compelling argument for this feature, and it would make testing easier.
@jb-ye It would definitely be helpful if you could provide a demo file. Then I can take a stab at implementing the computation for the opacity compensation factor.
Will do so when I get time this week. @bchretien @mkkellogg
https://drive.google.com/file/d/19e0iAsoc9F26ilM4s6g0Y6n0n9-WRjV5/view?usp=drive_link Please check out this link for a sample ply asset.
Left: classic mode rendering of antialiased asset; Right: antialiased mode
@jb-ye Thanks! I'll try to make the update in the next couple of days and let you know when it's ready.
@jb-ye Would you be able to share the `cameras.json` (or equivalent parameters) for the above scene?
@mkkellogg What do you mean by `cameras.json`? Are they the training camera parameters? I assume they don't matter for rendering, right?
Well, I noticed that the standard focal-length calculation using the Three.js projection matrix produces sub-optimal results in terms of render quality (subjective, for sure), so I thought I'd play around with the parameters to see if I can get better results by matching the training cameras.
Just want to double-check the formula you implemented. Here is the reference implementation from the splatfacto/gsplat lib, where the blurring kernel is 0.3 pixels and the compensation is clamped instead of adding a tiny round-off eps. This is different from the one used by mip-splatting.
```python
det_orig = cov2d[..., 0, 0] * cov2d[..., 1, 1] - cov2d[..., 0, 1] * cov2d[..., 0, 1]
cov2d[..., 0, 0] = cov2d[..., 0, 0] + 0.3
cov2d[..., 1, 1] = cov2d[..., 1, 1] + 0.3
det_blur = cov2d[..., 0, 0] * cov2d[..., 1, 1] - cov2d[..., 0, 1] * cov2d[..., 0, 1]
compensation = torch.sqrt(torch.clamp(det_orig / det_blur, min=0))
```
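For readers without torch at hand, the same formula can be written for a single 2x2 covariance in plain Python. This is an illustrative sketch of the snippet above, not the gsplat library itself:

```python
import math

def gsplat_compensation(c00, c01, c11, blur=0.3):
    """Scalar version of the gsplat-style compensation: dilate the 2D
    covariance by a 0.3-pixel kernel and take sqrt(det ratio)."""
    det_orig = c00 * c11 - c01 * c01
    det_blur = (c00 + blur) * (c11 + blur) - c01 * c01
    # Clamp at 0 instead of adding a round-off eps (unlike mip-splatting)
    return math.sqrt(max(det_orig / det_blur, 0.0))

# For an isotropic covariance s*I, the factor reduces to s / (s + 0.3):
print(gsplat_compensation(2.0, 0.0, 2.0))  # ~ 2.0 / 2.3
```

The clamp matters for near-degenerate covariances where floating-point error can make the determinant ratio slightly negative; mip-splatting instead adds small epsilons inside the square root.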
Here is the training camera:

```json
{
    "w": 899,
    "h": 1600,
    "fl_x": 1337.7803526580215,
    "fl_y": 1338.3579272604447,
    "cx": 449.5,
    "cy": 800.0
}
```
Also spherical harmonics are used for rendering.
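For matching the viewer camera to the training camera discussed above, the standard pinhole relation converts the vertical focal length into the vertical field of view that a Three.js `PerspectiveCamera` expects (in degrees). This is a sketch of the general formula; how it would be wired into the viewer is left open:

```python
import math

def fov_y_degrees(fl_y, h):
    """Vertical FOV (degrees) from a pinhole focal length fl_y (pixels)
    and image height h (pixels): fov = 2 * atan(h / (2 * fl_y))."""
    return math.degrees(2.0 * math.atan(h / (2.0 * fl_y)))

# Using the training camera values posted above:
print(fov_y_degrees(1338.3579272604447, 1600))  # roughly 61.7 degrees
```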
Yep, that's the math I'm using. I got something working that I think looks pretty good:
I used this code in the shader:
float detOrig = cov2Dm[0][0] * cov2Dm[1][1] - cov2Dm[0][1] * cov2Dm[0][1];
cov2Dm[0][0] += 0.3;
cov2Dm[1][1] += 0.3;
float detBlur = cov2Dm[0][0] * cov2Dm[1][1] - cov2Dm[0][1] * cov2Dm[0][1];
float compensation = sqrt(max(detOrig / detBlur, 0.0));
vColor.a *= compensation;
Fantastic
Thank you for your help on this!
That is great news! Can't wait to test it!
My experiment with the Truck dataset.
Left = Splatfacto-big with classic mode; Right = Splatfacto-big with antialiased mode.
This is now officially supported in this release: https://github.com/mkkellogg/GaussianSplats3D/releases/tag/v0.3.6
@mkkellogg Thanks! If you want to use half precision to store the covariance, you will notice some quality loss in antialiasing mode. The fix: instead of saving the covariance in half precision, save its Cholesky decomposition in half precision and multiply it back to get the full-precision covariance at usage time.
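The suggestion above can be sketched in Python using the stdlib's half-precision `struct` format. This is an illustrative model of the storage scheme, not the viewer's actual code; the variable names and sample covariance are made up:

```python
import math
import struct

def to_half(x):
    """Round-trip a float through IEEE half precision ('e' format)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def chol2x2(a, b, c):
    """Lower-triangular Cholesky factor L of [[a, b], [b, c]],
    so that cov = L @ L.T."""
    l00 = math.sqrt(a)
    l10 = b / l00
    l11 = math.sqrt(c - l10 * l10)
    return l00, l10, l11

a, b, c = 4.0e-4, 1.0e-4, 3.0e-4  # a small splat's 2D covariance (made up)

# Option 1: store the covariance entries directly in half precision.
direct = (to_half(a), to_half(b), to_half(c))

# Option 2: store the Cholesky factor in half precision, multiply back
# to a full-precision covariance at usage time.
l00, l10, l11 = (to_half(v) for v in chol2x2(a, b, c))
recon = (l00 * l00, l00 * l10, l10 * l10 + l11 * l11)
```

One nice property of the Cholesky route: the reconstructed matrix is `L @ L.T` by construction, so it stays positive semi-definite (its determinant never goes negative under quantization), which keeps the compensation formula's determinants well-behaved for small splats.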
I get fantastic results with AA splats, but still some issues with highly contrasted models. From a "standard" point of view (close to the training images), the details are undoubtedly better. Left: with AA; right: no AA.
But when I move my camera further away, I get some weird white artifacts on the AA model (the non-AA model still looks bad, but doesn't have those artifacts).
Any idea about this problem? I guess it has something to do with transparency, since what is behind the fabric in my model is white (I mean the big splats "inside" the bag).
@gonzalle did you observe similar issue in nerfstudio viewer? I think it might relate to spherical harmonics not used in the web viewer.
Indeed! The nerfstudio viewer rendering was clean from any distance. Is there any trick around it, besides using less textured material?
Maybe try to reduce directional lights during capture, use more ambient lighting
@gonzalle Would you be willing to share your model? Maybe I can do some troubleshooting on my end.
Certainly, I just have to train my model again. I hope it'll be fast... give me a couple of hours :))
aa_weird.zip Mark, here is the file. As I had to re-train and edit it, I noticed that nerfstudio (when at low res) and SuperSplat both show the same issue. For me it's something to do with the big white splat behind the fabric texture and some kind of "mip mapping" issue...
You'll have to rotate the model to display the good face, then move further away from it to see the issue.
So I have also noticed this issue. I don't know exactly what is happening, but it does seem to be caused by splats that are much bigger than the other splats in the scene. I added a super hacky viewer parameter called `focalAdjustment` that I have discovered helps. Try something like this when you run the viewer:

```javascript
const viewer = new GaussianSplats3D.Viewer({
    'focalAdjustment': 3.0
});
```

The default value for `focalAdjustment` is 1.0; increasing it seems to make these kinds of artifacts less noticeable. I want to reiterate that this is a big ol' hack, and I don't know exactly why it helps :)
Great, I will try that. Thanks a lot! Gaussian Splatting, as a global technique and as it stands today, still has some issues. In any case, I firmly believe that such a technique, combined with a good web viewer, is a fantastic step forward! Keep up the fantastic work!
Hi!
First of all, thanks for the project, and kudos for the code quality and the comments (e.g. in `SplatMesh.js`). I was checking out your viewer after playing around with Nerfstudio. I enabled the new "antialiasing" version of their rasterizer (see motivation here and integration in Nerfstudio there), but the problem is that it requires explicit support in viewers down the line. To summarize, a compensation factor would need to be computed in the vertex shader and applied to the gaussian opacities. Would that feature be in the scope of this project? If so, I could provide a PR. Since rasterization is done differently in this viewer, I guess the compensation would need to be applied to `vColor.a` directly?