Open JunhyeongDoyle opened 1 week ago
Hi, I see a similar issue on a custom dataset (using no_prior, after 80k iterations): motions are blurry. In the last frames of the training segment there is also something that looks like 'ghosting' from earlier frames: you can see the hair of all three dancers move up a bit, as it was in some earlier frames.
The point_cloud.ply and other files are available here
Is this a problem with the dataset, a bug in the no_prior or training script, or my choice of hyper-parameters? This is the config I used:
{ "resolution": 2, "model": "ours_lite", "scaling_lr": 0.005, "trbfslinit": 2.0, "preprocesspoints": 0, "test_iteration": 25000, "densify": 20, "desicnt": 12, "duration": 20, "rgbfunction": "None", "loader": "technicolor", "rdpip": "train_ours_lite" }
Any ideas what could be causing these artifacts?
Hello, thank you for your amazing work on this project! I’ve been making good progress using the Techni full config for training on the Painter sequence. While the results are impressive for static backgrounds, I’m encountering an issue where dynamic objects (e.g., moving people) appear too blurry compared to the rest of the scene. I’ve experimented with the following, but I haven’t seen significant improvements:
- Adjusting the training iterations (increased up to 200%).
- Tweaking various config parameters.
- Applying different augmentation techniques.

The blurriness issue persists in all cases.
Question 1: Could you please suggest any tips or potential adjustments I could try to resolve this issue? Are there specific config parameters or training strategies you recommend to handle dynamic objects more effectively?
And I have one more question, about custom viewport rendering with camera extrinsic parameters.
I’d like to render a scene from a custom viewport by manually specifying the extrinsic parameters (X, Y, Z, Yaw, Pitch, Roll). I’m looking for guidance on which parts of the code I should modify to implement this feature.
Question 2: Where in the codebase should I start in order to modify the extrinsic parameters for custom viewport rendering? And how should I handle the camera's intrinsic parameters in this context: should I adjust the focal length, principal point, etc., or can they be left unchanged? Any advice or pointers would be greatly appreciated.
Thanks again for your support and for creating such an incredible project!
For rendering a custom viewport: this is some code we used to render a moving view.
If you want to modify the extrinsics or intrinsics, directly modify the projection matrix, tanfovx, etc. before they are fed to the rendering call.
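Not the repo's exact code, but a minimal sketch of that idea under 3DGS-style camera conventions (world-to-view and projection matrices stored transposed, tanfovx/tanfovy passed to the rasterizer); the Euler convention and all numeric values below are assumptions to adapt to your dataset:

```python
import math
import numpy as np
import torch

def world_to_view(x, y, z, yaw, pitch, roll):
    """Build a 4x4 world-to-view matrix from a camera position (X, Y, Z)
    and yaw/pitch/roll in radians (Z-Y-X Euler order assumed here)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    R = Rz @ Ry @ Rx                     # camera-to-world rotation
    C = np.array([x, y, z])              # camera center in world coordinates
    w2v = np.eye(4)
    w2v[:3, :3] = R.T                    # invert rotation
    w2v[:3, 3] = -R.T @ C                # invert translation
    return w2v

# Intrinsics: for a trained scene it is usually safest to keep the FoV of an
# existing training camera; only change focal length / principal point if you
# rebuild the projection matrix consistently with them.
fovx, fovy = 1.2, 0.7                    # radians, placeholder values
tanfovx, tanfovy = math.tan(0.5 * fovx), math.tan(0.5 * fovy)

w2v = torch.tensor(world_to_view(0.0, 0.5, 3.0, 0.2, 0.0, 0.0),
                   dtype=torch.float32).transpose(0, 1)
# In 3DGS-style code the full projection is world_view_transform @ projection
# (both transposed); swap these into the camera object before the render call.
```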
"duration": 20,
I think it's because this motion is large; the polynomial motion function is not enough to fit large motions. How many points are in the cloud for 50 frames? I know Dynamic3DGS-like per-frame training may be good for large motions.
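For intuition about why, here is a rough sketch of a cubic per-point motion model of that kind (the coefficient shapes and time normalization are illustrative, not the exact parameterization in the code):

```python
import torch

def poly_motion(xyz0, coeffs, t):
    """Cubic motion: x(t) = x0 + b1*t + b2*t^2 + b3*t^3, with t in [0, 1].
    xyz0: (N, 3) base positions; coeffs: (N, 3, 3) per-point coefficients
    (3 spatial dims x 3 polynomial orders)."""
    powers = torch.tensor([t, t * t, t ** 3])              # (3,)
    return xyz0 + (coeffs * powers.view(1, 1, 3)).sum(-1)  # (N, 3)

# A fast, large trajectory needs large coefficients that a smooth cubic
# (and the regularized optimizer) struggles to fit, so fast movers come out
# blurred; per-frame methods like Dynamic3DGS sidestep this by re-fitting
# positions at every frame.
```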
Hi @lizhan17, thanks for getting back so quickly. There are 233k points in the cloud. I will try training using Dynamic3DGaussians.
BTW, since you quoted the "duration": 20 part of my config, would you suggest I try changing that? And if so, would you suggest a higher or lower value?
We use 50 for all the datasets. Their motion is not large, but the viewing setup is not 360 degrees; that may be different from other datasets.
If you use 20, the initial temporal displacement is 1/20, so points are sparser in time than with the default 1/50. It's hard to say which is better without tuning the other parameters; ideally the model can still produce a good shape.
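To make the 1/duration spacing concrete, a small illustrative sketch (not the repo's exact initialization):

```python
import numpy as np

for duration in (20, 50):
    t_centers = np.arange(duration) / duration  # initial temporal centers
    print(duration, t_centers[:3], "spacing:", 1.0 / duration)
# duration=20 spaces points 0.05 apart in normalized time; duration=50 gives
# 0.02, so each point's temporal basis covers a narrower slice of the motion.
```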
Additionally, I have a question regarding a slight resolution discrepancy between the input image and the rendered output. Specifically, the original Painter data has a resolution of 2048x1088, but the rendered output often appears at a slightly different resolution, like 2043x1085.
Do you know what might be causing this issue?
You mean the raw image from Technicolor? The camera center is a float with many digits. As I remember, COLMAP's undistortion (or generating point clouds by creating a sparse model in COLMAP) introduces some loss because width and height are rounded to integers.
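If the few-pixel mismatch only matters for side-by-side comparison or metrics, one workaround (a sketch; the paths are placeholders) is to resize the ground truth to the rendered size:

```python
from PIL import Image

gt = Image.open("gt/painter_0000.png")           # 2048x1088 original
render = Image.open("renders/painter_0000.png")  # e.g. 2043x1085 after undistortion
if gt.size != render.size:
    # Resize (or center-crop) so PSNR/SSIM are computed on aligned pixels.
    gt = gt.resize(render.size, Image.BICUBIC)
```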