fy99925 opened 2 weeks ago
What parameters did you use?
Hi!
I'm actually using a derivative work of 3DGS that I modified a bit, but I think this is probably a common problem caused by some mechanism in 3DGS itself. I haven't figured it out, and it confuses me.
The parameters are basically the defaults for this work and have not been modified. They are as follows:
```python
import os

# ParamGroup is the 3DGS base class that turns these attributes into
# command-line flags (defined in the project's arguments module).

class ModelParams(ParamGroup):
    def __init__(self, parser, sentinel=False):
        self.sh_degree = 3
        self._source_path = ""
        self._model_path = ""
        self._images = "images"
        self._resolution = -1
        self._white_background = False
        self.data_device = "cuda"
        self.eval = False
        self.preload_img = True
        self.ncc_scale = 1.0
        self.multi_view_num = 8
        self.multi_view_max_angle = 30
        self.multi_view_min_dis = 0.01
        self.multi_view_max_dis = 1.5
        super().__init__(parser, "Loading Parameters", sentinel)

    def extract(self, args):
        g = super().extract(args)
        g.source_path = os.path.abspath(g.source_path)
        return g

class PipelineParams(ParamGroup):
    def __init__(self, parser):
        self.convert_SHs_python = False
        self.compute_cov3D_python = False
        self.debug = False
        super().__init__(parser, "Pipeline Parameters")

class OptimizationParams(ParamGroup):
    def __init__(self, parser):
        self.iterations = 30_000
        self.position_lr_init = 0.00016
        self.position_lr_final = 0.0000016
        self.position_lr_delay_mult = 0.01
        self.position_lr_max_steps = 30_000
        self.feature_lr = 0.0025
        self.opacity_lr = 0.05
        self.scaling_lr = 0.005
        self.rotation_lr = 0.001
        self.percent_dense = 0.001
        self.lambda_dssim = 0.2
        self.densification_interval = 100
        self.opacity_reset_interval = 3000
        self.densify_from_iter = 500
        self.densify_until_iter = 15_000
        self.densify_grad_threshold = 0.0002
        self.single_view_weight = 0.015
        self.single_view_weight_from_iter = 7000
        self.use_virtul_cam = False
        self.virtul_cam_prob = 0.5
        self.use_multi_view_trim = True
        self.multi_view_ncc_weight = 0.15
        self.multi_view_geo_weight = 0.03
        self.multi_view_weight_from_iter = 7000
        self.multi_view_patch_size = 3
        self.multi_view_sample_num = 102400
        self.multi_view_pixel_noise_th = 1.0
        self.opacity_cull_threshold = 0.005
        self.densify_abs_grad_threshold = 0.0008
        self.abs_split_radii2D_threshold = 20
        self.max_abs_split_points = 50_000
        self.max_all_points = 6_000_000
        self.exposure_compensation = False
        self.random_background = False
        super().__init__(parser, "Optimization Parameters")
```
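These groups subclass `ParamGroup`, which is not shown above. Below is a simplified sketch of the `ParamGroup` pattern from the original 3DGS `arguments/__init__.py` (a hedged reconstruction, not the exact code): every attribute becomes an argparse flag, and a leading underscore additionally registers a one-letter shorthand, which is where `-r` for `--resolution` comes from.

```python
# Hedged sketch of the 3DGS ParamGroup base class, not the exact code.
from argparse import ArgumentParser, Namespace

class ParamGroup:
    def __init__(self, parser, name, fill_none=False):
        # fill_none (passed as `sentinel` by ModelParams) is ignored here
        group = parser.add_argument_group(name)
        for key, value in vars(self).items():
            shorthand = key.startswith("_")
            key = key[1:] if shorthand else key
            # a leading "_" adds a one-letter shorthand flag as well
            names = ["--" + key] + (["-" + key[0]] if shorthand else [])
            if type(value) is bool:
                group.add_argument(*names, default=value, action="store_true")
            else:
                group.add_argument(*names, default=value, type=type(value))

    def extract(self, args):
        # copy only this group's parsed values onto a plain namespace
        g = Namespace()
        for k, v in vars(args).items():
            if k in vars(self) or ("_" + k) in vars(self):
                setattr(g, k, v)
        return g
```

Under this pattern, `self._resolution = -1` in `ModelParams` is what makes both `--resolution` and the `-r` shorthand available on the command line.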
If you use the "-r 1" parameter in train.py, on my side it always does something random. If I set it to something like -r 2400, it tends to work.
Oh, I misunderstood. Okay, I'll try that. I did use -r 1 to train.
Well, I used -r 5605 (the width of the images) for training, and the results were about the same as with -r 1: both blurrier than training on the images automatically downscaled to 1.6k.
This is confusing to me, but thank you for your help.
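For context, the resolution selection in the original 3DGS image loader (`utils/camera_utils.py`) behaves roughly like the sketch below (simplified and hedged from the released code, not the exact implementation): with the default `-r -1`, images wider than 1600 px are globally downscaled to a 1600 px width, while `-r 1` keeps full resolution and any other value is treated as a target width.

```python
# Simplified sketch of 3DGS resolution selection (hedged, not the exact
# loader code). `r` is the value passed via -r / --resolution.
def pick_resolution(orig_w, orig_h, r):
    if r in (1, 2, 4, 8):
        # -r 1/2/4/8: divide both dimensions by r (so -r 1 is full size)
        return round(orig_w / r), round(orig_h / r)
    if r == -1:
        # default: downscale so the width is capped at 1600 px
        down = orig_w / 1600 if orig_w > 1600 else 1
    else:
        # any other value is treated as a target width in pixels
        down = orig_w / r
    return int(orig_w / down), int(orig_h / down)
```

Under this logic, `-r 1` and `-r 5605` on a 5605-px-wide image both train at full resolution, which would explain why the two runs look about the same; the sharper 1.6k run corresponds to the default `-r -1` path.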
I used an A100 on a server to train on the original-resolution images (almost 5600×3600) with all parameters unchanged for 30,000 iterations. The result is blurrier than training on the images automatically downscaled to 1.6k, which is strange, since I would think higher-resolution images should lead to better results.
I'm wondering about this: what is the effect of high-resolution images on Gaussian Splatting, and how can I make better use of high-resolution images in GS?