chensjtu / GaussianObject

GaussianObject: High-Quality 3D Object Reconstruction from Four Views with Gaussian Splatting (SIGGRAPH Asia 2024, TOG)

Help Needed for Generating High-Quality 3D Models #68

anekha commented 1 week ago

Dear Team,

Firstly, thank you for your incredible work on this paper. I am a graduate student implementing your method for an academic project, specifically to generate 3D models of jewelry (gold and diamond rings). I have been working on this for a few months, and while I greatly admire the technique, I am struggling to achieve realistic, high-quality results and would deeply appreciate your advice.

What I’ve Tried:

My full optimization parameters are listed below, and I’ve attached examples of my outputs for reference.

Despite these efforts, I haven’t been able to achieve realistic results. Given the nature of jewelry (reflective, detailed, and translucent), I wonder if I’m missing some crucial steps or adjustments.

Thank you so much for taking the time to help. Any advice would mean a lot, and I’d be happy to provide additional details if needed.

Optimization parameters

```python
optimization_params = {
    'max_num_splats': 500_000_000,  # Maximum number of splats during densification
    'iterations': 30_000,  # Total number of training iterations
    'position_lr_init': 0.00030,  # Initial learning rate for position optimization
    'position_lr_final': 0.0000016,  # Final learning rate for position optimization
    'position_lr_delay_mult': 0.01,  # Delay multiplier for the learning-rate schedule
    'position_lr_max_steps': 30_000,  # Maximum number of steps for position optimization
    'feature_lr': 0.0030,  # Learning rate for feature optimization
    'opacity_lr': 0.01,  # Learning rate for opacity optimization
    'scaling_lr': 0.005,  # Learning rate for scaling optimization
    'rotation_lr': 0.001,  # Learning rate for rotation optimization
    'percent_dense': 0.10,  # Fraction of scene extent below which Gaussians are cloned rather than split
    'lambda_dssim': 1.0,  # Weight for the D-SSIM loss term
    'lambda_silhouette': 0.02,  # Weight for the silhouette loss
    'densification_interval': 50,  # Interval between densification steps
    'opacity_reset_interval': 1000,  # Interval for resetting opacity
    'remove_outliers_interval': 2000,  # Interval for removing outliers
    'densify_from_iter': 500,  # Start densification after this iteration
    'densify_until_iter': 20000,  # Stop densification after this iteration
    'densify_grad_threshold': 0.00005,  # Gradient threshold for densification
    'start_sample_pseudo': 400000,  # Start sampling pseudo data (not used)
    'end_sample_pseudo': 1000000,  # End sampling pseudo data (not used)
    'sample_pseudo_interval': 10,  # Interval for pseudo-sampling (not used)
    'random_background': False,  # Whether to use a random background when rendering
    'pose_iterations': 20000,  # Number of iterations for pose optimization
}
```
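One thing I’m unsure about is `lambda_dssim = 1.0`: in vanilla 3DGS the photometric loss blends L1 and D-SSIM as (1 − λ)·L1 + λ·D-SSIM with λ = 0.2, so if GaussianObject follows the same convention, my setting would drop the L1 term entirely. Here is the blending I have in mind (a sketch; the function and argument names are mine, not from the repo):

```python
# Standard 3DGS-style photometric blending (vanilla 3DGS uses lambda = 0.2).
# Names are illustrative and not taken from the GaussianObject codebase.
def photometric_loss(l1_loss: float, dssim_loss: float, lambda_dssim: float = 0.2) -> float:
    # With lambda_dssim = 1.0 the L1 term vanishes entirely.
    return (1.0 - lambda_dssim) * l1_loss + lambda_dssim * dssim_loss
```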

https://github.com/user-attachments/assets/151bb57f-ff3f-45d6-a013-a06c8c5de3e6

https://github.com/user-attachments/assets/a87df08b-dcc3-4905-b3b1-131cdc1058a3

7rwang commented 6 days ago

I guess one possible reason is that you used MASt3R to predict the coarse point cloud. The quality of the predicted point cloud depends heavily on the images you provide: the more pixels in your images that belong to the object being reconstructed, the better the predicted point cloud will be.
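A quick way to check this, assuming you have binary object masks for your inputs (the paths and threshold below are just examples):

```python
# Fraction of pixels belonging to the object in each input image,
# assuming binary masks (white = object). Paths are examples only.
import numpy as np
from PIL import Image

def object_coverage(mask_path: str) -> float:
    """Return the fraction of mask pixels that belong to the object."""
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32) / 255.0
    return float((mask > 0.5).mean())

for path in ["masks/000.png", "masks/001.png", "masks/002.png", "masks/003.png"]:
    print(path, f"{object_coverage(path):.1%}")
```

If the object occupies only a small fraction of each image, cropping closer to the ring before running MASt3R may noticeably improve the coarse point cloud.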