Closed 78ij closed 1 year ago
I think you want to use the 360.gin config as a reference for configurations, not llff_256.gin. The LLFF scenes use NDC parameterization, and it looks like your code is using that data loader as well, which is definitely not what you want.
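For readers hitting the same issue, a minimal set of gin overrides for this switch might look like the sketch below. This is an assumption based on multinerf's shipped configs, where `llff_256.gin` sets `Config.forward_facing = True` to enable the NDC parameterization; I am assuming that leaving it False (its default in `360.gin`-style setups) is what "disabling NDC" amounts to here:

```
# Hypothetical sketch, not an official config.
Config.dataset_loader = 'llff'      # the scene is stored in LLFF format...
Config.forward_facing = False       # ...but do NOT use the NDC parameterization
Config.near = 0.2                   # metric near/far instead of NDC-style bounds
Config.far = 1e6
Model.raydist_fn = @jnp.reciprocal  # sample unbounded scenes uniformly in disparity
```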
Great! After changing the config (mainly the near and far parameters) and disabling the NDC convention, I get a fairly good result after only 80,000 iterations.

Below is the config I am currently using; anyone with the same issue can take it as a reference:
```
Config.dataset_loader = 'llff'
Config.batching = 'single_image'
Config.near = 0.2
Config.far = 1e6
Config.factor = 4
Config.batch_size = 256
Config.eval_render_interval = 5
Config.render_chunk_size = 256
Config.compute_normal_metrics = False
Config.data_loss_type = 'mse'
Config.distortion_loss_mult = 0.0
Config.orientation_loss_mult = 0.1
Config.orientation_loss_target = 'normals_pred'
Config.predicted_normal_loss_mult = 3e-4
Config.orientation_coarse_loss_mult = 0.01
Config.predicted_normal_coarse_loss_mult = 3e-5
Config.interlevel_loss_mult = 0.0
Config.data_coarse_loss_mult = 0.1
Config.adam_eps = 1e-8
Model.raydist_fn = @jnp.reciprocal
Model.opaque_background = True
Model.num_levels = 2
Model.single_mlp = True
Model.num_prop_samples = 128 # This needs to be set despite single_mlp = True.
Model.num_nerf_samples = 128
Model.anneal_slope = 0.
Model.dilation_multiplier = 0.
Model.dilation_bias = 0.
Model.single_jitter = False
Model.resample_padding = 0.01
PropMLP.warp_fn = @coord.contract
PropMLP.net_depth = 4
PropMLP.net_width = 256
NerfMLP.warp_fn = @coord.contract
NerfMLP.net_depth = 8
NerfMLP.net_width = 256
NerfMLP.net_depth_viewdirs = 8
NerfMLP.basis_shape = 'octahedron'
NerfMLP.basis_subdivisions = 1
NerfMLP.disable_density_normals = False
NerfMLP.enable_pred_normals = True
NerfMLP.use_directional_enc = True
NerfMLP.use_reflections = True
NerfMLP.deg_view = 5
NerfMLP.enable_pred_roughness = True
NerfMLP.use_diffuse_color = True
NerfMLP.use_specular_tint = True
NerfMLP.use_n_dot_v = True
NerfMLP.bottleneck_width = 128
NerfMLP.density_bias = 0.5
NerfMLP.max_deg_point = 16
```
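A side note on `Model.raydist_fn = @jnp.reciprocal` in the config above: with this warp, samples that are uniform in the warped space are uniform in inverse depth (disparity), which is what makes a huge `far = 1e6` workable for an unbounded scene. A rough sketch of the mapping, using plain NumPy as a stand-in for `jnp.reciprocal` (the linear-in-warped-space interpolation is my assumption about how the warp function is consumed, not code from the repo):

```python
import numpy as np

near, far = 0.2, 1e6  # values from the config above
fn = np.reciprocal    # stand-in for jnp.reciprocal

# s in [0, 1] is the normalized sample coordinate along a ray.
s = np.linspace(0.0, 1.0, 5)

# Place samples uniformly in fn-space between fn(near) and fn(far),
# then map back to metric distance t along the ray.
t = 1.0 / (fn(near) * (1.0 - s) + fn(far) * s)

print(t)  # t[0] == near, t[-1] == far; spacing is uniform in disparity
```

The practical effect is that most samples land near the camera and the interval up to `far` is covered with only a few distant samples, instead of wasting uniform metric samples on empty space.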
I am closing this issue. Thanks for the quick reply.
@78ij What command did you use to run Ref-NeRF?
```
python -m train \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.checkpoint_dir = '${DATA_DIR}/checkpoints'" \
  --logtostderr
```
then
```
python -m render \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.checkpoint_dir = '${DATA_DIR}/checkpoints'" \
  --gin_bindings="Config.render_dir = '${DATA_DIR}/render'" \
  --gin_bindings="Config.render_path = True" \
  --gin_bindings="Config.render_path_frames = 480" \
  --gin_bindings="Config.render_video_fps = 60" \
  --logtostderr
```
as written in the README?
Sorry for opening an issue again. I am currently training Ref-NeRF on the released real captured dataset (more precisely, on the 'gardenspheres' scene). I discovered it is in LLFF format, so I modified the config blender-refnerf.gin, disabled the normal metric calculation, and copied some parameters from llff_256.gin; the final config is as follows. However, the outcome of the training is blurry:

![image](https://user-images.githubusercontent.com/13299678/191878351-5f8e1e69-a581-4183-82b8-49739745cc05.png)
Could you please correct my training config, or release the config used to train the real-captured dataset? Many thanks!